Mar 20 06:49:26 crc systemd[1]: Starting Kubernetes Kubelet...
Mar 20 06:49:26 crc restorecon[4810]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 20 06:49:26 crc restorecon[4810]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 20 06:49:26 crc restorecon[4810]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc 
restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:26 crc restorecon[4810]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc 
restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 
06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 
crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 
06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 20 06:49:27 crc 
restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc 
restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc 
restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 
20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 
crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc 
restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc 
restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc 
restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc 
restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 06:49:27 crc 
restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 20 06:49:27 crc restorecon[4810]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Mar 20 06:49:28 crc kubenswrapper[5136]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 06:49:28 crc kubenswrapper[5136]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 20 06:49:28 crc kubenswrapper[5136]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 06:49:28 crc kubenswrapper[5136]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 20 06:49:28 crc kubenswrapper[5136]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 20 06:49:28 crc kubenswrapper[5136]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.162268 5136 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170440 5136 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170487 5136 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170508 5136 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170522 5136 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170533 5136 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170541 5136 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170553 5136 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170563 5136 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170573 5136 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170582 5136 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170590 5136 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170599 5136 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170607 5136 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170615 5136 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170622 5136 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170630 5136 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170638 5136 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170656 5136 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170666 5136 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170674 5136 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170683 5136 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170691 5136 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170701 5136 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170710 5136 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170718 5136 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170727 5136 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170734 5136 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170742 5136 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170750 5136 feature_gate.go:330] unrecognized feature gate: Example
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170758 5136 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170765 5136 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170773 5136 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170780 5136 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170788 5136 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170796 5136 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170803 5136 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170842 5136 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170851 5136 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170859 5136 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170869 5136 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170881 5136 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170891 5136 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170900 5136 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170909 5136 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170919 5136 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170932 5136 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170942 5136 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170952 5136 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170961 5136 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170972 5136 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170982 5136 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170996 5136 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171011 5136 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171022 5136 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171032 5136 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171045 5136 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171054 5136 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171065 5136 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171074 5136 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171083 5136 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171091 5136 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171099 5136 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171107 5136 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171115 5136 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171124 5136 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171133 5136 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171142 5136 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171150 5136 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171159 5136 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171167 5136 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171176 5136 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172186 5136 flags.go:64] FLAG: --address="0.0.0.0"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172214 5136 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172234 5136 flags.go:64] FLAG: --anonymous-auth="true"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172247 5136 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172261 5136 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172273 5136 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172295 5136 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172309 5136 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172321 5136 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172332 5136 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172347 5136 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172359 5136 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172370 5136 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172383 5136 flags.go:64] FLAG: --cgroup-root=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172392 5136 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172401 5136 flags.go:64] FLAG: --client-ca-file=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172410 5136 flags.go:64] FLAG: --cloud-config=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172418 5136 flags.go:64] FLAG: --cloud-provider=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172427 5136 flags.go:64] FLAG: --cluster-dns="[]"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172437 5136 flags.go:64] FLAG: --cluster-domain=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172446 5136 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172455 5136 flags.go:64] FLAG: --config-dir=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172463 5136 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172473 5136 flags.go:64] FLAG: --container-log-max-files="5"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172483 5136 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172492 5136 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172502 5136 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172511 5136 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172520 5136 flags.go:64] FLAG: --contention-profiling="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172528 5136 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172537 5136 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172547 5136 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172556 5136 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172567 5136 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172577 5136 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172586 5136 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172595 5136 flags.go:64] FLAG: --enable-load-reader="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172605 5136 flags.go:64] FLAG: --enable-server="true"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172614 5136 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172627 5136 flags.go:64] FLAG: --event-burst="100"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172636 5136 flags.go:64] FLAG: --event-qps="50"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172645 5136 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172654 5136 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172663 5136 flags.go:64] FLAG: --eviction-hard=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172673 5136 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172682 5136 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172692 5136 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172702 5136 flags.go:64] FLAG: --eviction-soft=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172711 5136 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172719 5136 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172728 5136 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172737 5136 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172748 5136 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172757 5136 flags.go:64] FLAG: --fail-swap-on="true"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172766 5136 flags.go:64] FLAG: --feature-gates=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172776 5136 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172785 5136 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172795 5136 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172804 5136 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172846 5136 flags.go:64] FLAG: --healthz-port="10248"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172856 5136 flags.go:64] FLAG: --help="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172865 5136 flags.go:64] FLAG: --hostname-override=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172873 5136 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172886 5136 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172899 5136 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172910 5136 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172920 5136 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172932 5136 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172943 5136 flags.go:64] FLAG: --image-service-endpoint=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172954 5136 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172965 5136 flags.go:64] FLAG: --kube-api-burst="100"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172976 5136 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172989 5136 flags.go:64] FLAG: --kube-api-qps="50"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173000 5136 flags.go:64] FLAG: --kube-reserved=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173011 5136 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173022 5136 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173033 5136 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173043 5136 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173054 5136 flags.go:64] FLAG: --lock-file=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173063 5136 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173072 5136 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173081 5136 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173096 5136 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173105 5136 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173114 5136 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173123 5136 flags.go:64] FLAG: --logging-format="text"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173132 5136 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173141 5136 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173150 5136 flags.go:64] FLAG: --manifest-url=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173159 5136 flags.go:64] FLAG: --manifest-url-header=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173170 5136 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173179 5136 flags.go:64] FLAG: --max-open-files="1000000"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173190 5136 flags.go:64] FLAG: --max-pods="110"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173199 5136 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173208 5136 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173218 5136 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173226 5136 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173235 5136 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173244 5136 flags.go:64] FLAG: --node-ip="192.168.126.11"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173254 5136 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173273 5136 flags.go:64] FLAG: --node-status-max-images="50"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173282 5136 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173291 5136 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173300 5136 flags.go:64] FLAG: --pod-cidr=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173308 5136 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173323 5136 flags.go:64] FLAG: --pod-manifest-path=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173332 5136 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173341 5136 flags.go:64] FLAG: --pods-per-core="0"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173350 5136 flags.go:64] FLAG: --port="10250"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173359 5136 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173370 5136 flags.go:64] FLAG: --provider-id=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173379 5136 flags.go:64] FLAG: --qos-reserved=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173387 5136 flags.go:64] FLAG: --read-only-port="10255"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173396 5136 flags.go:64] FLAG: --register-node="true"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173405 5136 flags.go:64] FLAG: --register-schedulable="true"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173414 5136 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173428 5136 flags.go:64] FLAG: --registry-burst="10"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173438 5136 flags.go:64] FLAG: --registry-qps="5"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173448 5136 flags.go:64] FLAG: --reserved-cpus=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173456 5136 flags.go:64] FLAG: --reserved-memory=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173468 5136 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173477 5136 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173486 5136 flags.go:64] FLAG: --rotate-certificates="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173495 5136 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173503 5136 flags.go:64] FLAG: --runonce="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173512 5136 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173521 5136 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173531 5136 flags.go:64] FLAG: --seccomp-default="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173539 5136 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173548 5136 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173558 5136 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173567 5136 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173576 5136 flags.go:64] FLAG: --storage-driver-password="root"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173585 5136 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173594 5136 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173604 5136 flags.go:64] FLAG: --storage-driver-user="root"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173613 5136 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173623 5136 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173632 5136 flags.go:64] FLAG: --system-cgroups=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173640 5136 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173691 5136 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173702 5136 flags.go:64] FLAG: --tls-cert-file=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173711 5136 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173722 5136 flags.go:64] FLAG: --tls-min-version=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173730 5136 flags.go:64] FLAG: --tls-private-key-file=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173739 5136 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173748 5136 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173757 5136 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173765 5136 flags.go:64] FLAG: --v="2"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173777 5136 flags.go:64] FLAG: --version="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173790 5136 flags.go:64] FLAG: --vmodule=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173803 5136 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173846 5136 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174078 5136 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174094 5136 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174105 5136 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174116 5136 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174127 5136 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174137 5136 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174148 5136 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174156 5136 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174166 5136 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174176 5136 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174186 5136 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174194 5136 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174201 5136 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174209 5136 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174217 5136 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174225 5136 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174268 5136 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174276 5136 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174284 5136 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174293 5136 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174301 5136 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174309 5136 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174316 5136 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174324 5136 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174332 5136 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174339 5136 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174347 5136 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174357 5136 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174367 5136 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174375 5136 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174384 5136 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174391 5136 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174399 5136 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174407 5136 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174415 5136 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174423 5136 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174431 5136 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174439 5136 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174451 5136 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174461 5136 feature_gate.go:330] unrecognized feature gate: Example
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174469 5136 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174477 5136 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174484 5136 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174492 5136 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174499 5136 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174507 5136 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174515 5136 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174525 5136 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174534 5136 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174542 5136 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174551 5136 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174560 5136 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174570 5136 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174582 5136 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174592 5136 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174601 5136 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174611 5136 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174622 5136 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174632 5136 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174642 5136 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174650 5136 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174658 5136 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174666 5136 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174674 5136 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174681 5136 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174691 5136 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174701 5136 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174709 5136 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174717 5136 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174725 5136 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174733 5136 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.174747 5136 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.186055 5136 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.186105 5136 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186289 5136 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186311 5136 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186321 5136 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186331 5136 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186341 5136 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186350 5136 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186360 5136 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186369 5136 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186383 5136 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186397 5136 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186409 5136 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186420 5136 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186430 5136 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186440 5136 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186450 5136 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186459 5136 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186469 5136 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186478 5136 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186487 5136 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186497 5136 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186506 5136 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186516 5136 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186525 5136 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186535 5136
feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186544 5136 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186554 5136 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186563 5136 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186573 5136 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186582 5136 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186593 5136 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186603 5136 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186613 5136 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186625 5136 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186636 5136 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186649 5136 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186660 5136 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186670 5136 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186680 5136 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186690 5136 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186699 5136 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186709 5136 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186719 5136 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186731 5136 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186744 5136 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186945 5136 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186957 5136 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186968 5136 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186978 5136 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186988 5136 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186996 5136 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187007 5136 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187017 5136 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187028 5136 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187037 5136 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187089 5136 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187105 5136 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187116 5136 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187125 5136 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187135 5136 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187144 5136 feature_gate.go:330] unrecognized feature gate: Example Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187153 5136 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187164 5136 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187174 5136 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187185 5136 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187195 5136 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187206 5136 feature_gate.go:330] 
unrecognized feature gate: AutomatedEtcdBackup Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187217 5136 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187227 5136 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187237 5136 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187247 5136 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187259 5136 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.187276 5136 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187546 5136 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187565 5136 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187576 5136 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187586 5136 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187595 5136 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187604 
5136 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187613 5136 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187623 5136 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187633 5136 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187645 5136 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187655 5136 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187668 5136 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187682 5136 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187692 5136 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187703 5136 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187713 5136 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187723 5136 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187733 5136 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187743 5136 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187755 5136 feature_gate.go:330] unrecognized 
feature gate: InsightsConfigAPI Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187764 5136 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187774 5136 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187784 5136 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187794 5136 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187805 5136 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187847 5136 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187857 5136 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187867 5136 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187876 5136 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187886 5136 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187895 5136 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187908 5136 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187921 5136 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187931 5136 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187941 5136 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187951 5136 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187959 5136 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187967 5136 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187975 5136 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187984 5136 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187994 5136 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188003 5136 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188012 5136 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188020 5136 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188030 5136 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188043 5136 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 20 06:49:28 crc kubenswrapper[5136]: 
W0320 06:49:28.188053 5136 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188062 5136 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188071 5136 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188080 5136 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188090 5136 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188100 5136 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188110 5136 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188120 5136 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188129 5136 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188142 5136 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188153 5136 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188163 5136 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188175 5136 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188187 5136 feature_gate.go:330] unrecognized feature gate: Example Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188199 5136 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188210 5136 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188220 5136 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188231 5136 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188241 5136 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188251 5136 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188264 5136 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188273 5136 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188283 5136 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188296 5136 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188310 5136 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.188326 5136 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.188716 5136 server.go:940] "Client rotation is on, will bootstrap in background" Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.194071 5136 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.198952 5136 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.199099 5136 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.200884 5136 server.go:997] "Starting client certificate rotation" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.200923 5136 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.201098 5136 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.228809 5136 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.230656 5136 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.231950 5136 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.251887 5136 log.go:25] "Validated CRI v1 runtime API" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.289686 5136 log.go:25] "Validated CRI v1 image API" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.291724 5136 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.297496 5136 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-03-20-06-40-52-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.297528 5136 fs.go:134] 
Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.320848 5136 manager.go:217] Machine: {Timestamp:2026-03-20 06:49:28.316579548 +0000 UTC m=+0.575890719 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2 BootID:35df81f9-549e-4466-8b52-0d5376d2ac8e Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex 
MacAddress:fa:16:3e:9e:e9:df Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:9e:e9:df Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:25:03:3b Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:d2:45:a7 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:1b:ff:08 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:6d:05:a3 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:6b:a6:cf Speed:-1 Mtu:1496} {Name:ens7.44 MacAddress:52:54:00:89:18:28 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:3e:48:45:c2:d8:3a Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:2a:cf:a3:17:2b:68 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 
BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.321122 5136 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.321289 5136 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.322523 5136 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.322742 5136 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.322787 5136 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSR
eserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.323051 5136 topology_manager.go:138] "Creating topology manager with none policy" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.323065 5136 container_manager_linux.go:303] "Creating device plugin manager" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.323504 5136 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.323535 5136 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.323733 5136 state_mem.go:36] "Initialized new in-memory state store" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.323907 5136 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.327365 5136 kubelet.go:418] "Attempting to sync node with API server" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.327388 5136 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.327409 5136 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.327422 5136 kubelet.go:324] "Adding apiserver pod source" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.327434 5136 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 
06:49:28.331683 5136 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.333017 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.333093 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.333160 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.333256 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.334310 5136 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.336883 5136 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.338106 5136 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.338134 5136 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.338144 5136 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.338154 5136 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.338170 5136 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.338179 5136 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.338189 5136 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.338206 5136 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.338218 5136 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.338227 5136 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.338240 5136 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.338249 5136 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.339041 5136 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.339523 5136 server.go:1280] "Started kubelet" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.340551 5136 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.340571 5136 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.340977 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.341123 5136 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 20 06:49:28 crc systemd[1]: Started Kubernetes Kubelet. Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.342652 5136 server.go:460] "Adding debug handlers to kubelet server" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.343193 5136 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.343298 5136 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.343733 5136 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.343745 5136 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.343854 5136 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.343872 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:49:28 crc 
kubenswrapper[5136]: E0320 06:49:28.344356 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="200ms" Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.344426 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.344480 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.344855 5136 factory.go:55] Registering systemd factory Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.344870 5136 factory.go:221] Registration of the systemd container factory successfully Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.345183 5136 factory.go:153] Registering CRI-O factory Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.345194 5136 factory.go:221] Registration of the crio container factory successfully Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.345267 5136 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.345289 5136 factory.go:103] Registering Raw factory Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 
06:49:28.345305 5136 manager.go:1196] Started watching for new ooms in manager Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.346782 5136 manager.go:319] Starting recovery of all containers Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.347380 5136 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189e79ee7731f22a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,LastTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362649 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362717 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362736 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" 
seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362752 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362767 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362783 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362798 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362832 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362850 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362870 
5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362885 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362899 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362914 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362934 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362949 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362967 5136 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362985 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363000 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363014 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363060 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363076 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363090 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363104 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363117 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363130 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363146 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363186 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363202 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363215 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363226 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363288 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363318 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363351 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363365 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365046 5136 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365102 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365123 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365139 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365156 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365169 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365183 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365197 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365209 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365225 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365241 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365256 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365270 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365283 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365298 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365313 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365325 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365339 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" 
seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365352 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365376 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365392 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365407 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365423 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365438 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 
06:49:28.365451 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365464 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365475 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365488 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365501 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365515 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365529 5136 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365542 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365554 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365568 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365579 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365592 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365605 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365616 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365632 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365644 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365656 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365669 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365682 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" 
seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365694 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365706 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365719 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365734 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365746 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365758 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 
06:49:28.365772 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365785 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365798 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365831 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365845 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365857 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365870 5136 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365882 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365896 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365908 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365922 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365940 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365952 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365965 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365978 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365996 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366009 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366022 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366039 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366068 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366086 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366105 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366126 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366145 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366165 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366185 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366202 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366216 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366230 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366246 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366261 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" 
seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366310 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366324 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366336 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366349 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366361 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366372 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 
06:49:28.366386 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366398 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366413 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366439 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366451 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366464 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366477 5136 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366489 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366503 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366516 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366529 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366541 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366552 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366566 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366578 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366590 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366604 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366589 5136 manager.go:324] Recovery completed Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366622 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367229 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367273 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367298 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367320 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367339 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367358 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367376 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367395 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367414 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367434 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367453 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367472 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367491 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367509 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367526 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367546 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367566 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367586 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367609 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" 
seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367694 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367713 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367732 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367751 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367771 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367789 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: 
I0320 06:49:28.367808 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367859 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367879 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367896 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367913 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367928 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367941 5136 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367954 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367969 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367983 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367997 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368009 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368024 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368042 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368061 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368078 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368098 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368118 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368137 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368153 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368170 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368188 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368205 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368222 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368239 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368259 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368279 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368298 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368314 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368330 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368347 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368365 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368384 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368403 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368420 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368439 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368454 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368467 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368483 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368502 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368522 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368540 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368557 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" 
seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368574 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368591 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368608 5136 reconstruct.go:97] "Volume reconstruction finished" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368619 5136 reconciler.go:26] "Reconciler: start to sync state" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.376196 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.377663 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.377698 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.377711 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.378662 5136 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.378684 5136 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.378703 5136 state_mem.go:36] "Initialized new in-memory state store" Mar 20 06:49:28 crc 
kubenswrapper[5136]: I0320 06:49:28.391190 5136 policy_none.go:49] "None policy: Start" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.392210 5136 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.392239 5136 state_mem.go:35] "Initializing new in-memory state store" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.393413 5136 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.395279 5136 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.395351 5136 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.395400 5136 kubelet.go:2335] "Starting kubelet main sync loop" Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.395502 5136 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.397479 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.397528 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.444270 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node 
\"crc\" not found" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.464399 5136 manager.go:334] "Starting Device Plugin manager" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.464452 5136 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.464467 5136 server.go:79] "Starting device plugin registration server" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.464943 5136 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.464961 5136 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.465229 5136 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.465317 5136 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.465329 5136 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.476708 5136 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.495977 5136 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.496064 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.499580 5136 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.499627 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.499638 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.499839 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.501002 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.501047 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.502087 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.502118 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.502130 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.502295 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.502769 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.502803 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.502853 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.502874 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.502887 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.503324 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.503348 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.503358 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.503479 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.503701 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.503778 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.506290 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.506371 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.506294 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.506394 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.506404 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.506381 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.506373 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.506553 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.506562 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.506637 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:28 crc 
kubenswrapper[5136]: I0320 06:49:28.506802 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.506843 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.507276 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.507303 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.507316 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.507403 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.507425 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.507434 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.507497 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.507520 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.508375 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.508404 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.508415 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.544906 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="400ms" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.566010 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.566942 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.566970 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.566979 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.567002 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.567317 5136 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.570984 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571018 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571044 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571067 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571087 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571109 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571128 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571147 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571168 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571189 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571214 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571238 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571274 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571302 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571337 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672050 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672106 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672127 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672149 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672169 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672194 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672209 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672222 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672237 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672252 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672267 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672275 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672325 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672346 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672361 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672282 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672397 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672449 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672451 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672467 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672477 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672488 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672511 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672529 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672543 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672587 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672604 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672626 
5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672643 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672694 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.767486 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.769273 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.769322 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.769357 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.769384 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.769993 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.836344 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.842219 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.870625 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.876106 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-0a193834d6eba9c791a237b787f15bba0b736e0a6ee749057b72eea9884ee605 WatchSource:0}: Error finding container 0a193834d6eba9c791a237b787f15bba0b736e0a6ee749057b72eea9884ee605: Status 404 returned error can't find the container with id 0a193834d6eba9c791a237b787f15bba0b736e0a6ee749057b72eea9884ee605 Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.880617 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-40a72a96dbb90b4ee768105313c2e8ab551127e383ab23c934ef1b2b3edcf3c6 WatchSource:0}: Error finding container 40a72a96dbb90b4ee768105313c2e8ab551127e383ab23c934ef1b2b3edcf3c6: Status 404 returned error can't find the container with id 40a72a96dbb90b4ee768105313c2e8ab551127e383ab23c934ef1b2b3edcf3c6 Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.892799 5136 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-517f26af0658ddb977bdc7e5b325c63e73d20f866b2d48b8b544045f0bf011a8 WatchSource:0}: Error finding container 517f26af0658ddb977bdc7e5b325c63e73d20f866b2d48b8b544045f0bf011a8: Status 404 returned error can't find the container with id 517f26af0658ddb977bdc7e5b325c63e73d20f866b2d48b8b544045f0bf011a8 Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.896052 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.904053 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.908400 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-969f650037db2a4ec2162d83b339cfdaae270274c907256ee9c5d530ad3bd25f WatchSource:0}: Error finding container 969f650037db2a4ec2162d83b339cfdaae270274c907256ee9c5d530ad3bd25f: Status 404 returned error can't find the container with id 969f650037db2a4ec2162d83b339cfdaae270274c907256ee9c5d530ad3bd25f Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.917519 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-4a15e640fc6ee8591ba51b26d10b574565ecf686a861a0c9c439f8b0508ad591 WatchSource:0}: Error finding container 4a15e640fc6ee8591ba51b26d10b574565ecf686a861a0c9c439f8b0508ad591: Status 404 returned error can't find the container with id 4a15e640fc6ee8591ba51b26d10b574565ecf686a861a0c9c439f8b0508ad591 Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.945861 5136 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="800ms" Mar 20 06:49:29 crc kubenswrapper[5136]: W0320 06:49:29.147315 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:49:29 crc kubenswrapper[5136]: E0320 06:49:29.147405 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.170621 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.171716 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.171760 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.171772 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.171799 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 06:49:29 crc kubenswrapper[5136]: E0320 06:49:29.172310 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: 
connection refused" node="crc" Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.342287 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.400262 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4a15e640fc6ee8591ba51b26d10b574565ecf686a861a0c9c439f8b0508ad591"} Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.401379 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"969f650037db2a4ec2162d83b339cfdaae270274c907256ee9c5d530ad3bd25f"} Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.402551 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"517f26af0658ddb977bdc7e5b325c63e73d20f866b2d48b8b544045f0bf011a8"} Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.403702 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"40a72a96dbb90b4ee768105313c2e8ab551127e383ab23c934ef1b2b3edcf3c6"} Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.404804 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0a193834d6eba9c791a237b787f15bba0b736e0a6ee749057b72eea9884ee605"} Mar 20 06:49:29 crc kubenswrapper[5136]: W0320 
06:49:29.427445 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:49:29 crc kubenswrapper[5136]: E0320 06:49:29.427516 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:49:29 crc kubenswrapper[5136]: E0320 06:49:29.746972 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="1.6s" Mar 20 06:49:29 crc kubenswrapper[5136]: W0320 06:49:29.819525 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:49:29 crc kubenswrapper[5136]: E0320 06:49:29.819609 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:49:29 crc kubenswrapper[5136]: E0320 06:49:29.876020 5136 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.163:6443: connect: connection refused" 
event="&Event{ObjectMeta:{crc.189e79ee7731f22a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,LastTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:49:29 crc kubenswrapper[5136]: W0320 06:49:29.890195 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:49:29 crc kubenswrapper[5136]: E0320 06:49:29.890295 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.972980 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.974594 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.974695 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.974706 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.974728 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 06:49:29 crc kubenswrapper[5136]: E0320 06:49:29.975193 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.281758 5136 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 20 06:49:30 crc kubenswrapper[5136]: E0320 06:49:30.283161 5136 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.342483 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.411996 5136 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab" exitCode=0 Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.412134 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab"} Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.412173 5136 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.413502 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.413599 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.413634 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.414534 5136 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0" exitCode=0 Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.414651 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0"} Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.414896 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.417416 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.417454 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.417466 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.417715 5136 generic.go:334] "Generic (PLEG): container finished" 
podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636" exitCode=0 Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.417845 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636"} Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.417971 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.423574 5136 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0" exitCode=0 Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.423624 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.423663 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.423687 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.423753 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0"} Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.423923 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.427575 5136 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.427602 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.427614 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.429168 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.430253 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.430309 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.430333 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.431296 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b"} Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.431330 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e3b46faa2d214ca93f46cc423d2a8cc40390e007424f3685c10e23d023d07835"} Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.431346 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b"} Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.342529 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:49:31 crc kubenswrapper[5136]: E0320 06:49:31.347846 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="3.2s" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.444095 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a"} Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.444208 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.448161 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.448197 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.448212 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.451066 5136 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72" exitCode=0
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.451259 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.451235 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72"}
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.453023 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.453068 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.453086 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.455336 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.455372 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc"}
Mar 20 06:49:31 crc kubenswrapper[5136]: W0320 06:49:31.461503 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Mar 20 06:49:31 crc kubenswrapper[5136]: E0320 06:49:31.461600 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError"
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.463116 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.463533 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.463566 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.472239 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.472322 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084"}
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.472977 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6"}
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.473068 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9"}
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.475209 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.475252 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.475264 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.478208 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa"}
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.478354 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f"}
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.478441 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc"}
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.478551 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080"}
Mar 20 06:49:31 crc kubenswrapper[5136]: W0320 06:49:31.503357 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Mar 20 06:49:31 crc kubenswrapper[5136]: E0320 06:49:31.503904 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError"
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.571338 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.575897 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.578100 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.578160 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.578181 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.578259 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 20 06:49:31 crc kubenswrapper[5136]: E0320 06:49:31.579018 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc"
Mar 20 06:49:31 crc kubenswrapper[5136]: W0320 06:49:31.960286 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Mar 20 06:49:31 crc kubenswrapper[5136]: E0320 06:49:31.960428 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError"
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.284034 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.291563 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.483132 5136 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19" exitCode=0
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.483244 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19"}
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.483377 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.484692 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.484728 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.484740 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.489271 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8d93960df6dce2dfdd966016501b971d075f10507bb052d8c270bbcd775f7b55"}
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.489377 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.489400 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.489426 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.489389 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.489431 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.490975 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.491006 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.491017 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.491131 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.491195 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.491232 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.491257 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.491255 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.491308 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.492002 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.492037 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.492048 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.698212 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.312833 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.497740 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4d76808c1f28f48134ff4f0c3e1a0708c6a2761101662ea1a7b339f52871cbce"}
Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.497849 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.497858 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"82f691f77718c46f2602104ecb33424a7c7443c5b34f9f2322cf5a1f608b1131"}
Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.497894 5136 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.497983 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.497866 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.497895 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2a1d8bf756502952a8d3a972ac37c6e4f5dbc1c3f9041776d82f44db634ad9a2"}
Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.498285 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"af19f920b4713d01034bfa44614deeacb01000a2d289fd3f9fb8ac327285f0dc"}
Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.499076 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.499105 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.499115 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.499631 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.499682 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.499695 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.499709 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.499745 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.499768 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.914295 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.503203 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.503337 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.503388 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.503183 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"485ffe98cdc3606f5ed78e234077affc1018885123a92cb6def4150118c62f15"}
Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.504611 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.504640 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.504651 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.504988 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.505042 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.505066 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.505251 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.505374 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.505402 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.568671 5136 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.780186 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.782039 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.782109 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.782131 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.782165 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.506422 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.506422 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.508109 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.508110 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.508171 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.508184 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.508151 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.508213 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.630673 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.698396 5136 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.698513 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.727641 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.727970 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.729454 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.729502 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.729520 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.902461 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Mar 20 06:49:36 crc kubenswrapper[5136]: I0320 06:49:36.486100 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Mar 20 06:49:36 crc kubenswrapper[5136]: I0320 06:49:36.510016 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:36 crc kubenswrapper[5136]: I0320 06:49:36.510078 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:36 crc kubenswrapper[5136]: I0320 06:49:36.511517 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:36 crc kubenswrapper[5136]: I0320 06:49:36.511575 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:36 crc kubenswrapper[5136]: I0320 06:49:36.511593 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:36 crc kubenswrapper[5136]: I0320 06:49:36.511845 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:36 crc kubenswrapper[5136]: I0320 06:49:36.511888 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:36 crc kubenswrapper[5136]: I0320 06:49:36.511908 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:37 crc kubenswrapper[5136]: I0320 06:49:37.512762 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:37 crc kubenswrapper[5136]: I0320 06:49:37.514415 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:37 crc kubenswrapper[5136]: I0320 06:49:37.514473 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:37 crc kubenswrapper[5136]: I0320 06:49:37.514491 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:38 crc kubenswrapper[5136]: E0320 06:49:38.477222 5136 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 20 06:49:42 crc kubenswrapper[5136]: I0320 06:49:42.343314 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Mar 20 06:49:42 crc kubenswrapper[5136]: E0320 06:49:42.401940 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:42Z is after 2026-02-23T05:33:13Z" node="crc"
Mar 20 06:49:42 crc kubenswrapper[5136]: W0320 06:49:42.403666 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:42Z is after 2026-02-23T05:33:13Z
Mar 20 06:49:42 crc kubenswrapper[5136]: E0320 06:49:42.403924 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:42Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 20 06:49:42 crc kubenswrapper[5136]: W0320 06:49:42.408276 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:42Z is after 2026-02-23T05:33:13Z
Mar 20 06:49:42 crc kubenswrapper[5136]: E0320 06:49:42.408335 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:42Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 20 06:49:42 crc kubenswrapper[5136]: W0320 06:49:42.409506 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:42Z is after 2026-02-23T05:33:13Z
Mar 20 06:49:42 crc kubenswrapper[5136]: E0320 06:49:42.409566 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:42Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 20 06:49:42 crc kubenswrapper[5136]: W0320 06:49:42.411218 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:42Z is after 2026-02-23T05:33:13Z
Mar 20 06:49:42 crc kubenswrapper[5136]: E0320 06:49:42.411276 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:42Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 20 06:49:42 crc kubenswrapper[5136]: E0320 06:49:42.412502 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:42Z is after 2026-02-23T05:33:13Z" interval="6.4s"
Mar 20 06:49:42 crc kubenswrapper[5136]: E0320 06:49:42.412761 5136 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:42Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189e79ee7731f22a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,LastTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 20 06:49:42 crc kubenswrapper[5136]: E0320 06:49:42.415350 5136 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:42Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 20 06:49:42 crc kubenswrapper[5136]: I0320 06:49:42.421294 5136 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Mar 20 06:49:42 crc kubenswrapper[5136]: I0320 06:49:42.421390 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Mar 20 06:49:42 crc kubenswrapper[5136]: I0320 06:49:42.429645 5136 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Mar 20 06:49:42 crc kubenswrapper[5136]: I0320 06:49:42.429703 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Mar 20 06:49:43 crc kubenswrapper[5136]: I0320 06:49:43.322495 5136 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]log ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]etcd ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/start-apiserver-admission-initializer ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/openshift.io-api-request-count-filter ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/openshift.io-startkubeinformers ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/generic-apiserver-start-informers ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/priority-and-fairness-config-consumer ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/priority-and-fairness-filter ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/storage-object-count-tracker-hook ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/start-apiextensions-informers ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/start-apiextensions-controllers ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/crd-informer-synced ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/start-system-namespaces-controller ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/start-cluster-authentication-info-controller ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/start-legacy-token-tracking-controller ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/start-service-ip-repair-controllers ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Mar 20 06:49:43 crc kubenswrapper[5136]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/priority-and-fairness-config-producer ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/bootstrap-controller ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/start-kube-aggregator-informers ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/apiservice-status-local-available-controller ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/apiservice-status-remote-available-controller ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/apiservice-registration-controller ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/apiservice-wait-for-first-sync ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/apiservice-discovery-controller ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/kube-apiserver-autoregistration ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]autoregister-completion ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/apiservice-openapi-controller ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/apiservice-openapiv3-controller ok
Mar 20 06:49:43 crc kubenswrapper[5136]: livez check failed
Mar 20 06:49:43 crc kubenswrapper[5136]: I0320 06:49:43.322569 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 06:49:43 crc kubenswrapper[5136]: I0320 06:49:43.344918 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:43Z is after 2026-02-23T05:33:13Z
Mar 20 06:49:43 crc kubenswrapper[5136]: I0320 06:49:43.530716 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Mar 20 06:49:43 crc kubenswrapper[5136]: I0320 06:49:43.532884 5136 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8d93960df6dce2dfdd966016501b971d075f10507bb052d8c270bbcd775f7b55" exitCode=255
Mar 20 06:49:43 crc kubenswrapper[5136]: I0320 06:49:43.532934 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8d93960df6dce2dfdd966016501b971d075f10507bb052d8c270bbcd775f7b55"}
Mar 20 06:49:43 crc kubenswrapper[5136]: I0320 06:49:43.533138 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:43 crc kubenswrapper[5136]: I0320 06:49:43.534316 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:43 crc kubenswrapper[5136]: I0320 06:49:43.534357 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:43 crc kubenswrapper[5136]: I0320 06:49:43.534395 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:43 crc kubenswrapper[5136]: I0320 06:49:43.535246 5136 scope.go:117] "RemoveContainer" containerID="8d93960df6dce2dfdd966016501b971d075f10507bb052d8c270bbcd775f7b55"
Mar 20 06:49:44 crc kubenswrapper[5136]: I0320 06:49:44.345954 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:44Z is after 2026-02-23T05:33:13Z
Mar 20 06:49:44 crc kubenswrapper[5136]: I0320 06:49:44.536871 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Mar 20 06:49:44 crc kubenswrapper[5136]: I0320 06:49:44.537269 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Mar 20 06:49:44 crc kubenswrapper[5136]: I0320 06:49:44.539031 5136 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="51c14348a58909a5e7e089be0f655bf8857191f27a386f41ff90e29c82fc585d" exitCode=255
Mar 20 06:49:44 crc kubenswrapper[5136]: I0320 06:49:44.539079 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"51c14348a58909a5e7e089be0f655bf8857191f27a386f41ff90e29c82fc585d"} Mar 20 06:49:44 crc kubenswrapper[5136]: I0320 06:49:44.539141 5136 scope.go:117] "RemoveContainer" containerID="8d93960df6dce2dfdd966016501b971d075f10507bb052d8c270bbcd775f7b55" Mar 20 06:49:44 crc kubenswrapper[5136]: I0320 06:49:44.539271 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:44 crc kubenswrapper[5136]: I0320 06:49:44.540072 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:44 crc kubenswrapper[5136]: I0320 06:49:44.540113 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:44 crc kubenswrapper[5136]: I0320 06:49:44.540129 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:44 crc kubenswrapper[5136]: I0320 06:49:44.540780 5136 scope.go:117] "RemoveContainer" containerID="51c14348a58909a5e7e089be0f655bf8857191f27a386f41ff90e29c82fc585d" Mar 20 06:49:44 crc kubenswrapper[5136]: E0320 06:49:44.541007 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 06:49:45 crc kubenswrapper[5136]: I0320 06:49:45.346175 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-20T06:49:45Z is after 2026-02-23T05:33:13Z Mar 20 06:49:45 crc kubenswrapper[5136]: I0320 06:49:45.545803 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 20 06:49:45 crc kubenswrapper[5136]: I0320 06:49:45.699093 5136 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 20 06:49:45 crc kubenswrapper[5136]: I0320 06:49:45.699156 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 06:49:46 crc kubenswrapper[5136]: I0320 06:49:46.350624 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:46Z is after 2026-02-23T05:33:13Z Mar 20 06:49:46 crc kubenswrapper[5136]: I0320 06:49:46.525485 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Mar 20 06:49:46 crc kubenswrapper[5136]: I0320 06:49:46.525638 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:46 crc kubenswrapper[5136]: I0320 06:49:46.526860 5136 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:46 crc kubenswrapper[5136]: I0320 06:49:46.526895 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:46 crc kubenswrapper[5136]: I0320 06:49:46.526905 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:46 crc kubenswrapper[5136]: I0320 06:49:46.539778 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Mar 20 06:49:46 crc kubenswrapper[5136]: I0320 06:49:46.551140 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:46 crc kubenswrapper[5136]: I0320 06:49:46.552253 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:46 crc kubenswrapper[5136]: I0320 06:49:46.552300 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:46 crc kubenswrapper[5136]: I0320 06:49:46.552312 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:47 crc kubenswrapper[5136]: I0320 06:49:47.344369 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:47Z is after 2026-02-23T05:33:13Z Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.320797 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.321079 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.322907 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.322952 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.322965 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.323566 5136 scope.go:117] "RemoveContainer" containerID="51c14348a58909a5e7e089be0f655bf8857191f27a386f41ff90e29c82fc585d" Mar 20 06:49:48 crc kubenswrapper[5136]: E0320 06:49:48.323727 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.324902 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.346394 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:48Z is after 2026-02-23T05:33:13Z Mar 20 06:49:48 crc kubenswrapper[5136]: E0320 06:49:48.477367 5136 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 20 06:49:48 crc kubenswrapper[5136]: 
W0320 06:49:48.517329 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:48Z is after 2026-02-23T05:33:13Z Mar 20 06:49:48 crc kubenswrapper[5136]: E0320 06:49:48.517437 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:48Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.556503 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.557851 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.557890 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.557900 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.558457 5136 scope.go:117] "RemoveContainer" containerID="51c14348a58909a5e7e089be0f655bf8857191f27a386f41ff90e29c82fc585d" Mar 20 06:49:48 crc kubenswrapper[5136]: E0320 06:49:48.558607 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.802398 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.804016 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.804096 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.804107 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.804131 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 06:49:48 crc kubenswrapper[5136]: E0320 06:49:48.808639 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:48Z is after 2026-02-23T05:33:13Z" node="crc" Mar 20 06:49:48 crc kubenswrapper[5136]: E0320 06:49:48.816106 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:48Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.939996 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.940177 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.941514 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.941558 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.941567 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:49 crc kubenswrapper[5136]: W0320 06:49:49.065585 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:49Z is after 2026-02-23T05:33:13Z Mar 20 06:49:49 crc kubenswrapper[5136]: E0320 06:49:49.065691 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:49Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 20 06:49:49 crc kubenswrapper[5136]: I0320 06:49:49.343989 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-03-20T06:49:49Z is after 2026-02-23T05:33:13Z Mar 20 06:49:49 crc kubenswrapper[5136]: W0320 06:49:49.661980 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:49Z is after 2026-02-23T05:33:13Z Mar 20 06:49:49 crc kubenswrapper[5136]: E0320 06:49:49.662073 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:49Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 20 06:49:50 crc kubenswrapper[5136]: I0320 06:49:50.092235 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:49:50 crc kubenswrapper[5136]: I0320 06:49:50.092400 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:50 crc kubenswrapper[5136]: I0320 06:49:50.093893 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:50 crc kubenswrapper[5136]: I0320 06:49:50.093963 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:50 crc kubenswrapper[5136]: I0320 06:49:50.093982 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:50 crc kubenswrapper[5136]: I0320 06:49:50.094756 5136 scope.go:117] "RemoveContainer" 
containerID="51c14348a58909a5e7e089be0f655bf8857191f27a386f41ff90e29c82fc585d" Mar 20 06:49:50 crc kubenswrapper[5136]: E0320 06:49:50.095160 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 06:49:50 crc kubenswrapper[5136]: I0320 06:49:50.346612 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:50Z is after 2026-02-23T05:33:13Z Mar 20 06:49:50 crc kubenswrapper[5136]: I0320 06:49:50.847889 5136 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 20 06:49:50 crc kubenswrapper[5136]: E0320 06:49:50.853467 5136 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:50Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 20 06:49:51 crc kubenswrapper[5136]: I0320 06:49:51.349190 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-20T06:49:51Z is after 2026-02-23T05:33:13Z Mar 20 06:49:52 crc kubenswrapper[5136]: I0320 06:49:52.345283 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:52Z is after 2026-02-23T05:33:13Z Mar 20 06:49:52 crc kubenswrapper[5136]: E0320 06:49:52.417844 5136 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:52Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189e79ee7731f22a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,LastTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:49:53 crc kubenswrapper[5136]: I0320 06:49:53.346687 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:53Z is after 2026-02-23T05:33:13Z Mar 20 06:49:53 crc kubenswrapper[5136]: W0320 06:49:53.585870 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:53Z is after 2026-02-23T05:33:13Z Mar 20 06:49:53 crc kubenswrapper[5136]: E0320 06:49:53.586001 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:53Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 20 06:49:53 crc kubenswrapper[5136]: I0320 06:49:53.914792 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:49:53 crc kubenswrapper[5136]: I0320 06:49:53.915082 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:53 crc kubenswrapper[5136]: I0320 06:49:53.916866 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:53 crc kubenswrapper[5136]: I0320 06:49:53.916915 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:53 crc kubenswrapper[5136]: I0320 06:49:53.916933 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:53 crc kubenswrapper[5136]: I0320 06:49:53.917777 5136 scope.go:117] "RemoveContainer" containerID="51c14348a58909a5e7e089be0f655bf8857191f27a386f41ff90e29c82fc585d" Mar 20 06:49:53 crc kubenswrapper[5136]: E0320 06:49:53.918186 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 06:49:54 crc kubenswrapper[5136]: I0320 06:49:54.346958 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:54Z is after 2026-02-23T05:33:13Z Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.344543 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:55Z is after 2026-02-23T05:33:13Z Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.699263 5136 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.699453 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.699557 5136 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.699767 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.701794 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.701871 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.701889 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.702659 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"e3b46faa2d214ca93f46cc423d2a8cc40390e007424f3685c10e23d023d07835"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.702952 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://e3b46faa2d214ca93f46cc423d2a8cc40390e007424f3685c10e23d023d07835" gracePeriod=30 Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.809301 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.811757 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 
06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.812007 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.812170 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.812378 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 06:49:55 crc kubenswrapper[5136]: E0320 06:49:55.816072 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:55Z is after 2026-02-23T05:33:13Z" node="crc" Mar 20 06:49:55 crc kubenswrapper[5136]: E0320 06:49:55.822868 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:55Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 20 06:49:56 crc kubenswrapper[5136]: I0320 06:49:56.345716 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:56Z is after 2026-02-23T05:33:13Z Mar 20 06:49:56 crc kubenswrapper[5136]: W0320 06:49:56.498600 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-03-20T06:49:56Z is after 2026-02-23T05:33:13Z Mar 20 06:49:56 crc kubenswrapper[5136]: E0320 06:49:56.498726 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:56Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 20 06:49:56 crc kubenswrapper[5136]: I0320 06:49:56.580439 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 20 06:49:56 crc kubenswrapper[5136]: I0320 06:49:56.581151 5136 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e3b46faa2d214ca93f46cc423d2a8cc40390e007424f3685c10e23d023d07835" exitCode=255 Mar 20 06:49:56 crc kubenswrapper[5136]: I0320 06:49:56.581203 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e3b46faa2d214ca93f46cc423d2a8cc40390e007424f3685c10e23d023d07835"} Mar 20 06:49:56 crc kubenswrapper[5136]: I0320 06:49:56.581269 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde"} Mar 20 06:49:56 crc kubenswrapper[5136]: I0320 06:49:56.581397 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:56 crc kubenswrapper[5136]: I0320 06:49:56.582422 5136 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:56 crc kubenswrapper[5136]: I0320 06:49:56.582478 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:56 crc kubenswrapper[5136]: I0320 06:49:56.582497 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:57 crc kubenswrapper[5136]: I0320 06:49:57.345163 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:57Z is after 2026-02-23T05:33:13Z Mar 20 06:49:57 crc kubenswrapper[5136]: I0320 06:49:57.583502 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:57 crc kubenswrapper[5136]: I0320 06:49:57.584514 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:57 crc kubenswrapper[5136]: I0320 06:49:57.584545 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:57 crc kubenswrapper[5136]: I0320 06:49:57.584554 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:58 crc kubenswrapper[5136]: I0320 06:49:58.344528 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:58Z is after 2026-02-23T05:33:13Z Mar 20 06:49:58 crc kubenswrapper[5136]: E0320 06:49:58.477461 5136 eviction_manager.go:285] "Eviction 
manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 20 06:49:59 crc kubenswrapper[5136]: I0320 06:49:59.348073 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:59Z is after 2026-02-23T05:33:13Z Mar 20 06:50:00 crc kubenswrapper[5136]: I0320 06:50:00.346398 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:00Z is after 2026-02-23T05:33:13Z Mar 20 06:50:01 crc kubenswrapper[5136]: I0320 06:50:01.346645 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:01Z is after 2026-02-23T05:33:13Z Mar 20 06:50:02 crc kubenswrapper[5136]: I0320 06:50:02.346992 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:02Z is after 2026-02-23T05:33:13Z Mar 20 06:50:02 crc kubenswrapper[5136]: E0320 06:50:02.425560 5136 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-20T06:50:02Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189e79ee7731f22a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,LastTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:02 crc kubenswrapper[5136]: I0320 06:50:02.699104 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:50:02 crc kubenswrapper[5136]: I0320 06:50:02.699296 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:02 crc kubenswrapper[5136]: I0320 06:50:02.700884 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:02 crc kubenswrapper[5136]: I0320 06:50:02.700936 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:02 crc kubenswrapper[5136]: I0320 06:50:02.700958 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:02 crc kubenswrapper[5136]: I0320 06:50:02.816197 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:02 crc kubenswrapper[5136]: I0320 06:50:02.817955 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:02 crc kubenswrapper[5136]: I0320 06:50:02.818062 5136 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:02 crc kubenswrapper[5136]: I0320 06:50:02.818088 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:02 crc kubenswrapper[5136]: I0320 06:50:02.818156 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 06:50:02 crc kubenswrapper[5136]: E0320 06:50:02.823407 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:02Z is after 2026-02-23T05:33:13Z" node="crc" Mar 20 06:50:02 crc kubenswrapper[5136]: E0320 06:50:02.828141 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:02Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 20 06:50:03 crc kubenswrapper[5136]: I0320 06:50:03.346592 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:03Z is after 2026-02-23T05:33:13Z Mar 20 06:50:04 crc kubenswrapper[5136]: I0320 06:50:04.346133 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:04Z is after 2026-02-23T05:33:13Z Mar 20 06:50:04 crc kubenswrapper[5136]: W0320 06:50:04.951777 5136 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:04Z is after 2026-02-23T05:33:13Z Mar 20 06:50:04 crc kubenswrapper[5136]: E0320 06:50:04.951937 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:04Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 20 06:50:05 crc kubenswrapper[5136]: I0320 06:50:05.347072 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:05Z is after 2026-02-23T05:33:13Z Mar 20 06:50:05 crc kubenswrapper[5136]: I0320 06:50:05.699204 5136 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 20 06:50:05 crc kubenswrapper[5136]: I0320 06:50:05.699277 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 06:50:05 crc kubenswrapper[5136]: I0320 06:50:05.728531 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:50:05 crc kubenswrapper[5136]: I0320 06:50:05.728734 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:05 crc kubenswrapper[5136]: I0320 06:50:05.730231 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:05 crc kubenswrapper[5136]: I0320 06:50:05.730298 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:05 crc kubenswrapper[5136]: I0320 06:50:05.730321 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:06 crc kubenswrapper[5136]: I0320 06:50:06.346292 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:06Z is after 2026-02-23T05:33:13Z Mar 20 06:50:06 crc kubenswrapper[5136]: I0320 06:50:06.396401 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:06 crc kubenswrapper[5136]: I0320 06:50:06.397846 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:06 crc kubenswrapper[5136]: I0320 06:50:06.397887 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:06 crc kubenswrapper[5136]: I0320 06:50:06.397905 
5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:06 crc kubenswrapper[5136]: I0320 06:50:06.398494 5136 scope.go:117] "RemoveContainer" containerID="51c14348a58909a5e7e089be0f655bf8857191f27a386f41ff90e29c82fc585d" Mar 20 06:50:07 crc kubenswrapper[5136]: I0320 06:50:07.346147 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:07Z is after 2026-02-23T05:33:13Z Mar 20 06:50:07 crc kubenswrapper[5136]: I0320 06:50:07.616051 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 20 06:50:07 crc kubenswrapper[5136]: I0320 06:50:07.616567 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 20 06:50:07 crc kubenswrapper[5136]: I0320 06:50:07.618522 5136 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f25fb30a68fee9c11b53a300d7d79a5fc097ea8dfca4b8f12cad9a41189e516c" exitCode=255 Mar 20 06:50:07 crc kubenswrapper[5136]: I0320 06:50:07.618586 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f25fb30a68fee9c11b53a300d7d79a5fc097ea8dfca4b8f12cad9a41189e516c"} Mar 20 06:50:07 crc kubenswrapper[5136]: I0320 06:50:07.618655 5136 scope.go:117] "RemoveContainer" containerID="51c14348a58909a5e7e089be0f655bf8857191f27a386f41ff90e29c82fc585d" Mar 20 06:50:07 crc kubenswrapper[5136]: I0320 06:50:07.618981 5136 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:07 crc kubenswrapper[5136]: I0320 06:50:07.620508 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:07 crc kubenswrapper[5136]: I0320 06:50:07.620531 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:07 crc kubenswrapper[5136]: I0320 06:50:07.620543 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:07 crc kubenswrapper[5136]: I0320 06:50:07.621055 5136 scope.go:117] "RemoveContainer" containerID="f25fb30a68fee9c11b53a300d7d79a5fc097ea8dfca4b8f12cad9a41189e516c" Mar 20 06:50:07 crc kubenswrapper[5136]: E0320 06:50:07.621223 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 06:50:07 crc kubenswrapper[5136]: I0320 06:50:07.757070 5136 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 20 06:50:07 crc kubenswrapper[5136]: E0320 06:50:07.761568 5136 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:07Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 20 06:50:07 crc 
kubenswrapper[5136]: E0320 06:50:07.762779 5136 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Mar 20 06:50:08 crc kubenswrapper[5136]: I0320 06:50:08.345898 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:08Z is after 2026-02-23T05:33:13Z Mar 20 06:50:08 crc kubenswrapper[5136]: E0320 06:50:08.477976 5136 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 20 06:50:08 crc kubenswrapper[5136]: I0320 06:50:08.625121 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 20 06:50:09 crc kubenswrapper[5136]: I0320 06:50:09.344378 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:09Z is after 2026-02-23T05:33:13Z Mar 20 06:50:09 crc kubenswrapper[5136]: I0320 06:50:09.823786 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:09 crc kubenswrapper[5136]: I0320 06:50:09.825331 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:09 crc kubenswrapper[5136]: I0320 06:50:09.825395 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Mar 20 06:50:09 crc kubenswrapper[5136]: I0320 06:50:09.825415 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:09 crc kubenswrapper[5136]: I0320 06:50:09.825449 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 06:50:09 crc kubenswrapper[5136]: E0320 06:50:09.830653 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:09Z is after 2026-02-23T05:33:13Z" node="crc" Mar 20 06:50:09 crc kubenswrapper[5136]: E0320 06:50:09.834386 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:09Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 20 06:50:10 crc kubenswrapper[5136]: I0320 06:50:10.092927 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:50:10 crc kubenswrapper[5136]: I0320 06:50:10.093079 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:10 crc kubenswrapper[5136]: I0320 06:50:10.094196 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:10 crc kubenswrapper[5136]: I0320 06:50:10.094224 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:10 crc kubenswrapper[5136]: I0320 06:50:10.094233 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:10 crc kubenswrapper[5136]: 
I0320 06:50:10.094746 5136 scope.go:117] "RemoveContainer" containerID="f25fb30a68fee9c11b53a300d7d79a5fc097ea8dfca4b8f12cad9a41189e516c" Mar 20 06:50:10 crc kubenswrapper[5136]: E0320 06:50:10.094942 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 06:50:10 crc kubenswrapper[5136]: I0320 06:50:10.344519 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:10Z is after 2026-02-23T05:33:13Z Mar 20 06:50:10 crc kubenswrapper[5136]: W0320 06:50:10.596165 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:10Z is after 2026-02-23T05:33:13Z Mar 20 06:50:10 crc kubenswrapper[5136]: E0320 06:50:10.596273 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:10Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 20 06:50:11 crc kubenswrapper[5136]: I0320 
06:50:11.347061 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:11Z is after 2026-02-23T05:33:13Z Mar 20 06:50:12 crc kubenswrapper[5136]: I0320 06:50:12.345779 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:12Z is after 2026-02-23T05:33:13Z Mar 20 06:50:12 crc kubenswrapper[5136]: E0320 06:50:12.431503 5136 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:12Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189e79ee7731f22a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,LastTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:13 crc kubenswrapper[5136]: I0320 06:50:13.348313 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-20T06:50:13Z is after 2026-02-23T05:33:13Z Mar 20 06:50:13 crc kubenswrapper[5136]: W0320 06:50:13.810226 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:13Z is after 2026-02-23T05:33:13Z Mar 20 06:50:13 crc kubenswrapper[5136]: E0320 06:50:13.810304 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:13Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 20 06:50:13 crc kubenswrapper[5136]: I0320 06:50:13.915162 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:50:13 crc kubenswrapper[5136]: I0320 06:50:13.915646 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:13 crc kubenswrapper[5136]: I0320 06:50:13.917431 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:13 crc kubenswrapper[5136]: I0320 06:50:13.917475 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:13 crc kubenswrapper[5136]: I0320 06:50:13.917487 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:13 crc kubenswrapper[5136]: I0320 06:50:13.919292 5136 scope.go:117] "RemoveContainer" 
containerID="f25fb30a68fee9c11b53a300d7d79a5fc097ea8dfca4b8f12cad9a41189e516c" Mar 20 06:50:13 crc kubenswrapper[5136]: E0320 06:50:13.919919 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 06:50:14 crc kubenswrapper[5136]: I0320 06:50:14.345787 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:14Z is after 2026-02-23T05:33:13Z Mar 20 06:50:15 crc kubenswrapper[5136]: I0320 06:50:15.348257 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:15Z is after 2026-02-23T05:33:13Z Mar 20 06:50:15 crc kubenswrapper[5136]: I0320 06:50:15.699530 5136 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 20 06:50:15 crc kubenswrapper[5136]: I0320 06:50:15.699666 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 06:50:16 crc kubenswrapper[5136]: I0320 06:50:16.349148 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:16Z is after 2026-02-23T05:33:13Z Mar 20 06:50:16 crc kubenswrapper[5136]: I0320 06:50:16.831733 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:16 crc kubenswrapper[5136]: I0320 06:50:16.833091 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:16 crc kubenswrapper[5136]: I0320 06:50:16.833134 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:16 crc kubenswrapper[5136]: I0320 06:50:16.833145 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:16 crc kubenswrapper[5136]: I0320 06:50:16.833170 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 06:50:16 crc kubenswrapper[5136]: E0320 06:50:16.836166 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:16Z is after 2026-02-23T05:33:13Z" node="crc" Mar 20 06:50:16 crc kubenswrapper[5136]: E0320 06:50:16.838307 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:16Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 20 06:50:17 crc kubenswrapper[5136]: W0320 06:50:17.159957 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:17Z is after 2026-02-23T05:33:13Z Mar 20 06:50:17 crc kubenswrapper[5136]: E0320 06:50:17.160356 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:17Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 20 06:50:17 crc kubenswrapper[5136]: I0320 06:50:17.347380 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:17Z is after 2026-02-23T05:33:13Z Mar 20 06:50:18 crc kubenswrapper[5136]: I0320 06:50:18.346590 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:18Z is after 2026-02-23T05:33:13Z Mar 20 
06:50:18 crc kubenswrapper[5136]: E0320 06:50:18.478081 5136 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 20 06:50:19 crc kubenswrapper[5136]: I0320 06:50:19.344604 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:19Z is after 2026-02-23T05:33:13Z Mar 20 06:50:20 crc kubenswrapper[5136]: I0320 06:50:20.344124 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:20Z is after 2026-02-23T05:33:13Z Mar 20 06:50:21 crc kubenswrapper[5136]: I0320 06:50:21.343964 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:21Z is after 2026-02-23T05:33:13Z Mar 20 06:50:21 crc kubenswrapper[5136]: I0320 06:50:21.574766 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 06:50:21 crc kubenswrapper[5136]: I0320 06:50:21.574946 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:21 crc kubenswrapper[5136]: I0320 06:50:21.575913 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:21 crc kubenswrapper[5136]: I0320 06:50:21.575951 5136 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:21 crc kubenswrapper[5136]: I0320 06:50:21.575964 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:22 crc kubenswrapper[5136]: I0320 06:50:22.345001 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:22Z is after 2026-02-23T05:33:13Z Mar 20 06:50:22 crc kubenswrapper[5136]: E0320 06:50:22.435489 5136 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:22Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189e79ee7731f22a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,LastTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:23 crc kubenswrapper[5136]: I0320 06:50:23.346410 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:23Z is 
after 2026-02-23T05:33:13Z Mar 20 06:50:23 crc kubenswrapper[5136]: I0320 06:50:23.836752 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:23 crc kubenswrapper[5136]: I0320 06:50:23.838324 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:23 crc kubenswrapper[5136]: I0320 06:50:23.838406 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:23 crc kubenswrapper[5136]: I0320 06:50:23.838423 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:23 crc kubenswrapper[5136]: I0320 06:50:23.838461 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 06:50:23 crc kubenswrapper[5136]: E0320 06:50:23.841941 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:23Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 20 06:50:23 crc kubenswrapper[5136]: E0320 06:50:23.843848 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:23Z is after 2026-02-23T05:33:13Z" node="crc" Mar 20 06:50:24 crc kubenswrapper[5136]: I0320 06:50:24.349915 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:24Z is 
after 2026-02-23T05:33:13Z Mar 20 06:50:25 crc kubenswrapper[5136]: I0320 06:50:25.347657 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 2026-02-23T05:33:13Z Mar 20 06:50:25 crc kubenswrapper[5136]: I0320 06:50:25.698727 5136 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 20 06:50:25 crc kubenswrapper[5136]: I0320 06:50:25.698829 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 06:50:25 crc kubenswrapper[5136]: I0320 06:50:25.698897 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:50:25 crc kubenswrapper[5136]: I0320 06:50:25.699074 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:25 crc kubenswrapper[5136]: I0320 06:50:25.700360 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:25 crc kubenswrapper[5136]: I0320 06:50:25.700430 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 20 06:50:25 crc kubenswrapper[5136]: I0320 06:50:25.700449 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:25 crc kubenswrapper[5136]: I0320 06:50:25.701208 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 20 06:50:25 crc kubenswrapper[5136]: I0320 06:50:25.701373 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde" gracePeriod=30 Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.349445 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.396494 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.397952 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.398022 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.398048 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.398904 5136 scope.go:117] "RemoveContainer" containerID="f25fb30a68fee9c11b53a300d7d79a5fc097ea8dfca4b8f12cad9a41189e516c" Mar 20 06:50:26 crc kubenswrapper[5136]: E0320 06:50:26.399251 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.677176 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.678397 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.678856 5136 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde" exitCode=255 Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.678908 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde"} Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.678942 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9"} Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.678964 5136 scope.go:117] "RemoveContainer" containerID="e3b46faa2d214ca93f46cc423d2a8cc40390e007424f3685c10e23d023d07835" Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.679140 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.680563 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.680599 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.680609 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:27 crc kubenswrapper[5136]: I0320 06:50:27.345742 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:27 crc kubenswrapper[5136]: I0320 06:50:27.685354 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 20 06:50:28 crc kubenswrapper[5136]: I0320 06:50:28.347877 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:28 crc kubenswrapper[5136]: E0320 06:50:28.478215 5136 eviction_manager.go:285] "Eviction manager: failed to get 
summary stats" err="failed to get node info: node \"crc\" not found" Mar 20 06:50:29 crc kubenswrapper[5136]: I0320 06:50:29.346244 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:30 crc kubenswrapper[5136]: I0320 06:50:30.349640 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:30 crc kubenswrapper[5136]: I0320 06:50:30.844604 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:30 crc kubenswrapper[5136]: I0320 06:50:30.846029 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:30 crc kubenswrapper[5136]: I0320 06:50:30.846072 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:30 crc kubenswrapper[5136]: I0320 06:50:30.846087 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:30 crc kubenswrapper[5136]: I0320 06:50:30.846118 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 06:50:30 crc kubenswrapper[5136]: E0320 06:50:30.850036 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 20 06:50:30 crc kubenswrapper[5136]: E0320 06:50:30.850088 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User 
\"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 20 06:50:31 crc kubenswrapper[5136]: I0320 06:50:31.346149 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:32 crc kubenswrapper[5136]: I0320 06:50:32.346619 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.441176 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7731f22a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,LastTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.448034 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7978cb3e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377690942 +0000 UTC m=+0.637002103,LastTimestamp:2026-03-20 06:49:28.377690942 +0000 UTC m=+0.637002103,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.452244 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee79790663 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377706083 +0000 UTC m=+0.637017244,LastTimestamp:2026-03-20 06:49:28.377706083 +0000 UTC m=+0.637017244,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.458302 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7979335a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: 
NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377717594 +0000 UTC m=+0.637028755,LastTimestamp:2026-03-20 06:49:28.377717594 +0000 UTC m=+0.637028755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.463081 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7eea37d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.469010384 +0000 UTC m=+0.728321535,LastTimestamp:2026-03-20 06:49:28.469010384 +0000 UTC m=+0.728321535,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.469073 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee7978cb3e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7978cb3e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377690942 +0000 UTC m=+0.637002103,LastTimestamp:2026-03-20 06:49:28.499615694 +0000 
UTC m=+0.758926845,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.472329 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee79790663\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee79790663 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377706083 +0000 UTC m=+0.637017244,LastTimestamp:2026-03-20 06:49:28.499634555 +0000 UTC m=+0.758945706,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.478297 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee7979335a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7979335a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377717594 +0000 UTC m=+0.637028755,LastTimestamp:2026-03-20 06:49:28.499643976 +0000 UTC m=+0.758955137,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 
06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.483926 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee7978cb3e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7978cb3e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377690942 +0000 UTC m=+0.637002103,LastTimestamp:2026-03-20 06:49:28.502105585 +0000 UTC m=+0.761416736,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.487271 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee79790663\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee79790663 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377706083 +0000 UTC m=+0.637017244,LastTimestamp:2026-03-20 06:49:28.502124576 +0000 UTC m=+0.761435727,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.492746 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee7979335a\" is 
forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7979335a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377717594 +0000 UTC m=+0.637028755,LastTimestamp:2026-03-20 06:49:28.502135806 +0000 UTC m=+0.761446957,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.496667 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee7978cb3e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7978cb3e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377690942 +0000 UTC m=+0.637002103,LastTimestamp:2026-03-20 06:49:28.502867196 +0000 UTC m=+0.762178347,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.499910 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee79790663\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee79790663 
default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377706083 +0000 UTC m=+0.637017244,LastTimestamp:2026-03-20 06:49:28.502882847 +0000 UTC m=+0.762193988,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.504343 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee7979335a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7979335a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377717594 +0000 UTC m=+0.637028755,LastTimestamp:2026-03-20 06:49:28.502921508 +0000 UTC m=+0.762232659,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.505608 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee7978cb3e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7978cb3e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377690942 +0000 UTC m=+0.637002103,LastTimestamp:2026-03-20 06:49:28.503343476 +0000 UTC m=+0.762654617,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.511726 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee79790663\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee79790663 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377706083 +0000 UTC m=+0.637017244,LastTimestamp:2026-03-20 06:49:28.503355356 +0000 UTC m=+0.762666507,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.515199 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee7979335a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7979335a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc 
status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377717594 +0000 UTC m=+0.637028755,LastTimestamp:2026-03-20 06:49:28.503363326 +0000 UTC m=+0.762674477,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.521023 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee7978cb3e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7978cb3e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377690942 +0000 UTC m=+0.637002103,LastTimestamp:2026-03-20 06:49:28.506365488 +0000 UTC m=+0.765676639,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.531500 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee79790663\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee79790663 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377706083 +0000 UTC 
m=+0.637017244,LastTimestamp:2026-03-20 06:49:28.506378389 +0000 UTC m=+0.765689540,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.536013 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee7978cb3e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7978cb3e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377690942 +0000 UTC m=+0.637002103,LastTimestamp:2026-03-20 06:49:28.506388039 +0000 UTC m=+0.765699190,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.540517 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee79790663\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee79790663 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377706083 +0000 UTC m=+0.637017244,LastTimestamp:2026-03-20 06:49:28.506401229 +0000 UTC m=+0.765712380,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.546144 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee7979335a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7979335a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377717594 +0000 UTC m=+0.637028755,LastTimestamp:2026-03-20 06:49:28.50640972 +0000 UTC m=+0.765720871,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.551551 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee7979335a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7979335a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377717594 +0000 UTC m=+0.637028755,LastTimestamp:2026-03-20 06:49:28.506453132 +0000 UTC m=+0.765764303,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.556006 5136 event.go:359] 
"Server rejected event (will not retry!)" err="events \"crc.189e79ee7978cb3e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7978cb3e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377690942 +0000 UTC m=+0.637002103,LastTimestamp:2026-03-20 06:49:28.506547245 +0000 UTC m=+0.765858396,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.560569 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee79790663\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee79790663 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377706083 +0000 UTC m=+0.637017244,LastTimestamp:2026-03-20 06:49:28.506558656 +0000 UTC m=+0.765869807,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.568358 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ee976c7828 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.88019972 +0000 UTC m=+1.139510871,LastTimestamp:2026-03-20 06:49:28.88019972 +0000 UTC m=+1.139510871,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.575326 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79ee9799ea1a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.88317801 +0000 UTC m=+1.142489151,LastTimestamp:2026-03-20 06:49:28.88317801 +0000 UTC m=+1.142489151,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.579927 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ee9855b16c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.895484268 +0000 UTC m=+1.154795419,LastTimestamp:2026-03-20 06:49:28.895484268 +0000 UTC m=+1.154795419,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.584259 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e79ee993d1bef openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.910650351 +0000 UTC m=+1.169961502,LastTimestamp:2026-03-20 06:49:28.910650351 +0000 UTC m=+1.169961502,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.588575 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e79ee99ebbe14 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.922095124 +0000 UTC m=+1.181406275,LastTimestamp:2026-03-20 06:49:28.922095124 +0000 UTC m=+1.181406275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.592860 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79eebf61d999 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.550592409 +0000 UTC m=+1.809903560,LastTimestamp:2026-03-20 06:49:29.550592409 +0000 UTC m=+1.809903560,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.597123 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e79eebf61d9ad openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.550592429 +0000 UTC m=+1.809903580,LastTimestamp:2026-03-20 06:49:29.550592429 +0000 UTC m=+1.809903580,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.601037 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.189e79eebf62740e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.55063195 +0000 UTC m=+1.809943101,LastTimestamp:2026-03-20 06:49:29.55063195 +0000 UTC m=+1.809943101,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.606036 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eebf6299ad openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.550641581 +0000 UTC m=+1.809952732,LastTimestamp:2026-03-20 06:49:29.550641581 +0000 UTC m=+1.809952732,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.610657 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e79eebf62d201 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.550656001 +0000 UTC m=+1.809967152,LastTimestamp:2026-03-20 06:49:29.550656001 +0000 UTC m=+1.809967152,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.618008 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e79eec0627c4f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.567411279 +0000 UTC m=+1.826722430,LastTimestamp:2026-03-20 06:49:29.567411279 +0000 UTC m=+1.826722430,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.623607 5136 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79eec07df78d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.569212301 +0000 UTC m=+1.828523452,LastTimestamp:2026-03-20 06:49:29.569212301 +0000 UTC m=+1.828523452,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.628500 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79eec07e8c59 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.569250393 +0000 UTC m=+1.828561544,LastTimestamp:2026-03-20 06:49:29.569250393 +0000 UTC m=+1.828561544,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.633046 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e79eec080d3cf openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.569399759 +0000 UTC m=+1.828710910,LastTimestamp:2026-03-20 06:49:29.569399759 +0000 UTC m=+1.828710910,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.636247 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eec0843532 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.569621298 +0000 UTC m=+1.828932449,LastTimestamp:2026-03-20 06:49:29.569621298 +0000 UTC m=+1.828932449,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.640055 
5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eec091c540 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.570510144 +0000 UTC m=+1.829821295,LastTimestamp:2026-03-20 06:49:29.570510144 +0000 UTC m=+1.829821295,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.643392 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eed175c3f5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.853887477 +0000 UTC 
m=+2.113198638,LastTimestamp:2026-03-20 06:49:29.853887477 +0000 UTC m=+2.113198638,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.649380 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eed247922d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.867637293 +0000 UTC m=+2.126948454,LastTimestamp:2026-03-20 06:49:29.867637293 +0000 UTC m=+2.126948454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.654246 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eed258c823 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.868765219 +0000 UTC m=+2.128076370,LastTimestamp:2026-03-20 06:49:29.868765219 +0000 UTC m=+2.128076370,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.657757 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eee2523616 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.13677007 +0000 UTC m=+2.396081231,LastTimestamp:2026-03-20 06:49:30.13677007 +0000 UTC m=+2.396081231,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.662042 5136 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eee4675f8e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.171711374 +0000 UTC m=+2.431022525,LastTimestamp:2026-03-20 06:49:30.171711374 +0000 UTC m=+2.431022525,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.667745 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eee47d39f5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 
06:49:30.173143541 +0000 UTC m=+2.432454692,LastTimestamp:2026-03-20 06:49:30.173143541 +0000 UTC m=+2.432454692,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.672759 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79eef2ff37ef openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.416543727 +0000 UTC m=+2.675854908,LastTimestamp:2026-03-20 06:49:30.416543727 +0000 UTC m=+2.675854908,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.676389 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e79eef34d423f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.421658175 +0000 UTC m=+2.680969336,LastTimestamp:2026-03-20 06:49:30.421658175 +0000 UTC m=+2.680969336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.680779 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e79eef3add8ac openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.42798814 +0000 UTC m=+2.687299331,LastTimestamp:2026-03-20 06:49:30.42798814 +0000 UTC m=+2.687299331,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.684110 5136 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79eef3bd95b4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.429019572 +0000 UTC m=+2.688330733,LastTimestamp:2026-03-20 06:49:30.429019572 +0000 UTC m=+2.688330733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.687603 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eef7f860d3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.499981523 +0000 UTC 
m=+2.759292684,LastTimestamp:2026-03-20 06:49:30.499981523 +0000 UTC m=+2.759292684,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.691853 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eefb3336a2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.554168994 +0000 UTC m=+2.813480185,LastTimestamp:2026-03-20 06:49:30.554168994 +0000 UTC m=+2.813480185,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.695047 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef01135582 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container 
etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.652743042 +0000 UTC m=+2.912054213,LastTimestamp:2026-03-20 06:49:30.652743042 +0000 UTC m=+2.912054213,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: I0320 06:50:32.698405 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:50:32 crc kubenswrapper[5136]: I0320 06:50:32.698564 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:32 crc kubenswrapper[5136]: I0320 06:50:32.700556 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:32 crc kubenswrapper[5136]: I0320 06:50:32.700591 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:32 crc kubenswrapper[5136]: I0320 06:50:32.700600 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.701018 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e79ef01a7df4a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container 
kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.662477642 +0000 UTC m=+2.921788803,LastTimestamp:2026-03-20 06:49:30.662477642 +0000 UTC m=+2.921788803,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.704872 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e79ef02238c5f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.670582879 +0000 UTC m=+2.929894030,LastTimestamp:2026-03-20 06:49:30.670582879 +0000 UTC m=+2.929894030,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.709902 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e79ef02fb8fe1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.684739553 +0000 UTC m=+2.944050704,LastTimestamp:2026-03-20 06:49:30.684739553 +0000 UTC m=+2.944050704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.711954 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef03932db6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.694675894 +0000 UTC m=+2.953987035,LastTimestamp:2026-03-20 06:49:30.694675894 +0000 UTC m=+2.953987035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.714273 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e79ef03958942 openshift-kube-scheduler 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.694830402 +0000 UTC m=+2.954141553,LastTimestamp:2026-03-20 06:49:30.694830402 +0000 UTC m=+2.954141553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.718710 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e79ef03b48d8f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.696863119 +0000 UTC m=+2.956174270,LastTimestamp:2026-03-20 06:49:30.696863119 +0000 UTC m=+2.956174270,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.719653 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef0826b76f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.771453807 +0000 UTC m=+3.030764958,LastTimestamp:2026-03-20 06:49:30.771453807 +0000 UTC m=+3.030764958,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.725874 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef091ef484 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.787722372 +0000 UTC m=+3.047033523,LastTimestamp:2026-03-20 06:49:30.787722372 +0000 UTC m=+3.047033523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.730069 5136 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef093140c8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.788921544 +0000 UTC m=+3.048232695,LastTimestamp:2026-03-20 06:49:30.788921544 +0000 UTC m=+3.048232695,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.734431 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e79ef0e625861 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.876024929 +0000 UTC m=+3.135336120,LastTimestamp:2026-03-20 06:49:30.876024929 +0000 UTC 
m=+3.135336120,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.740494 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e79ef0f0d0ef3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.887212787 +0000 UTC m=+3.146523958,LastTimestamp:2026-03-20 06:49:30.887212787 +0000 UTC m=+3.146523958,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.747181 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e79ef0f27085d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.888915037 +0000 UTC m=+3.148226188,LastTimestamp:2026-03-20 06:49:30.888915037 +0000 UTC m=+3.148226188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.751264 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef1354b2b5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.959016629 +0000 UTC m=+3.218327780,LastTimestamp:2026-03-20 06:49:30.959016629 +0000 UTC m=+3.218327780,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.758575 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef143417c3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.973657027 +0000 UTC m=+3.232968178,LastTimestamp:2026-03-20 06:49:30.973657027 +0000 UTC m=+3.232968178,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.762647 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef144834f1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.974975217 +0000 UTC m=+3.234286368,LastTimestamp:2026-03-20 06:49:30.974975217 +0000 UTC m=+3.234286368,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.769108 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e79ef1adcb515 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.085370645 +0000 UTC m=+3.344681816,LastTimestamp:2026-03-20 06:49:31.085370645 +0000 UTC m=+3.344681816,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.776386 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e79ef1c776671 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.112285809 +0000 UTC m=+3.371596980,LastTimestamp:2026-03-20 06:49:31.112285809 +0000 UTC m=+3.371596980,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc 
kubenswrapper[5136]: E0320 06:50:32.782047 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef1e54a54c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.143562572 +0000 UTC m=+3.402873733,LastTimestamp:2026-03-20 06:49:31.143562572 +0000 UTC m=+3.402873733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.787691 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef1f385c1d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.158486045 +0000 UTC m=+3.417797196,LastTimestamp:2026-03-20 06:49:31.158486045 +0000 UTC 
m=+3.417797196,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.791580 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef1f52d4f5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.160220917 +0000 UTC m=+3.419532068,LastTimestamp:2026-03-20 06:49:31.160220917 +0000 UTC m=+3.419532068,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.795545 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef2cb02118 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.384439064 +0000 UTC m=+3.643750215,LastTimestamp:2026-03-20 06:49:31.384439064 +0000 UTC m=+3.643750215,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.799632 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef2d69b59a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.396601242 +0000 UTC m=+3.655912413,LastTimestamp:2026-03-20 06:49:31.396601242 +0000 UTC m=+3.655912413,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.803416 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef2d7f5733 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.398018867 +0000 UTC m=+3.657330028,LastTimestamp:2026-03-20 06:49:31.398018867 +0000 UTC m=+3.657330028,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.809908 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef30f85829 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.456280617 +0000 UTC m=+3.715591788,LastTimestamp:2026-03-20 06:49:31.456280617 +0000 UTC m=+3.715591788,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.815712 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef3b7fde92 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.632934546 +0000 UTC m=+3.892245697,LastTimestamp:2026-03-20 06:49:31.632934546 +0000 UTC m=+3.892245697,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.821667 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef3c1f0b8c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.643366284 +0000 UTC m=+3.902677435,LastTimestamp:2026-03-20 
06:49:31.643366284 +0000 UTC m=+3.902677435,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.825279 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef3fbdad47 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.704094023 +0000 UTC m=+3.963405174,LastTimestamp:2026-03-20 06:49:31.704094023 +0000 UTC m=+3.963405174,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.830701 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef40fda367 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.725063015 +0000 UTC m=+3.984374166,LastTimestamp:2026-03-20 06:49:31.725063015 +0000 UTC 
m=+3.984374166,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.834313 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef6e5abf3e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:32.48613971 +0000 UTC m=+4.745450901,LastTimestamp:2026-03-20 06:49:32.48613971 +0000 UTC m=+4.745450901,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.839236 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef79eba285 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:32.680184453 +0000 UTC 
m=+4.939495614,LastTimestamp:2026-03-20 06:49:32.680184453 +0000 UTC m=+4.939495614,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.843615 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef7adeae27 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:32.696112679 +0000 UTC m=+4.955423830,LastTimestamp:2026-03-20 06:49:32.696112679 +0000 UTC m=+4.955423830,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.848249 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef7aee2059 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:32.697124953 +0000 UTC m=+4.956436104,LastTimestamp:2026-03-20 06:49:32.697124953 +0000 UTC m=+4.956436104,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.852289 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef85fdb37e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:32.882695038 +0000 UTC m=+5.142006189,LastTimestamp:2026-03-20 06:49:32.882695038 +0000 UTC m=+5.142006189,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.857217 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef86e121a8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:32.897599912 +0000 UTC 
m=+5.156911063,LastTimestamp:2026-03-20 06:49:32.897599912 +0000 UTC m=+5.156911063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.863006 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef86eef33a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:32.89850553 +0000 UTC m=+5.157816681,LastTimestamp:2026-03-20 06:49:32.89850553 +0000 UTC m=+5.157816681,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.866326 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef9501c0f6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container 
etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:33.13461887 +0000 UTC m=+5.393930031,LastTimestamp:2026-03-20 06:49:33.13461887 +0000 UTC m=+5.393930031,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.870484 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef95e039fe openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:33.149198846 +0000 UTC m=+5.408510007,LastTimestamp:2026-03-20 06:49:33.149198846 +0000 UTC m=+5.408510007,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.874340 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef95f27e42 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:33.15039597 +0000 UTC m=+5.409707131,LastTimestamp:2026-03-20 06:49:33.15039597 +0000 UTC m=+5.409707131,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.880096 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79efa32ffaa5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:33.372529317 +0000 UTC m=+5.631840478,LastTimestamp:2026-03-20 06:49:33.372529317 +0000 UTC m=+5.631840478,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.883893 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79efa4256882 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:33.388613762 +0000 UTC m=+5.647924953,LastTimestamp:2026-03-20 06:49:33.388613762 +0000 UTC m=+5.647924953,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.887581 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79efa43cdb70 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:33.390150512 +0000 UTC m=+5.649461703,LastTimestamp:2026-03-20 06:49:33.390150512 +0000 UTC m=+5.649461703,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.893687 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79efb3077eb8 openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:33.638311608 +0000 UTC m=+5.897622759,LastTimestamp:2026-03-20 06:49:33.638311608 +0000 UTC m=+5.897622759,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.899500 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79efb3a6c41c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:33.648749596 +0000 UTC m=+5.908060787,LastTimestamp:2026-03-20 06:49:33.648749596 +0000 UTC m=+5.908060787,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.905805 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 20 06:50:32 crc kubenswrapper[5136]: &Event{ObjectMeta:{kube-controller-manager-crc.189e79f02dd31f26 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 20 06:50:32 crc kubenswrapper[5136]: body: Mar 20 06:50:32 crc kubenswrapper[5136]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:35.698476838 +0000 UTC m=+7.957788019,LastTimestamp:2026-03-20 06:49:35.698476838 +0000 UTC m=+7.957788019,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 06:50:32 crc kubenswrapper[5136]: > Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.912089 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79f02dd49507 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:35.698572551 +0000 UTC m=+7.957883742,LastTimestamp:2026-03-20 06:49:35.698572551 +0000 UTC 
m=+7.957883742,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.920101 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 20 06:50:32 crc kubenswrapper[5136]: &Event{ObjectMeta:{kube-apiserver-crc.189e79f1be8a47f1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 20 06:50:32 crc kubenswrapper[5136]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 20 06:50:32 crc kubenswrapper[5136]: Mar 20 06:50:32 crc kubenswrapper[5136]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:42.421366769 +0000 UTC m=+14.680677910,LastTimestamp:2026-03-20 06:49:42.421366769 +0000 UTC m=+14.680677910,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 06:50:32 crc kubenswrapper[5136]: > Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.925675 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79f1be8af9af openshift-kube-apiserver 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:42.421412271 +0000 UTC m=+14.680723422,LastTimestamp:2026-03-20 06:49:42.421412271 +0000 UTC m=+14.680723422,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.930561 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189e79f1be8a47f1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 20 06:50:32 crc kubenswrapper[5136]: &Event{ObjectMeta:{kube-apiserver-crc.189e79f1be8a47f1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 20 06:50:32 crc kubenswrapper[5136]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 20 06:50:32 crc kubenswrapper[5136]: Mar 20 06:50:32 crc kubenswrapper[5136]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:42.421366769 +0000 UTC m=+14.680677910,LastTimestamp:2026-03-20 06:49:42.429685138 +0000 UTC 
m=+14.688996289,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 06:50:32 crc kubenswrapper[5136]: > Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.936663 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189e79f1be8af9af\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79f1be8af9af openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:42.421412271 +0000 UTC m=+14.680723422,LastTimestamp:2026-03-20 06:49:42.429728139 +0000 UTC m=+14.689039290,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.940747 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 20 06:50:32 crc kubenswrapper[5136]: &Event{ObjectMeta:{kube-apiserver-crc.189e79f1f4415556 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe 
error: HTTP probe failed with statuscode: 500 Mar 20 06:50:32 crc kubenswrapper[5136]: body: [+]ping ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]log ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]etcd ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/openshift.io-api-request-count-filter ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/openshift.io-startkubeinformers ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/generic-apiserver-start-informers ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/priority-and-fairness-config-consumer ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/priority-and-fairness-filter ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/start-apiextensions-informers ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/start-apiextensions-controllers ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/crd-informer-synced ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/start-system-namespaces-controller ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/start-cluster-authentication-info-controller ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/start-legacy-token-tracking-controller ok Mar 20 06:50:32 crc 
kubenswrapper[5136]: [+]poststarthook/start-service-ip-repair-controllers ok Mar 20 06:50:32 crc kubenswrapper[5136]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Mar 20 06:50:32 crc kubenswrapper[5136]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/priority-and-fairness-config-producer ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/bootstrap-controller ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/start-kube-aggregator-informers ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/apiservice-status-local-available-controller ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/apiservice-status-remote-available-controller ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/apiservice-registration-controller ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/apiservice-wait-for-first-sync ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/apiservice-discovery-controller ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/kube-apiserver-autoregistration ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]autoregister-completion ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/apiservice-openapi-controller ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/apiservice-openapiv3-controller ok Mar 20 06:50:32 crc kubenswrapper[5136]: livez check failed Mar 20 06:50:32 crc kubenswrapper[5136]: Mar 20 06:50:32 crc kubenswrapper[5136]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:43.322555734 +0000 UTC m=+15.581866885,LastTimestamp:2026-03-20 06:49:43.322555734 +0000 UTC m=+15.581866885,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 
20 06:50:32 crc kubenswrapper[5136]: > Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.946457 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79f1f441dae5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:43.322589925 +0000 UTC m=+15.581901076,LastTimestamp:2026-03-20 06:49:43.322589925 +0000 UTC m=+15.581901076,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.950635 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189e79ef2d7f5733\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef2d7f5733 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.398018867 +0000 UTC m=+3.657330028,LastTimestamp:2026-03-20 06:49:43.536722103 +0000 UTC m=+15.796033264,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.956951 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 20 06:50:32 crc kubenswrapper[5136]: &Event{ObjectMeta:{kube-controller-manager-crc.189e79f281e92298 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 20 06:50:32 crc kubenswrapper[5136]: body: Mar 20 06:50:32 crc kubenswrapper[5136]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:45.699140248 +0000 UTC m=+17.958451409,LastTimestamp:2026-03-20 06:49:45.699140248 +0000 UTC m=+17.958451409,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 06:50:32 crc kubenswrapper[5136]: > Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.960664 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79f281e9d62c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:45.69918622 +0000 UTC m=+17.958497381,LastTimestamp:2026-03-20 06:49:45.69918622 +0000 UTC m=+17.958497381,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.972844 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e79f281e92298\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 20 06:50:32 crc kubenswrapper[5136]: &Event{ObjectMeta:{kube-controller-manager-crc.189e79f281e92298 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 20 06:50:32 crc kubenswrapper[5136]: body: Mar 20 06:50:32 
crc kubenswrapper[5136]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:45.699140248 +0000 UTC m=+17.958451409,LastTimestamp:2026-03-20 06:49:55.699408977 +0000 UTC m=+27.958720158,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 06:50:32 crc kubenswrapper[5136]: > Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.976846 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e79f281e9d62c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79f281e9d62c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:45.69918622 +0000 UTC m=+17.958497381,LastTimestamp:2026-03-20 06:49:55.69951956 +0000 UTC m=+27.958830751,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.981863 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79f4d62ecd4a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:55.702926666 +0000 UTC m=+27.962237857,LastTimestamp:2026-03-20 06:49:55.702926666 +0000 UTC m=+27.962237857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.986355 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e79eec091c540\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eec091c540 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.570510144 +0000 UTC m=+1.829821295,LastTimestamp:2026-03-20 06:49:55.817482932 +0000 UTC m=+28.076794123,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.991985 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e79eed175c3f5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eed175c3f5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.853887477 +0000 UTC m=+2.113198638,LastTimestamp:2026-03-20 06:49:56.00425945 +0000 UTC m=+28.263570601,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.996415 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e79eed247922d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eed247922d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container 
cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.867637293 +0000 UTC m=+2.126948454,LastTimestamp:2026-03-20 06:49:56.013387904 +0000 UTC m=+28.272699045,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:33 crc kubenswrapper[5136]: E0320 06:50:33.003006 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e79f281e92298\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 20 06:50:33 crc kubenswrapper[5136]: &Event{ObjectMeta:{kube-controller-manager-crc.189e79f281e92298 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 20 06:50:33 crc kubenswrapper[5136]: body: Mar 20 06:50:33 crc kubenswrapper[5136]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:45.699140248 +0000 UTC m=+17.958451409,LastTimestamp:2026-03-20 06:50:05.699256644 +0000 UTC m=+37.958567835,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 06:50:33 crc kubenswrapper[5136]: > Mar 20 06:50:33 crc kubenswrapper[5136]: E0320 06:50:33.008233 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e79f281e9d62c\" is forbidden: User 
\"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79f281e9d62c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:45.69918622 +0000 UTC m=+17.958497381,LastTimestamp:2026-03-20 06:50:05.699312445 +0000 UTC m=+37.958623626,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:33 crc kubenswrapper[5136]: E0320 06:50:33.014107 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e79f281e92298\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 20 06:50:33 crc kubenswrapper[5136]: &Event{ObjectMeta:{kube-controller-manager-crc.189e79f281e92298 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers) Mar 20 06:50:33 crc kubenswrapper[5136]: body: Mar 20 06:50:33 crc kubenswrapper[5136]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:45.699140248 +0000 UTC m=+17.958451409,LastTimestamp:2026-03-20 06:50:15.699582253 +0000 UTC m=+47.958893434,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 06:50:33 crc kubenswrapper[5136]: > Mar 20 06:50:33 crc kubenswrapper[5136]: I0320 06:50:33.345226 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:33 crc kubenswrapper[5136]: I0320 06:50:33.768455 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:50:33 crc kubenswrapper[5136]: I0320 06:50:33.768621 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:33 crc kubenswrapper[5136]: I0320 06:50:33.769208 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:50:33 crc kubenswrapper[5136]: I0320 06:50:33.770025 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:33 crc kubenswrapper[5136]: I0320 06:50:33.770234 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:33 crc kubenswrapper[5136]: I0320 06:50:33.770379 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:34 crc kubenswrapper[5136]: I0320 06:50:34.345614 5136 csi_plugin.go:884] Failed to contact API 
server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:34 crc kubenswrapper[5136]: I0320 06:50:34.703960 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:34 crc kubenswrapper[5136]: I0320 06:50:34.705038 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:34 crc kubenswrapper[5136]: I0320 06:50:34.705112 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:34 crc kubenswrapper[5136]: I0320 06:50:34.705133 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:35 crc kubenswrapper[5136]: I0320 06:50:35.345587 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:36 crc kubenswrapper[5136]: I0320 06:50:36.348910 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:37 crc kubenswrapper[5136]: I0320 06:50:37.345926 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:37 crc kubenswrapper[5136]: I0320 06:50:37.851040 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:37 crc kubenswrapper[5136]: I0320 
06:50:37.852292 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:37 crc kubenswrapper[5136]: I0320 06:50:37.852331 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:37 crc kubenswrapper[5136]: I0320 06:50:37.852346 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:37 crc kubenswrapper[5136]: I0320 06:50:37.852397 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 06:50:37 crc kubenswrapper[5136]: E0320 06:50:37.856507 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 20 06:50:37 crc kubenswrapper[5136]: E0320 06:50:37.857027 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 20 06:50:38 crc kubenswrapper[5136]: I0320 06:50:38.349015 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:38 crc kubenswrapper[5136]: I0320 06:50:38.396055 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:38 crc kubenswrapper[5136]: I0320 06:50:38.398011 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:38 crc kubenswrapper[5136]: I0320 06:50:38.398060 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 20 06:50:38 crc kubenswrapper[5136]: I0320 06:50:38.398076 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:38 crc kubenswrapper[5136]: I0320 06:50:38.398890 5136 scope.go:117] "RemoveContainer" containerID="f25fb30a68fee9c11b53a300d7d79a5fc097ea8dfca4b8f12cad9a41189e516c" Mar 20 06:50:38 crc kubenswrapper[5136]: E0320 06:50:38.478329 5136 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 20 06:50:38 crc kubenswrapper[5136]: W0320 06:50:38.506750 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 20 06:50:38 crc kubenswrapper[5136]: E0320 06:50:38.506800 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 20 06:50:38 crc kubenswrapper[5136]: I0320 06:50:38.717635 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 20 06:50:38 crc kubenswrapper[5136]: I0320 06:50:38.719610 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2"} Mar 20 06:50:38 crc kubenswrapper[5136]: I0320 06:50:38.719744 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:38 crc kubenswrapper[5136]: I0320 06:50:38.720904 5136 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:38 crc kubenswrapper[5136]: I0320 06:50:38.720933 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:38 crc kubenswrapper[5136]: I0320 06:50:38.720945 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:39 crc kubenswrapper[5136]: I0320 06:50:39.346167 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:39 crc kubenswrapper[5136]: I0320 06:50:39.724246 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 20 06:50:39 crc kubenswrapper[5136]: I0320 06:50:39.724837 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 20 06:50:39 crc kubenswrapper[5136]: I0320 06:50:39.726472 5136 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2" exitCode=255 Mar 20 06:50:39 crc kubenswrapper[5136]: I0320 06:50:39.726499 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2"} Mar 20 06:50:39 crc kubenswrapper[5136]: I0320 06:50:39.726540 5136 scope.go:117] "RemoveContainer" containerID="f25fb30a68fee9c11b53a300d7d79a5fc097ea8dfca4b8f12cad9a41189e516c" Mar 20 06:50:39 
crc kubenswrapper[5136]: I0320 06:50:39.726679 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:39 crc kubenswrapper[5136]: I0320 06:50:39.727715 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:39 crc kubenswrapper[5136]: I0320 06:50:39.727751 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:39 crc kubenswrapper[5136]: I0320 06:50:39.727764 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:39 crc kubenswrapper[5136]: I0320 06:50:39.730975 5136 scope.go:117] "RemoveContainer" containerID="74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2" Mar 20 06:50:39 crc kubenswrapper[5136]: E0320 06:50:39.731217 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 06:50:39 crc kubenswrapper[5136]: I0320 06:50:39.764880 5136 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 20 06:50:39 crc kubenswrapper[5136]: I0320 06:50:39.779263 5136 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 20 06:50:40 crc kubenswrapper[5136]: I0320 06:50:40.092648 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:50:40 crc kubenswrapper[5136]: I0320 06:50:40.349474 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode 
publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:40 crc kubenswrapper[5136]: I0320 06:50:40.729957 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 20 06:50:40 crc kubenswrapper[5136]: I0320 06:50:40.731325 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:40 crc kubenswrapper[5136]: I0320 06:50:40.732059 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:40 crc kubenswrapper[5136]: I0320 06:50:40.732091 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:40 crc kubenswrapper[5136]: I0320 06:50:40.732104 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:40 crc kubenswrapper[5136]: I0320 06:50:40.732592 5136 scope.go:117] "RemoveContainer" containerID="74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2" Mar 20 06:50:40 crc kubenswrapper[5136]: E0320 06:50:40.732772 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 06:50:41 crc kubenswrapper[5136]: I0320 06:50:41.347291 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Mar 20 06:50:42 crc kubenswrapper[5136]: I0320 06:50:42.348897 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:43 crc kubenswrapper[5136]: I0320 06:50:43.345569 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:43 crc kubenswrapper[5136]: I0320 06:50:43.379457 5136 csr.go:261] certificate signing request csr-9qcnf is approved, waiting to be issued Mar 20 06:50:43 crc kubenswrapper[5136]: I0320 06:50:43.389408 5136 csr.go:257] certificate signing request csr-9qcnf is issued Mar 20 06:50:43 crc kubenswrapper[5136]: I0320 06:50:43.403666 5136 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 20 06:50:43 crc kubenswrapper[5136]: I0320 06:50:43.914726 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:50:43 crc kubenswrapper[5136]: I0320 06:50:43.914935 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:43 crc kubenswrapper[5136]: I0320 06:50:43.919407 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:43 crc kubenswrapper[5136]: I0320 06:50:43.919471 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:43 crc kubenswrapper[5136]: I0320 06:50:43.919486 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:43 crc kubenswrapper[5136]: I0320 06:50:43.922069 5136 
scope.go:117] "RemoveContainer" containerID="74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2" Mar 20 06:50:43 crc kubenswrapper[5136]: E0320 06:50:43.922466 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.202118 5136 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.391944 5136 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-03 00:20:41.686041867 +0000 UTC Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.391989 5136 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6929h29m57.294056471s for next certificate rotation Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.857045 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.858233 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.858272 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.858285 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.858382 5136 kubelet_node_status.go:76] 
"Attempting to register node" node="crc" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.868267 5136 kubelet_node_status.go:115] "Node was previously registered" node="crc" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.868548 5136 kubelet_node_status.go:79] "Successfully registered node" node="crc" Mar 20 06:50:44 crc kubenswrapper[5136]: E0320 06:50:44.868565 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.873019 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.873109 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.873156 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.873199 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.873223 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:44Z","lastTransitionTime":"2026-03-20T06:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:44 crc kubenswrapper[5136]: E0320 06:50:44.893883 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.903711 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.903752 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.903761 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.903777 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.903788 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:44Z","lastTransitionTime":"2026-03-20T06:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:44 crc kubenswrapper[5136]: E0320 06:50:44.913256 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"[status patch payload identical to the preceding attempt]\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.919404 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.919437 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.919446 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.919459 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.919470 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:44Z","lastTransitionTime":"2026-03-20T06:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:44 crc kubenswrapper[5136]: E0320 06:50:44.932495 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"[status patch payload identical to the preceding attempt]\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.942850 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.942893 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.942933 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.942948 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.942958 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:44Z","lastTransitionTime":"2026-03-20T06:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:44 crc kubenswrapper[5136]: E0320 06:50:44.955119 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:44 crc kubenswrapper[5136]: E0320 06:50:44.955234 5136 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 06:50:44 crc kubenswrapper[5136]: E0320 06:50:44.955260 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:45 crc kubenswrapper[5136]: E0320 06:50:45.055483 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:45 crc kubenswrapper[5136]: E0320 06:50:45.155861 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:45 crc kubenswrapper[5136]: E0320 06:50:45.257027 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:45 crc kubenswrapper[5136]: E0320 06:50:45.357185 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:45 crc kubenswrapper[5136]: E0320 06:50:45.458166 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:45 crc kubenswrapper[5136]: E0320 06:50:45.558679 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:45 crc kubenswrapper[5136]: E0320 06:50:45.658837 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:45 crc kubenswrapper[5136]: I0320 06:50:45.732502 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:50:45 crc 
kubenswrapper[5136]: I0320 06:50:45.732647 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:45 crc kubenswrapper[5136]: I0320 06:50:45.733685 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:45 crc kubenswrapper[5136]: I0320 06:50:45.733750 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:45 crc kubenswrapper[5136]: I0320 06:50:45.733771 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:45 crc kubenswrapper[5136]: E0320 06:50:45.758926 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:45 crc kubenswrapper[5136]: E0320 06:50:45.859095 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:45 crc kubenswrapper[5136]: E0320 06:50:45.959514 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:46 crc kubenswrapper[5136]: E0320 06:50:46.060649 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:46 crc kubenswrapper[5136]: E0320 06:50:46.161184 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:46 crc kubenswrapper[5136]: E0320 06:50:46.261338 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:46 crc kubenswrapper[5136]: E0320 06:50:46.361888 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:46 crc kubenswrapper[5136]: E0320 06:50:46.462659 5136 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Mar 20 06:50:46 crc kubenswrapper[5136]: E0320 06:50:46.563292 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:46 crc kubenswrapper[5136]: E0320 06:50:46.663865 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:46 crc kubenswrapper[5136]: E0320 06:50:46.764600 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:46 crc kubenswrapper[5136]: E0320 06:50:46.865730 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:46 crc kubenswrapper[5136]: E0320 06:50:46.966030 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:47 crc kubenswrapper[5136]: E0320 06:50:47.066177 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:47 crc kubenswrapper[5136]: E0320 06:50:47.167095 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:47 crc kubenswrapper[5136]: E0320 06:50:47.268277 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:47 crc kubenswrapper[5136]: E0320 06:50:47.369436 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:47 crc kubenswrapper[5136]: E0320 06:50:47.470489 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:47 crc kubenswrapper[5136]: E0320 06:50:47.571288 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:47 crc kubenswrapper[5136]: E0320 06:50:47.672182 5136 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:47 crc kubenswrapper[5136]: E0320 06:50:47.772541 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:47 crc kubenswrapper[5136]: E0320 06:50:47.872659 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:48 crc kubenswrapper[5136]: E0320 06:50:47.973490 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:48 crc kubenswrapper[5136]: E0320 06:50:48.074312 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:48 crc kubenswrapper[5136]: E0320 06:50:48.174616 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:48 crc kubenswrapper[5136]: E0320 06:50:48.275367 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:48 crc kubenswrapper[5136]: E0320 06:50:48.375596 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:48 crc kubenswrapper[5136]: E0320 06:50:48.476112 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:48 crc kubenswrapper[5136]: E0320 06:50:48.479471 5136 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 20 06:50:48 crc kubenswrapper[5136]: E0320 06:50:48.577005 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:48 crc kubenswrapper[5136]: E0320 06:50:48.678173 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:48 crc 
kubenswrapper[5136]: E0320 06:50:48.779222 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:48 crc kubenswrapper[5136]: E0320 06:50:48.880153 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:48 crc kubenswrapper[5136]: E0320 06:50:48.980510 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:49 crc kubenswrapper[5136]: E0320 06:50:49.080659 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:49 crc kubenswrapper[5136]: E0320 06:50:49.181920 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:49 crc kubenswrapper[5136]: E0320 06:50:49.282276 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:49 crc kubenswrapper[5136]: E0320 06:50:49.382877 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:49 crc kubenswrapper[5136]: E0320 06:50:49.483541 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:49 crc kubenswrapper[5136]: E0320 06:50:49.583990 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:49 crc kubenswrapper[5136]: E0320 06:50:49.685003 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:49 crc kubenswrapper[5136]: E0320 06:50:49.785884 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:49 crc kubenswrapper[5136]: I0320 06:50:49.796060 5136 reflector.go:368] Caches populated for *v1.CSIDriver from 
k8s.io/client-go/informers/factory.go:160 Mar 20 06:50:49 crc kubenswrapper[5136]: E0320 06:50:49.886187 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:49 crc kubenswrapper[5136]: E0320 06:50:49.987337 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:50 crc kubenswrapper[5136]: E0320 06:50:50.088346 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:50 crc kubenswrapper[5136]: I0320 06:50:50.148436 5136 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 20 06:50:50 crc kubenswrapper[5136]: E0320 06:50:50.189526 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:50 crc kubenswrapper[5136]: E0320 06:50:50.290305 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:50 crc kubenswrapper[5136]: E0320 06:50:50.390913 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:50 crc kubenswrapper[5136]: E0320 06:50:50.491445 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:50 crc kubenswrapper[5136]: E0320 06:50:50.592562 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:50 crc kubenswrapper[5136]: E0320 06:50:50.692757 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:50 crc kubenswrapper[5136]: E0320 06:50:50.793020 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:50 crc kubenswrapper[5136]: E0320 06:50:50.893262 5136 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:50 crc kubenswrapper[5136]: E0320 06:50:50.993777 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:51 crc kubenswrapper[5136]: E0320 06:50:51.094602 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:51 crc kubenswrapper[5136]: E0320 06:50:51.194721 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:51 crc kubenswrapper[5136]: E0320 06:50:51.295915 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:51 crc kubenswrapper[5136]: I0320 06:50:51.396135 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:51 crc kubenswrapper[5136]: E0320 06:50:51.397044 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:51 crc kubenswrapper[5136]: I0320 06:50:51.397569 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:51 crc kubenswrapper[5136]: I0320 06:50:51.397620 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:51 crc kubenswrapper[5136]: I0320 06:50:51.397639 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:51 crc kubenswrapper[5136]: E0320 06:50:51.497156 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:51 crc kubenswrapper[5136]: E0320 06:50:51.597747 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:51 crc 
kubenswrapper[5136]: E0320 06:50:51.698881 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:51 crc kubenswrapper[5136]: E0320 06:50:51.800002 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:51 crc kubenswrapper[5136]: E0320 06:50:51.901164 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.002292 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.103267 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.156726 5136 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.206381 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.206458 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.206478 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.206903 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.207119 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:52Z","lastTransitionTime":"2026-03-20T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.310588 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.310635 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.310653 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.310677 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.310693 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:52Z","lastTransitionTime":"2026-03-20T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.376866 5136 apiserver.go:52] "Watching apiserver" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.382674 5136 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.383067 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.383754 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.383886 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.383968 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.384036 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.384114 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.383883 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.384478 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.384866 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.384953 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.389575 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.389672 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.389737 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.389756 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.389862 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.390077 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.390336 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.390539 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.390962 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.413582 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:52 crc 
kubenswrapper[5136]: I0320 06:50:52.413638 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.413656 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.413682 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.413702 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:52Z","lastTransitionTime":"2026-03-20T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.416494 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.431937 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.444609 5136 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.448144 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.467047 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478134 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478197 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478229 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod 
\"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478261 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478290 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478320 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478351 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478380 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478448 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478479 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478539 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478569 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478600 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478654 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod 
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478723 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478759 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478790 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478927 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478958 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478987 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479021 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479053 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479084 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479119 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479151 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479157 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479320 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479159 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479181 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479436 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479456 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479693 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479765 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479850 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479914 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479965 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479939 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480013 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480065 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480116 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480162 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480207 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480266 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480321 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480376 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480424 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480436 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480538 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480598 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480623 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480656 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480681 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480713 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480737 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480759 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480784 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480809 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 20 06:50:52 
crc kubenswrapper[5136]: I0320 06:50:52.480865 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480887 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480909 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480931 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480956 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480979 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481003 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481037 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481059 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481091 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481115 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 20 06:50:52 crc 
kubenswrapper[5136]: I0320 06:50:52.481139 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481207 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481231 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481259 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481282 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481304 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481327 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481349 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481370 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481395 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481416 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481438 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481464 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481487 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481511 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481533 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481556 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" 
(UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481592 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481614 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481652 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481681 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481709 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481733 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481760 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481785 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481807 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481854 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481878 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Mar 20 06:50:52 crc 
kubenswrapper[5136]: I0320 06:50:52.481900 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481972 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481996 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482033 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482056 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482079 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482102 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482126 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482150 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482173 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482197 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482221 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482247 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482271 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482293 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482315 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482339 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482362 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482387 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482412 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482435 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482457 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482482 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482516 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482548 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482574 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482600 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482624 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482648 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482681 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482714 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482752 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482876 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482903 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482927 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.483066 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.483226 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.483254 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.483282 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480623 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480911 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480782 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480962 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480987 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481061 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481083 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481121 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481117 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.483655 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481218 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481361 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481446 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481482 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481523 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481902 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481971 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482216 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482235 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482509 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482801 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.483233 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.483279 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.483734 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.484108 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.483913 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.484318 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.484689 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.484898 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.485059 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.485009 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.485401 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.485480 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.485504 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.485912 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.486231 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.483306 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.486386 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.486449 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.486496 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.486539 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.486580 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.486617 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.486656 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.486692 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.486738 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.486792 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 20 06:50:52 
crc kubenswrapper[5136]: I0320 06:50:52.486939 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.486997 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487050 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487229 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487303 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487370 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487433 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487495 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487552 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487629 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487691 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487754 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487852 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487910 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487966 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.488021 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.488073 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.488134 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.488310 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.488393 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.488617 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.488683 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.488729 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.488767 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.488805 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.488945 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489000 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489054 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489115 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489171 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489225 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489275 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489329 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod 
\"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489386 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489442 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489483 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489526 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489572 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489621 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489673 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489726 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489782 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489876 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489935 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489991 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490046 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490105 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490161 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490222 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490275 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490333 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490388 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490446 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490505 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490562 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 20 06:50:52 
crc kubenswrapper[5136]: I0320 06:50:52.490657 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490734 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490810 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490941 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490996 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: 
\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491051 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491099 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491152 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491190 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491236 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491279 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491320 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491360 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491441 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491593 5136 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491638 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491662 5136 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491685 5136 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491706 5136 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491747 5136 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491790 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491858 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491884 5136 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491907 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491930 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491952 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491994 5136 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492027 5136 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492058 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492087 5136 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492119 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492152 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492185 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492223 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492258 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492290 5136 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on 
node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492320 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492349 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492374 5136 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492396 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492419 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492441 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492464 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on 
node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492487 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492512 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492533 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492555 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492577 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492598 5136 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492621 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492646 5136 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492695 5136 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492734 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492765 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492788 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498975 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487224 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487923 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487980 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487997 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.488068 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489069 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489489 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489517 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490056 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490080 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490277 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490410 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490689 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491277 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491400 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491444 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491455 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492017 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492366 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492390 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491233 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492700 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492665 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492916 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.493038 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.500088 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.493229 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.493373 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.493768 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.493787 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.494426 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.494513 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.494479 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.494931 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.495103 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.495194 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.495268 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.495432 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.495492 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.495612 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.495652 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.494895 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.496140 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.496499 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.496582 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.497161 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.497201 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.497233 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.497394 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.497546 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.497541 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.497558 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.497590 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.497931 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498017 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498012 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498069 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498071 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498079 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.497888 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498144 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498336 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498416 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498573 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498666 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498716 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498935 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498971 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.499120 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.499583 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.500371 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.500544 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:50:53.00051455 +0000 UTC m=+85.259825771 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.500992 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.501136 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.501320 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.501345 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.501792 5136 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.501960 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:50:53.001928488 +0000 UTC m=+85.261239679 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.502065 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.502159 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.502509 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.502639 5136 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.502695 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:50:53.002675182 +0000 UTC m=+85.261986343 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.502768 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.503010 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.505954 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.506063 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.506497 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.506512 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.506583 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.507170 5136 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.509128 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.509214 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.509566 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.515389 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.517272 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.517360 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.518713 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.519715 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: 
"kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.519797 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.520147 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.520199 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.520196 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.520220 5136 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.520312 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 06:50:53.020282885 +0000 UTC m=+85.279594066 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.520561 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.520652 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.520675 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.521255 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.522200 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.522232 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.522246 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.522290 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.522305 5136 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:52Z","lastTransitionTime":"2026-03-20T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.522669 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.523632 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.523884 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.523924 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.524071 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.531037 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.531074 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.531090 5136 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.531085 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.531187 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 06:50:53.031166432 +0000 UTC m=+85.290477683 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.532998 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.533040 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.533342 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.533407 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.533653 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.534038 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.534078 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.534387 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.534626 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.533810 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.534732 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.534773 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.534978 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.535075 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.535934 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.536165 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.536222 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.536392 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.536663 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.536681 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.536857 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.536881 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.537338 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.537345 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.539603 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.539682 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.540148 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.540455 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.541000 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.541588 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.541684 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.541792 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.541791 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.542066 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.542343 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.542600 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.542653 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.542781 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.542864 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.542911 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.542926 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.543089 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.543421 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.543543 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.543560 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.543699 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.544251 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.544432 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.544545 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.545298 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.545506 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.545880 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.546232 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.546403 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.549879 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.559686 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.566609 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.570045 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.575364 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.593947 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594059 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594065 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594203 5136 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594241 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594251 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594347 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594367 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594381 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594396 5136 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594409 5136 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594421 5136 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594434 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594447 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594460 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594473 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594485 5136 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594497 5136 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594510 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594522 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594534 5136 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594548 5136 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594562 5136 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594574 5136 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594585 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594598 5136 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594612 5136 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594624 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594636 5136 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594648 5136 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594659 5136 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594672 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594683 5136 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594695 5136 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594706 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594719 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594731 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594742 5136 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594755 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594768 5136 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594779 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594790 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594801 5136 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594830 5136 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594843 5136 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594855 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594866 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594877 5136 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594888 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594901 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594915 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594967 5136 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594980 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594992 5136 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595003 5136 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595015 5136 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595027 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595040 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595053 5136 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595066 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595078 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595091 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595103 5136 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595142 5136 reconciler_common.go:293] "Volume detached for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595154 5136 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595166 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595179 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595191 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595203 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595216 5136 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595229 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595243 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595256 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595270 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595283 5136 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595294 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595306 5136 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595318 5136 reconciler_common.go:293] "Volume detached for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595330 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595341 5136 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595354 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595365 5136 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595376 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595388 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595400 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" 
DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595411 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595422 5136 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595433 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595444 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595456 5136 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595466 5136 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595478 5136 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595490 5136 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595503 5136 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595516 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595528 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595539 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595552 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595563 5136 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595577 5136 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595588 5136 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595600 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595611 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595623 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595636 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595648 5136 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595660 5136 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595672 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595684 5136 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595696 5136 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595709 5136 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595721 5136 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595733 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595745 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595756 5136 
reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595767 5136 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595779 5136 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595790 5136 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595800 5136 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595834 5136 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595847 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595860 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595871 5136 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595882 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595894 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595905 5136 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595917 5136 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595928 5136 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595942 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 
06:50:52.595954 5136 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595965 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595976 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595987 5136 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595999 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596009 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596019 5136 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596030 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596042 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596053 5136 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596064 5136 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596076 5136 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596087 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596098 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596109 5136 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc 
kubenswrapper[5136]: I0320 06:50:52.596120 5136 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596132 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596144 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596156 5136 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596167 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596179 5136 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596189 5136 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596201 5136 reconciler_common.go:293] "Volume 
detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596213 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596225 5136 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596237 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596248 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596260 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.625686 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.625759 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.625772 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.625789 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.625801 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:52Z","lastTransitionTime":"2026-03-20T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.706527 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.720947 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:50:52 crc kubenswrapper[5136]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 20 06:50:52 crc kubenswrapper[5136]: set -o allexport Mar 20 06:50:52 crc kubenswrapper[5136]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 20 06:50:52 crc kubenswrapper[5136]: source /etc/kubernetes/apiserver-url.env Mar 20 06:50:52 crc kubenswrapper[5136]: else Mar 20 06:50:52 crc kubenswrapper[5136]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 20 06:50:52 crc kubenswrapper[5136]: exit 1 Mar 20 06:50:52 crc kubenswrapper[5136]: fi Mar 20 06:50:52 crc kubenswrapper[5136]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 20 06:50:52 crc kubenswrapper[5136]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:50:52 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.722294 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.722304 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.729070 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.729101 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.729110 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.729147 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.729159 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:52Z","lastTransitionTime":"2026-03-20T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.730376 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 06:50:52 crc kubenswrapper[5136]: W0320 06:50:52.734550 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-fa291c4c5f41e65eb95b55917ede458ae6bd30c47ca6de7b21c34bb044812050 WatchSource:0}: Error finding container fa291c4c5f41e65eb95b55917ede458ae6bd30c47ca6de7b21c34bb044812050: Status 404 returned error can't find the container with id fa291c4c5f41e65eb95b55917ede458ae6bd30c47ca6de7b21c34bb044812050 Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.738117 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.739312 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 20 06:50:52 crc kubenswrapper[5136]: W0320 06:50:52.744712 5136 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-38c2f861a45bd75124962230c84212a9e761dffc74406bc887dc863b71c3dc04 WatchSource:0}: Error finding container 38c2f861a45bd75124962230c84212a9e761dffc74406bc887dc863b71c3dc04: Status 404 returned error can't find the container with id 38c2f861a45bd75124962230c84212a9e761dffc74406bc887dc863b71c3dc04 Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.747529 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:50:52 crc kubenswrapper[5136]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 20 06:50:52 crc kubenswrapper[5136]: if [[ -f "/env/_master" ]]; then Mar 20 06:50:52 crc kubenswrapper[5136]: set -o allexport Mar 20 06:50:52 crc kubenswrapper[5136]: source "/env/_master" Mar 20 06:50:52 crc kubenswrapper[5136]: set +o allexport Mar 20 06:50:52 crc kubenswrapper[5136]: fi Mar 20 06:50:52 crc kubenswrapper[5136]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Mar 20 06:50:52 crc kubenswrapper[5136]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 20 06:50:52 crc kubenswrapper[5136]: ho_enable="--enable-hybrid-overlay" Mar 20 06:50:52 crc kubenswrapper[5136]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 20 06:50:52 crc kubenswrapper[5136]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 20 06:50:52 crc kubenswrapper[5136]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 20 06:50:52 crc kubenswrapper[5136]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 20 06:50:52 crc kubenswrapper[5136]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 20 06:50:52 crc kubenswrapper[5136]: --webhook-host=127.0.0.1 \ Mar 20 06:50:52 crc kubenswrapper[5136]: --webhook-port=9743 \ Mar 20 06:50:52 crc kubenswrapper[5136]: ${ho_enable} \ Mar 20 06:50:52 crc kubenswrapper[5136]: --enable-interconnect \ Mar 20 06:50:52 crc kubenswrapper[5136]: --disable-approver \ Mar 20 06:50:52 crc kubenswrapper[5136]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 20 06:50:52 crc kubenswrapper[5136]: --wait-for-kubernetes-api=200s \ Mar 20 06:50:52 crc kubenswrapper[5136]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 20 06:50:52 crc kubenswrapper[5136]: --loglevel="${LOGLEVEL}" Mar 20 06:50:52 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:50:52 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.749367 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:50:52 crc kubenswrapper[5136]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 20 06:50:52 crc 
kubenswrapper[5136]: if [[ -f "/env/_master" ]]; then Mar 20 06:50:52 crc kubenswrapper[5136]: set -o allexport Mar 20 06:50:52 crc kubenswrapper[5136]: source "/env/_master" Mar 20 06:50:52 crc kubenswrapper[5136]: set +o allexport Mar 20 06:50:52 crc kubenswrapper[5136]: fi Mar 20 06:50:52 crc kubenswrapper[5136]: Mar 20 06:50:52 crc kubenswrapper[5136]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 20 06:50:52 crc kubenswrapper[5136]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 20 06:50:52 crc kubenswrapper[5136]: --disable-webhook \ Mar 20 06:50:52 crc kubenswrapper[5136]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 20 06:50:52 crc kubenswrapper[5136]: --loglevel="${LOGLEVEL}" Mar 20 06:50:52 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:50:52 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.751230 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.761957 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"38c2f861a45bd75124962230c84212a9e761dffc74406bc887dc863b71c3dc04"} Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.762949 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"fa291c4c5f41e65eb95b55917ede458ae6bd30c47ca6de7b21c34bb044812050"} Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.763409 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:50:52 crc kubenswrapper[5136]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 20 06:50:52 crc kubenswrapper[5136]: if [[ -f "/env/_master" ]]; then Mar 20 06:50:52 crc kubenswrapper[5136]: set -o allexport Mar 20 06:50:52 crc kubenswrapper[5136]: source "/env/_master" Mar 20 06:50:52 crc kubenswrapper[5136]: set +o allexport Mar 20 06:50:52 crc kubenswrapper[5136]: fi Mar 20 06:50:52 crc kubenswrapper[5136]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Mar 20 06:50:52 crc kubenswrapper[5136]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 20 06:50:52 crc kubenswrapper[5136]: ho_enable="--enable-hybrid-overlay" Mar 20 06:50:52 crc kubenswrapper[5136]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 20 06:50:52 crc kubenswrapper[5136]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 20 06:50:52 crc kubenswrapper[5136]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 20 06:50:52 crc kubenswrapper[5136]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 20 06:50:52 crc kubenswrapper[5136]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 20 06:50:52 crc kubenswrapper[5136]: --webhook-host=127.0.0.1 \ Mar 20 06:50:52 crc kubenswrapper[5136]: --webhook-port=9743 \ Mar 20 06:50:52 crc kubenswrapper[5136]: ${ho_enable} \ Mar 20 06:50:52 crc kubenswrapper[5136]: --enable-interconnect \ Mar 20 06:50:52 crc kubenswrapper[5136]: --disable-approver \ Mar 20 06:50:52 crc kubenswrapper[5136]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 20 06:50:52 crc kubenswrapper[5136]: --wait-for-kubernetes-api=200s \ Mar 20 06:50:52 crc kubenswrapper[5136]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 20 06:50:52 crc kubenswrapper[5136]: --loglevel="${LOGLEVEL}" Mar 20 06:50:52 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:50:52 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.764420 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b39a0006d55ebea3b75736ad7b0bd012e2e8b0bc106d0b1827465768a33096d9"} Mar 20 06:50:52 crc 
kubenswrapper[5136]: E0320 06:50:52.764585 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.765270 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:50:52 crc kubenswrapper[5136]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 20 06:50:52 crc kubenswrapper[5136]: if [[ -f "/env/_master" ]]; then Mar 20 06:50:52 crc kubenswrapper[5136]: set -o allexport Mar 20 06:50:52 crc kubenswrapper[5136]: source "/env/_master" Mar 20 06:50:52 crc kubenswrapper[5136]: set +o allexport Mar 20 06:50:52 crc kubenswrapper[5136]: fi Mar 20 06:50:52 crc kubenswrapper[5136]: Mar 20 06:50:52 crc kubenswrapper[5136]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 20 06:50:52 crc kubenswrapper[5136]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 20 06:50:52 crc kubenswrapper[5136]: --disable-webhook \ Mar 20 06:50:52 crc kubenswrapper[5136]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 20 06:50:52 crc kubenswrapper[5136]: --loglevel="${LOGLEVEL}" Mar 20 06:50:52 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:50:52 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.765576 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:50:52 crc kubenswrapper[5136]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 20 06:50:52 crc kubenswrapper[5136]: set -o allexport Mar 20 06:50:52 crc kubenswrapper[5136]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 20 06:50:52 crc 
kubenswrapper[5136]: source /etc/kubernetes/apiserver-url.env Mar 20 06:50:52 crc kubenswrapper[5136]: else Mar 20 06:50:52 crc kubenswrapper[5136]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 20 06:50:52 crc kubenswrapper[5136]: exit 1 Mar 20 06:50:52 crc kubenswrapper[5136]: fi Mar 20 06:50:52 crc kubenswrapper[5136]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 20 06:50:52 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c
69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:50:52 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.765804 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" 
with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.766793 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.766865 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.773035 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.786401 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.796443 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.813558 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.824992 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.831651 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.831842 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.831951 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.832035 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.832115 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:52Z","lastTransitionTime":"2026-03-20T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.834549 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.847094 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.857169 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.872729 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.886310 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.897774 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.912780 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.935156 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.935198 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.935210 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.935230 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.935243 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:52Z","lastTransitionTime":"2026-03-20T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.037220 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.037253 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.037261 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.037276 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.037287 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:53Z","lastTransitionTime":"2026-03-20T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.101206 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:50:53 crc kubenswrapper[5136]: E0320 06:50:53.101302 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:50:54.101279701 +0000 UTC m=+86.360590862 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.101431 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:50:53 crc kubenswrapper[5136]: E0320 06:50:53.101553 5136 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 
06:50:53.101570 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:50:53 crc kubenswrapper[5136]: E0320 06:50:53.101601 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:50:54.10159063 +0000 UTC m=+86.360901791 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.101634 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.101694 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:50:53 crc 
kubenswrapper[5136]: E0320 06:50:53.101865 5136 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:50:53 crc kubenswrapper[5136]: E0320 06:50:53.101889 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:50:53 crc kubenswrapper[5136]: E0320 06:50:53.101938 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:50:53 crc kubenswrapper[5136]: E0320 06:50:53.101956 5136 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:50:53 crc kubenswrapper[5136]: E0320 06:50:53.101972 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:50:54.101941012 +0000 UTC m=+86.361252213 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:50:53 crc kubenswrapper[5136]: E0320 06:50:53.102065 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:50:53 crc kubenswrapper[5136]: E0320 06:50:53.102112 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:50:53 crc kubenswrapper[5136]: E0320 06:50:53.102196 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 06:50:54.10217101 +0000 UTC m=+86.361482261 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:50:53 crc kubenswrapper[5136]: E0320 06:50:53.102197 5136 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:50:53 crc kubenswrapper[5136]: E0320 06:50:53.102395 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 06:50:54.102375167 +0000 UTC m=+86.361686358 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.140342 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.140741 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.140975 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.141192 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.141387 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:53Z","lastTransitionTime":"2026-03-20T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.244491 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.244536 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.244548 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.244565 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.244577 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:53Z","lastTransitionTime":"2026-03-20T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.347568 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.347605 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.347613 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.347626 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.347634 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:53Z","lastTransitionTime":"2026-03-20T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.450723 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.451159 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.451352 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.451532 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.451703 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:53Z","lastTransitionTime":"2026-03-20T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.553971 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.555175 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.555318 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.555453 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.555592 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:53Z","lastTransitionTime":"2026-03-20T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.658652 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.658696 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.658706 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.658721 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.658733 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:53Z","lastTransitionTime":"2026-03-20T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.761515 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.761564 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.761576 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.761593 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.761605 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:53Z","lastTransitionTime":"2026-03-20T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.864640 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.864686 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.864696 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.864713 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.864725 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:53Z","lastTransitionTime":"2026-03-20T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.967499 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.967774 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.967859 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.967924 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.967985 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:53Z","lastTransitionTime":"2026-03-20T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.071510 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.071614 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.071636 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.071666 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.071687 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:54Z","lastTransitionTime":"2026-03-20T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.112038 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.112152 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.112195 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.112230 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.112284 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.112496 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.112523 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.112542 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.112554 5136 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.112596 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 06:50:56.112582148 +0000 UTC m=+88.371893299 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.112544 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.112612 5136 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.112666 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:50:56.11264939 +0000 UTC m=+88.371960571 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.112615 5136 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.112721 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 06:50:56.112709052 +0000 UTC m=+88.372020243 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.112795 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:50:56.112785075 +0000 UTC m=+88.372096256 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.113209 5136 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.113364 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:50:56.113341043 +0000 UTC m=+88.372652204 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.174540 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.174592 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.174603 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.174621 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.174633 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:54Z","lastTransitionTime":"2026-03-20T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.277254 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.277509 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.277607 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.277717 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.277850 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:54Z","lastTransitionTime":"2026-03-20T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.380552 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.380610 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.380627 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.380655 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.380673 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:54Z","lastTransitionTime":"2026-03-20T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.395922 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.396116 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.396298 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.396487 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.396362 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.396674 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.403130 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.404327 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.407385 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.408738 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.411156 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.412295 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.413567 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.415986 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.417977 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.420130 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.421132 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.422552 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.423565 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.424680 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.425729 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.426777 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.428099 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.428972 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.431116 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.432222 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.432855 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.433517 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.434629 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.435374 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.436344 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.437114 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.438258 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.438864 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.439987 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.440577 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.441149 5136 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.441720 5136 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.444462 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.445679 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.446532 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.448880 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.450266 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.451425 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.454186 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.456456 5136 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.457531 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.458973 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.461151 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.462424 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.463599 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.465064 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.466888 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.468515 5136 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.470776 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.472418 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.474981 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.476423 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.477772 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.479680 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.484419 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.484477 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.484495 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.484520 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.484538 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:54Z","lastTransitionTime":"2026-03-20T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.588353 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.588419 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.588437 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.588466 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.588483 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:54Z","lastTransitionTime":"2026-03-20T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.690852 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.690882 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.690892 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.690908 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.690919 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:54Z","lastTransitionTime":"2026-03-20T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.793418 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.793543 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.793562 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.793625 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.793641 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:54Z","lastTransitionTime":"2026-03-20T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.896162 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.896221 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.896240 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.896262 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.896279 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:54Z","lastTransitionTime":"2026-03-20T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.998756 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.998802 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.998835 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.998858 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.998872 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:54Z","lastTransitionTime":"2026-03-20T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.102562 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.102599 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.102610 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.102643 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.102654 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.145450 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.145495 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.145508 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.145553 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.145566 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: E0320 06:50:55.159948 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.165456 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.165522 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.165547 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.165617 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.165643 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: E0320 06:50:55.178316 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.183272 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.183350 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.183378 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.183403 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.183423 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: E0320 06:50:55.199420 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.204414 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.204462 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.204483 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.204508 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.204526 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: E0320 06:50:55.222048 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.227990 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.228035 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.228057 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.228087 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.228109 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: E0320 06:50:55.244722 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:55 crc kubenswrapper[5136]: E0320 06:50:55.245138 5136 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.247527 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.247735 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.247914 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.248097 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.248249 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.354388 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.354626 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.354689 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.354753 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.354837 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.442603 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.444012 5136 scope.go:117] "RemoveContainer" containerID="74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2" Mar 20 06:50:55 crc kubenswrapper[5136]: E0320 06:50:55.444370 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.457400 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.457421 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.457429 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.457439 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.457449 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.560716 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.560776 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.560802 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.560865 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.560893 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.663676 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.663736 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.663754 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.663778 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.663795 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.766978 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.767327 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.767412 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.767493 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.767562 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.772300 5136 scope.go:117] "RemoveContainer" containerID="74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2" Mar 20 06:50:55 crc kubenswrapper[5136]: E0320 06:50:55.772686 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.870776 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.870872 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.870893 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.870919 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.870938 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.973271 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.973513 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.973595 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.973685 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.973805 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.076713 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.077117 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.077258 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.077438 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.077608 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:56Z","lastTransitionTime":"2026-03-20T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.128441 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.128526 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.128559 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.128586 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.128616 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.128671 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:51:00.128639233 +0000 UTC m=+92.387950394 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.128680 5136 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.128718 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.128733 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.128747 5136 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.128788 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:00.128760177 +0000 UTC m=+92.388071368 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.128804 5136 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.128843 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:00.128803199 +0000 UTC m=+92.388114390 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.128872 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:00.128861541 +0000 UTC m=+92.388172692 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.128937 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.128999 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.129019 5136 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" 
not registered] Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.129117 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:00.129087168 +0000 UTC m=+92.388398379 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.181226 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.181528 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.181859 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.182025 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.182154 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:56Z","lastTransitionTime":"2026-03-20T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.287318 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.287648 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.287799 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.287977 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.288101 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:56Z","lastTransitionTime":"2026-03-20T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.390378 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.390575 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.390743 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.390855 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.390954 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:56Z","lastTransitionTime":"2026-03-20T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.395737 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.395754 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.395761 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.396112 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.396172 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.396219 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.401568 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.493124 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.493374 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.493488 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.493593 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.493663 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:56Z","lastTransitionTime":"2026-03-20T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.595880 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.595911 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.595921 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.595934 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.595945 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:56Z","lastTransitionTime":"2026-03-20T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.699285 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.699336 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.699354 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.699376 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.699394 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:56Z","lastTransitionTime":"2026-03-20T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.802314 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.802342 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.802352 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.802365 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.802375 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:56Z","lastTransitionTime":"2026-03-20T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.904897 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.904956 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.904968 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.904994 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.905037 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:56Z","lastTransitionTime":"2026-03-20T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.007991 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.008056 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.008080 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.008112 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.008133 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:57Z","lastTransitionTime":"2026-03-20T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.111207 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.111260 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.111279 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.111303 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.111321 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:57Z","lastTransitionTime":"2026-03-20T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.214647 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.215022 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.215180 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.215324 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.215467 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:57Z","lastTransitionTime":"2026-03-20T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.317744 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.317800 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.317858 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.317887 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.317909 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:57Z","lastTransitionTime":"2026-03-20T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.420946 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.421010 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.421033 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.421063 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.421084 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:57Z","lastTransitionTime":"2026-03-20T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.524337 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.524473 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.524503 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.524533 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.524554 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:57Z","lastTransitionTime":"2026-03-20T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.626733 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.626801 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.626858 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.626887 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.626908 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:57Z","lastTransitionTime":"2026-03-20T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.730733 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.730863 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.730891 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.730920 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.730949 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:57Z","lastTransitionTime":"2026-03-20T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.833671 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.833709 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.833722 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.833740 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.833753 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:57Z","lastTransitionTime":"2026-03-20T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.936218 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.936276 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.936289 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.936305 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.936316 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:57Z","lastTransitionTime":"2026-03-20T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.039111 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.039183 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.039209 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.039256 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.039280 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:58Z","lastTransitionTime":"2026-03-20T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.141536 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.141575 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.141584 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.141600 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.141609 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:58Z","lastTransitionTime":"2026-03-20T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.245329 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.245372 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.245381 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.245409 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.245421 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:58Z","lastTransitionTime":"2026-03-20T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.348917 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.348978 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.348994 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.349051 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.349091 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:58Z","lastTransitionTime":"2026-03-20T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.396709 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.396716 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.396957 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:50:58 crc kubenswrapper[5136]: E0320 06:50:58.397048 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:50:58 crc kubenswrapper[5136]: E0320 06:50:58.396844 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:50:58 crc kubenswrapper[5136]: E0320 06:50:58.397298 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.408979 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.419258 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.433109 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.445515 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.452071 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.452134 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.452155 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.452184 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.452208 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:58Z","lastTransitionTime":"2026-03-20T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.456341 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.468964 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.482943 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.500426 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.554454 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.554508 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.554525 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.554549 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.554567 5136 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:58Z","lastTransitionTime":"2026-03-20T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.657859 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.657909 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.657922 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.657940 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.657952 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:58Z","lastTransitionTime":"2026-03-20T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.760119 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.760184 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.760196 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.760214 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.760228 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:58Z","lastTransitionTime":"2026-03-20T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.862480 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.862544 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.862567 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.862596 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.862620 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:58Z","lastTransitionTime":"2026-03-20T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.965486 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.965553 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.965576 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.965602 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.965628 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:58Z","lastTransitionTime":"2026-03-20T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.068862 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.068926 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.068948 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.068975 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.068997 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:59Z","lastTransitionTime":"2026-03-20T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.171873 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.171967 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.171988 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.172015 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.172034 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:59Z","lastTransitionTime":"2026-03-20T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.274535 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.274592 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.274611 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.274636 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.274654 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:59Z","lastTransitionTime":"2026-03-20T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.377154 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.377196 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.377207 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.377224 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.377236 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:59Z","lastTransitionTime":"2026-03-20T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.480431 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.480571 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.480609 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.480639 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.480662 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:59Z","lastTransitionTime":"2026-03-20T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.583948 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.584024 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.584049 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.584076 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.584099 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:59Z","lastTransitionTime":"2026-03-20T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.686864 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.686902 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.686913 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.686929 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.686940 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:59Z","lastTransitionTime":"2026-03-20T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.789247 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.789303 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.789321 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.789343 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.789361 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:59Z","lastTransitionTime":"2026-03-20T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.891629 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.891663 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.891691 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.891703 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.891712 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:59Z","lastTransitionTime":"2026-03-20T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.995142 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.995192 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.995209 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.995230 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.995248 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:59Z","lastTransitionTime":"2026-03-20T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.097510 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.097573 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.097596 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.097625 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.097646 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:00Z","lastTransitionTime":"2026-03-20T06:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.167624 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.167732 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.167964 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.167976 5136 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.168032 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:51:08.167992827 +0000 UTC m=+100.427304008 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.168105 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.168208 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.168211 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:08.168176753 +0000 UTC m=+100.427487944 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.168231 5136 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.168332 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:08.168300707 +0000 UTC m=+100.427611908 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.168233 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.168395 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.168401 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.168420 5136 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.168426 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.168446 5136 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.168503 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:08.168473793 +0000 UTC m=+100.427784974 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.168533 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:08.168520495 +0000 UTC m=+100.427831686 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.200464 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.200509 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.200522 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.200542 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.200555 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:00Z","lastTransitionTime":"2026-03-20T06:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.303493 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.303559 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.303583 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.303621 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.303668 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:00Z","lastTransitionTime":"2026-03-20T06:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.396000 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.396082 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.396131 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.396000 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.396230 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.396408 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.405421 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.405450 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.405460 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.405472 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.405483 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:00Z","lastTransitionTime":"2026-03-20T06:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.508866 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.508938 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.508965 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.508997 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.509018 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:00Z","lastTransitionTime":"2026-03-20T06:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.612520 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.612563 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.612582 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.612604 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.612622 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:00Z","lastTransitionTime":"2026-03-20T06:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.715116 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.715168 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.715178 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.715190 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.715200 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:00Z","lastTransitionTime":"2026-03-20T06:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.817960 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.818012 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.818029 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.818052 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.818071 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:00Z","lastTransitionTime":"2026-03-20T06:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.920518 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.920576 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.920594 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.920627 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.920645 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:00Z","lastTransitionTime":"2026-03-20T06:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.022374 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.022446 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.022464 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.022489 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.022507 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:01Z","lastTransitionTime":"2026-03-20T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.124663 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.124702 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.124713 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.124730 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.124743 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:01Z","lastTransitionTime":"2026-03-20T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.227700 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.227766 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.227784 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.227810 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.227870 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:01Z","lastTransitionTime":"2026-03-20T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.330747 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.330769 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.330776 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.330787 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.330795 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:01Z","lastTransitionTime":"2026-03-20T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.433652 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.433702 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.433720 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.433741 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.433781 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:01Z","lastTransitionTime":"2026-03-20T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.536695 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.536752 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.536776 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.536804 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.536861 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:01Z","lastTransitionTime":"2026-03-20T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.640305 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.640361 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.640378 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.640399 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.640415 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:01Z","lastTransitionTime":"2026-03-20T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.742763 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.742808 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.742839 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.742857 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.742869 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:01Z","lastTransitionTime":"2026-03-20T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.845512 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.845572 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.845594 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.845625 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.845650 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:01Z","lastTransitionTime":"2026-03-20T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.948590 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.948651 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.948670 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.948692 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.948709 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:01Z","lastTransitionTime":"2026-03-20T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.052526 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.052567 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.052577 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.052594 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.052607 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:02Z","lastTransitionTime":"2026-03-20T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.155471 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.155532 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.155550 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.155578 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.155597 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:02Z","lastTransitionTime":"2026-03-20T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.258959 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.259031 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.259049 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.259073 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.259094 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:02Z","lastTransitionTime":"2026-03-20T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.380045 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.380094 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.380110 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.380135 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.380153 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:02Z","lastTransitionTime":"2026-03-20T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.396193 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.396356 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:02 crc kubenswrapper[5136]: E0320 06:51:02.396417 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.396467 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:02 crc kubenswrapper[5136]: E0320 06:51:02.396867 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:02 crc kubenswrapper[5136]: E0320 06:51:02.396976 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.483371 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.483460 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.483516 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.483541 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.483561 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:02Z","lastTransitionTime":"2026-03-20T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.586810 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.586931 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.586959 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.586997 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.587022 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:02Z","lastTransitionTime":"2026-03-20T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.690073 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.690127 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.690150 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.690177 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.690198 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:02Z","lastTransitionTime":"2026-03-20T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.792558 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.792611 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.792632 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.792658 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.792680 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:02Z","lastTransitionTime":"2026-03-20T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.895983 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.896092 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.896116 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.896183 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.896205 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:02Z","lastTransitionTime":"2026-03-20T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.998358 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.998396 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.998410 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.998429 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.998444 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:02Z","lastTransitionTime":"2026-03-20T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.100732 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.100784 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.100808 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.100866 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.100898 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:03Z","lastTransitionTime":"2026-03-20T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.203297 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.203352 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.203363 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.203375 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.203383 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:03Z","lastTransitionTime":"2026-03-20T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.306387 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.306451 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.306472 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.306503 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.306524 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:03Z","lastTransitionTime":"2026-03-20T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.409637 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.409702 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.409726 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.409757 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.409777 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:03Z","lastTransitionTime":"2026-03-20T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.512466 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.512509 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.512527 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.512552 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.512568 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:03Z","lastTransitionTime":"2026-03-20T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.615618 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.615687 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.615709 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.615737 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.615758 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:03Z","lastTransitionTime":"2026-03-20T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.718928 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.719002 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.719024 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.719051 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.719075 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:03Z","lastTransitionTime":"2026-03-20T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.821761 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.821803 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.821831 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.821848 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.821859 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:03Z","lastTransitionTime":"2026-03-20T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.924453 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.924513 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.924530 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.924553 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.924572 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:03Z","lastTransitionTime":"2026-03-20T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.026844 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.027128 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.027211 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.027304 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.027387 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:04Z","lastTransitionTime":"2026-03-20T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.129336 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.129384 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.129396 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.129414 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.129427 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:04Z","lastTransitionTime":"2026-03-20T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.231610 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.231662 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.231678 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.231701 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.231716 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:04Z","lastTransitionTime":"2026-03-20T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.333324 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.333402 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.333425 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.333454 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.333478 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:04Z","lastTransitionTime":"2026-03-20T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.395980 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.396006 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:04 crc kubenswrapper[5136]: E0320 06:51:04.396135 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:04 crc kubenswrapper[5136]: E0320 06:51:04.396324 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.396440 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:04 crc kubenswrapper[5136]: E0320 06:51:04.396602 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.435544 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.435592 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.435605 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.435622 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.435635 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:04Z","lastTransitionTime":"2026-03-20T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.538119 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.538151 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.538159 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.538171 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.538185 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:04Z","lastTransitionTime":"2026-03-20T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.640244 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.640296 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.640305 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.640319 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.640329 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:04Z","lastTransitionTime":"2026-03-20T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.742967 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.743008 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.743016 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.743031 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.743041 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:04Z","lastTransitionTime":"2026-03-20T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.845024 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.845100 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.845123 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.845154 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.845181 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:04Z","lastTransitionTime":"2026-03-20T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.947579 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.947640 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.947662 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.947691 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.947710 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:04Z","lastTransitionTime":"2026-03-20T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.050533 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.050593 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.050610 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.050635 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.050654 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.153473 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.153543 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.153566 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.153597 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.153619 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.256021 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.256449 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.256803 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.257170 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.257419 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.360052 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.360116 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.360135 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.360162 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.360180 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: E0320 06:51:05.398770 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},Re
startPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 06:51:05 crc kubenswrapper[5136]: E0320 06:51:05.398974 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:05 crc kubenswrapper[5136]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 20 06:51:05 crc kubenswrapper[5136]: set -o allexport Mar 20 06:51:05 crc kubenswrapper[5136]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 20 06:51:05 crc kubenswrapper[5136]: source /etc/kubernetes/apiserver-url.env Mar 20 06:51:05 crc kubenswrapper[5136]: else Mar 20 06:51:05 crc kubenswrapper[5136]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 20 06:51:05 crc kubenswrapper[5136]: exit 1 Mar 20 06:51:05 crc kubenswrapper[5136]: fi Mar 20 06:51:05 crc kubenswrapper[5136]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 20 06:51:05 crc kubenswrapper[5136]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:51:05 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:05 crc kubenswrapper[5136]: E0320 06:51:05.399995 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 20 06:51:05 crc kubenswrapper[5136]: E0320 06:51:05.400055 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet 
been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.463143 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.463222 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.463245 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.463276 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.463298 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.566875 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.566993 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.567019 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.567049 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.567070 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.574640 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.574692 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.575079 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.575125 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.575151 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: E0320 06:51:05.604215 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.612348 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.612408 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.612429 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.612456 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.612477 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: E0320 06:51:05.643252 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.647082 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.647211 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.647273 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.647338 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.647412 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.667618 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.667668 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.667683 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.667702 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.667714 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: E0320 06:51:05.678015 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.681743 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.681904 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.682200 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.682308 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.682389 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: E0320 06:51:05.691026 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:05 crc kubenswrapper[5136]: E0320 06:51:05.691184 5136 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.692542 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.692637 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.692696 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.692753 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.692830 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.795028 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.795180 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.795258 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.795321 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.795376 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.897502 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.897548 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.897560 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.897576 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.897587 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.000532 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.000587 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.000607 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.000630 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.000648 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:06Z","lastTransitionTime":"2026-03-20T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.103607 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.103687 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.103711 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.103739 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.103761 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:06Z","lastTransitionTime":"2026-03-20T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.207248 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.207632 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.207771 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.207974 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.208115 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:06Z","lastTransitionTime":"2026-03-20T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.310439 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.311043 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.311246 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.311477 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.311685 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:06Z","lastTransitionTime":"2026-03-20T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.396543 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.396626 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:06 crc kubenswrapper[5136]: E0320 06:51:06.396716 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.396730 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:06 crc kubenswrapper[5136]: E0320 06:51:06.396899 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:06 crc kubenswrapper[5136]: E0320 06:51:06.397027 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:06 crc kubenswrapper[5136]: E0320 06:51:06.399280 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:06 crc kubenswrapper[5136]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 20 06:51:06 crc kubenswrapper[5136]: if [[ -f "/env/_master" ]]; then Mar 20 06:51:06 crc kubenswrapper[5136]: set -o allexport Mar 20 06:51:06 crc kubenswrapper[5136]: source "/env/_master" Mar 20 06:51:06 crc kubenswrapper[5136]: set +o allexport Mar 20 06:51:06 crc kubenswrapper[5136]: fi Mar 20 06:51:06 crc kubenswrapper[5136]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 20 06:51:06 crc kubenswrapper[5136]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 20 06:51:06 crc kubenswrapper[5136]: ho_enable="--enable-hybrid-overlay" Mar 20 06:51:06 crc kubenswrapper[5136]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 20 06:51:06 crc kubenswrapper[5136]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 20 06:51:06 crc kubenswrapper[5136]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 20 06:51:06 crc kubenswrapper[5136]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 20 06:51:06 crc kubenswrapper[5136]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 20 06:51:06 crc kubenswrapper[5136]: --webhook-host=127.0.0.1 \ Mar 20 06:51:06 crc kubenswrapper[5136]: --webhook-port=9743 \ Mar 20 06:51:06 crc kubenswrapper[5136]: ${ho_enable} \ Mar 20 06:51:06 crc kubenswrapper[5136]: --enable-interconnect \ Mar 20 06:51:06 crc 
kubenswrapper[5136]: --disable-approver \ Mar 20 06:51:06 crc kubenswrapper[5136]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 20 06:51:06 crc kubenswrapper[5136]: --wait-for-kubernetes-api=200s \ Mar 20 06:51:06 crc kubenswrapper[5136]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 20 06:51:06 crc kubenswrapper[5136]: --loglevel="${LOGLEVEL}" Mar 20 06:51:06 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions
:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:51:06 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:06 crc kubenswrapper[5136]: E0320 06:51:06.402457 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:06 crc kubenswrapper[5136]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 20 06:51:06 crc kubenswrapper[5136]: if [[ -f "/env/_master" ]]; then Mar 20 06:51:06 crc kubenswrapper[5136]: set -o allexport Mar 20 06:51:06 crc kubenswrapper[5136]: source "/env/_master" Mar 20 06:51:06 crc kubenswrapper[5136]: set +o allexport Mar 20 06:51:06 crc kubenswrapper[5136]: fi Mar 20 06:51:06 crc kubenswrapper[5136]: Mar 20 06:51:06 crc kubenswrapper[5136]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 20 06:51:06 crc kubenswrapper[5136]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 20 06:51:06 crc kubenswrapper[5136]: --disable-webhook \ Mar 20 06:51:06 crc kubenswrapper[5136]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 20 06:51:06 crc kubenswrapper[5136]: --loglevel="${LOGLEVEL}" Mar 20 06:51:06 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:51:06 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:06 crc kubenswrapper[5136]: E0320 06:51:06.404468 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.414663 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.414986 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.415179 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.415457 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.415753 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:06Z","lastTransitionTime":"2026-03-20T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.518432 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.518911 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.519124 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.519330 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.519524 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:06Z","lastTransitionTime":"2026-03-20T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.622654 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.622711 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.622733 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.622760 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.622781 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:06Z","lastTransitionTime":"2026-03-20T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.725455 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.725508 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.725529 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.725551 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.725567 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:06Z","lastTransitionTime":"2026-03-20T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.828789 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.828894 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.828913 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.828938 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.828956 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:06Z","lastTransitionTime":"2026-03-20T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.932354 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.932427 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.932444 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.932473 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.932493 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:06Z","lastTransitionTime":"2026-03-20T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.035861 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.035935 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.035953 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.035981 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.035998 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:07Z","lastTransitionTime":"2026-03-20T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.139755 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.139865 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.139891 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.139922 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.139946 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:07Z","lastTransitionTime":"2026-03-20T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.245372 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.245410 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.245422 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.245438 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.245449 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:07Z","lastTransitionTime":"2026-03-20T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.347871 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.347937 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.347955 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.347977 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.347995 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:07Z","lastTransitionTime":"2026-03-20T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.450499 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.450545 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.450557 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.450572 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.450582 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:07Z","lastTransitionTime":"2026-03-20T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.553572 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.553637 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.553656 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.553679 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.553697 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:07Z","lastTransitionTime":"2026-03-20T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.657309 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.657407 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.657431 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.657462 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.657485 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:07Z","lastTransitionTime":"2026-03-20T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.761005 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.761060 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.761075 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.761096 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.761112 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:07Z","lastTransitionTime":"2026-03-20T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.863593 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.863635 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.863646 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.863662 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.863673 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:07Z","lastTransitionTime":"2026-03-20T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.967272 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.967332 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.967350 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.967375 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.967395 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:07Z","lastTransitionTime":"2026-03-20T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.069801 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.070704 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.070926 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.071117 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.071306 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:08Z","lastTransitionTime":"2026-03-20T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.174301 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.174360 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.174378 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.174402 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.174422 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:08Z","lastTransitionTime":"2026-03-20T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.242857 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.242931 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.242955 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.242973 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.242995 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.243057 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:51:24.243017573 +0000 UTC m=+116.502328764 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.243094 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.243109 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.243100 5136 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.243209 5136 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.243244 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:24.24320719 +0000 UTC m=+116.502518381 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.243120 5136 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.243273 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:24.243258882 +0000 UTC m=+116.502570073 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.243300 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.243363 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.243391 5136 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.243311 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:24.243297943 +0000 UTC m=+116.502609134 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.243501 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:24.243466699 +0000 UTC m=+116.502777900 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.277928 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.277987 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.278010 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.278041 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.278066 5136 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:08Z","lastTransitionTime":"2026-03-20T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.380467 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.380520 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.380536 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.380559 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.380575 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:08Z","lastTransitionTime":"2026-03-20T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.395899 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.395871 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.396010 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.396417 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.396634 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.397036 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.397312 5136 scope.go:117] "RemoveContainer" containerID="74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2" Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.397536 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.412001 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.430654 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.441588 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.454097 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.467435 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.479453 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.483363 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.483550 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.483701 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.483887 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.484042 5136 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:08Z","lastTransitionTime":"2026-03-20T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.496160 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.506163 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.587223 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.587282 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.587294 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 
06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.587309 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.587319 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:08Z","lastTransitionTime":"2026-03-20T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.691116 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.691172 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.691190 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.691213 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.691230 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:08Z","lastTransitionTime":"2026-03-20T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.793930 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.794003 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.794025 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.794056 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.794078 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:08Z","lastTransitionTime":"2026-03-20T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.901389 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.901464 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.901485 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.901511 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.901531 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:08Z","lastTransitionTime":"2026-03-20T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.004564 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.004636 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.004659 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.004685 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.004702 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:09Z","lastTransitionTime":"2026-03-20T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.107410 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.107490 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.107518 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.107551 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.107575 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:09Z","lastTransitionTime":"2026-03-20T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.209599 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.209875 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.209892 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.209909 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.209921 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:09Z","lastTransitionTime":"2026-03-20T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.312721 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.312764 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.312776 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.312792 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.312803 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:09Z","lastTransitionTime":"2026-03-20T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.416288 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.416358 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.416380 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.416413 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.416437 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:09Z","lastTransitionTime":"2026-03-20T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.520075 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.520153 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.520170 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.520194 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.520266 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:09Z","lastTransitionTime":"2026-03-20T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.623633 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.623702 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.623725 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.623754 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.623777 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:09Z","lastTransitionTime":"2026-03-20T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.727209 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.727300 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.727324 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.727353 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.727371 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:09Z","lastTransitionTime":"2026-03-20T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.830347 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.830413 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.830436 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.830466 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.830488 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:09Z","lastTransitionTime":"2026-03-20T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.933735 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.933799 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.933852 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.933884 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.933905 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:09Z","lastTransitionTime":"2026-03-20T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.036551 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.036631 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.036656 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.036687 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.036709 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:10Z","lastTransitionTime":"2026-03-20T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.139917 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.139990 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.140013 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.140044 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.140066 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:10Z","lastTransitionTime":"2026-03-20T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.243026 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.243070 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.243081 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.243098 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.243110 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:10Z","lastTransitionTime":"2026-03-20T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.345751 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.345811 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.345867 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.345890 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.345904 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:10Z","lastTransitionTime":"2026-03-20T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.395888 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.395917 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.395895 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:10 crc kubenswrapper[5136]: E0320 06:51:10.396112 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:10 crc kubenswrapper[5136]: E0320 06:51:10.396262 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:10 crc kubenswrapper[5136]: E0320 06:51:10.396411 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.448412 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.448486 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.448511 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.448541 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.448564 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:10Z","lastTransitionTime":"2026-03-20T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.551261 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.551324 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.551350 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.551378 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.551400 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:10Z","lastTransitionTime":"2026-03-20T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.654679 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.654739 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.654756 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.654780 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.654798 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:10Z","lastTransitionTime":"2026-03-20T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.757635 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.757676 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.757688 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.757711 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.757724 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:10Z","lastTransitionTime":"2026-03-20T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.859690 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.860123 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.860272 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.860432 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.860574 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:10Z","lastTransitionTime":"2026-03-20T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.964257 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.964320 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.964339 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.964363 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.964381 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:10Z","lastTransitionTime":"2026-03-20T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.068016 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.068087 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.068110 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.068137 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.068180 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:11Z","lastTransitionTime":"2026-03-20T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.171250 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.171319 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.171341 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.171375 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.171397 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:11Z","lastTransitionTime":"2026-03-20T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.274433 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.274757 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.274875 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.274970 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.275062 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:11Z","lastTransitionTime":"2026-03-20T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.377737 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.378225 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.378479 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.378723 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.378951 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:11Z","lastTransitionTime":"2026-03-20T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.481435 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.481497 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.481514 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.481537 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.481554 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:11Z","lastTransitionTime":"2026-03-20T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.583917 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.583959 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.583974 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.583995 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.584010 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:11Z","lastTransitionTime":"2026-03-20T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.686935 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.686977 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.686989 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.687006 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.687018 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:11Z","lastTransitionTime":"2026-03-20T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.790235 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.790623 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.790770 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.790961 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.791175 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:11Z","lastTransitionTime":"2026-03-20T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.893435 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.893691 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.893863 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.894020 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.894151 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:11Z","lastTransitionTime":"2026-03-20T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.996939 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.997005 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.997023 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.997045 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.997063 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:11Z","lastTransitionTime":"2026-03-20T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.100252 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.100314 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.100333 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.100355 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.100371 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:12Z","lastTransitionTime":"2026-03-20T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.203499 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.203547 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.203564 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.203588 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.203605 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:12Z","lastTransitionTime":"2026-03-20T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.307435 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.307481 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.307512 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.307531 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.307543 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:12Z","lastTransitionTime":"2026-03-20T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.395637 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.395749 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:12 crc kubenswrapper[5136]: E0320 06:51:12.395796 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.395639 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:12 crc kubenswrapper[5136]: E0320 06:51:12.396546 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:12 crc kubenswrapper[5136]: E0320 06:51:12.396546 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.408833 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.408866 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.408874 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.408885 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.408895 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:12Z","lastTransitionTime":"2026-03-20T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.512023 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.512383 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.512580 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.512733 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.512961 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:12Z","lastTransitionTime":"2026-03-20T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.616266 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.616569 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.616595 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.616619 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.616636 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:12Z","lastTransitionTime":"2026-03-20T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.719951 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.720024 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.720041 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.720114 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.720154 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:12Z","lastTransitionTime":"2026-03-20T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.822182 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.822230 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.822247 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.822270 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.822287 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:12Z","lastTransitionTime":"2026-03-20T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.926082 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.926148 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.926166 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.926635 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.926692 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:12Z","lastTransitionTime":"2026-03-20T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.030078 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.030394 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.030487 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.030596 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.030687 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:13Z","lastTransitionTime":"2026-03-20T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.133272 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.133672 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.133929 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.134136 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.134362 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:13Z","lastTransitionTime":"2026-03-20T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.238026 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.238095 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.238112 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.238135 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.238152 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:13Z","lastTransitionTime":"2026-03-20T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.341050 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.341119 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.341153 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.341178 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.341196 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:13Z","lastTransitionTime":"2026-03-20T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.443456 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.443492 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.443500 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.443512 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.443521 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:13Z","lastTransitionTime":"2026-03-20T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.545869 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.545913 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.545923 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.545938 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.545948 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:13Z","lastTransitionTime":"2026-03-20T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.648153 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.648222 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.648232 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.648244 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.648253 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:13Z","lastTransitionTime":"2026-03-20T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.750902 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.750954 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.750973 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.750996 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.751013 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:13Z","lastTransitionTime":"2026-03-20T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.852975 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.853221 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.853286 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.853368 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.853457 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:13Z","lastTransitionTime":"2026-03-20T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.956175 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.956232 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.956243 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.956259 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.956270 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:13Z","lastTransitionTime":"2026-03-20T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.059176 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.059217 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.059226 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.059240 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.059251 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:14Z","lastTransitionTime":"2026-03-20T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.161443 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.161482 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.161494 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.161509 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.161519 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:14Z","lastTransitionTime":"2026-03-20T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.263491 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.263530 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.263542 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.263562 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.263573 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:14Z","lastTransitionTime":"2026-03-20T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.366093 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.366157 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.366178 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.366212 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.366246 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:14Z","lastTransitionTime":"2026-03-20T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.395701 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:14 crc kubenswrapper[5136]: E0320 06:51:14.395850 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.395963 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.396099 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:14 crc kubenswrapper[5136]: E0320 06:51:14.396269 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:14 crc kubenswrapper[5136]: E0320 06:51:14.396384 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.469389 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.469490 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.469516 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.469540 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.469557 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:14Z","lastTransitionTime":"2026-03-20T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.572685 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.572715 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.572723 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.572736 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.572746 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:14Z","lastTransitionTime":"2026-03-20T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.675423 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.675485 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.675505 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.675528 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.675546 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:14Z","lastTransitionTime":"2026-03-20T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.778677 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.778734 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.778756 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.778784 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.778805 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:14Z","lastTransitionTime":"2026-03-20T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.882104 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.882511 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.882660 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.882807 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.882983 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:14Z","lastTransitionTime":"2026-03-20T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.986261 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.986298 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.986307 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.986321 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.986331 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:14Z","lastTransitionTime":"2026-03-20T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.090194 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.090564 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.090927 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.091239 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.091530 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.128465 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-pt4jb"] Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.129436 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-pt4jb" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.132880 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.133791 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.137227 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.149595 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.167293 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.181806 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.195710 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.196074 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.196257 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.196403 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.196596 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.200292 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.206293 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4a27959f-3f41-4683-87d6-7b2a9210d634-hosts-file\") pod \"node-resolver-pt4jb\" (UID: \"4a27959f-3f41-4683-87d6-7b2a9210d634\") " pod="openshift-dns/node-resolver-pt4jb" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.206546 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njj48\" (UniqueName: \"kubernetes.io/projected/4a27959f-3f41-4683-87d6-7b2a9210d634-kube-api-access-njj48\") pod \"node-resolver-pt4jb\" (UID: \"4a27959f-3f41-4683-87d6-7b2a9210d634\") " pod="openshift-dns/node-resolver-pt4jb" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.212965 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.229489 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.243167 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.259869 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.271375 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.300383 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.300487 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.300564 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.300637 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 
06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.300660 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.307920 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4a27959f-3f41-4683-87d6-7b2a9210d634-hosts-file\") pod \"node-resolver-pt4jb\" (UID: \"4a27959f-3f41-4683-87d6-7b2a9210d634\") " pod="openshift-dns/node-resolver-pt4jb" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.308153 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njj48\" (UniqueName: \"kubernetes.io/projected/4a27959f-3f41-4683-87d6-7b2a9210d634-kube-api-access-njj48\") pod \"node-resolver-pt4jb\" (UID: \"4a27959f-3f41-4683-87d6-7b2a9210d634\") " pod="openshift-dns/node-resolver-pt4jb" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.308170 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4a27959f-3f41-4683-87d6-7b2a9210d634-hosts-file\") pod \"node-resolver-pt4jb\" (UID: \"4a27959f-3f41-4683-87d6-7b2a9210d634\") " pod="openshift-dns/node-resolver-pt4jb" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.339005 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njj48\" (UniqueName: \"kubernetes.io/projected/4a27959f-3f41-4683-87d6-7b2a9210d634-kube-api-access-njj48\") pod \"node-resolver-pt4jb\" (UID: \"4a27959f-3f41-4683-87d6-7b2a9210d634\") " pod="openshift-dns/node-resolver-pt4jb" Mar 20 
06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.403803 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.403900 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.403929 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.403960 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.403983 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.456449 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-pt4jb" Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.486389 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:15 crc kubenswrapper[5136]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 20 06:51:15 crc kubenswrapper[5136]: set -uo pipefail Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 20 06:51:15 crc kubenswrapper[5136]: HOSTS_FILE="/etc/hosts" Mar 20 06:51:15 crc kubenswrapper[5136]: TEMP_FILE="/etc/hosts.tmp" Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: # Make a temporary file with the old hosts file's attributes. Mar 20 06:51:15 crc kubenswrapper[5136]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 20 06:51:15 crc kubenswrapper[5136]: echo "Failed to preserve hosts file. Exiting." Mar 20 06:51:15 crc kubenswrapper[5136]: exit 1 Mar 20 06:51:15 crc kubenswrapper[5136]: fi Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: while true; do Mar 20 06:51:15 crc kubenswrapper[5136]: declare -A svc_ips Mar 20 06:51:15 crc kubenswrapper[5136]: for svc in "${services[@]}"; do Mar 20 06:51:15 crc kubenswrapper[5136]: # Fetch service IP from cluster dns if present. We make several tries Mar 20 06:51:15 crc kubenswrapper[5136]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. 
The two last ones Mar 20 06:51:15 crc kubenswrapper[5136]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 20 06:51:15 crc kubenswrapper[5136]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 20 06:51:15 crc kubenswrapper[5136]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 06:51:15 crc kubenswrapper[5136]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 06:51:15 crc kubenswrapper[5136]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 06:51:15 crc kubenswrapper[5136]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 20 06:51:15 crc kubenswrapper[5136]: for i in ${!cmds[*]} Mar 20 06:51:15 crc kubenswrapper[5136]: do Mar 20 06:51:15 crc kubenswrapper[5136]: ips=($(eval "${cmds[i]}")) Mar 20 06:51:15 crc kubenswrapper[5136]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 20 06:51:15 crc kubenswrapper[5136]: svc_ips["${svc}"]="${ips[@]}" Mar 20 06:51:15 crc kubenswrapper[5136]: break Mar 20 06:51:15 crc kubenswrapper[5136]: fi Mar 20 06:51:15 crc kubenswrapper[5136]: done Mar 20 06:51:15 crc kubenswrapper[5136]: done Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: # Update /etc/hosts only if we get valid service IPs Mar 20 06:51:15 crc kubenswrapper[5136]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 20 06:51:15 crc kubenswrapper[5136]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 20 06:51:15 crc kubenswrapper[5136]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 20 06:51:15 crc kubenswrapper[5136]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 20 06:51:15 crc kubenswrapper[5136]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 20 06:51:15 crc kubenswrapper[5136]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 20 06:51:15 crc kubenswrapper[5136]: sleep 60 & wait Mar 20 06:51:15 crc kubenswrapper[5136]: continue Mar 20 06:51:15 crc kubenswrapper[5136]: fi Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: # Append resolver entries for services Mar 20 06:51:15 crc kubenswrapper[5136]: rc=0 Mar 20 06:51:15 crc kubenswrapper[5136]: for svc in "${!svc_ips[@]}"; do Mar 20 06:51:15 crc kubenswrapper[5136]: for ip in ${svc_ips[${svc}]}; do Mar 20 06:51:15 crc kubenswrapper[5136]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Mar 20 06:51:15 crc kubenswrapper[5136]: done Mar 20 06:51:15 crc kubenswrapper[5136]: done Mar 20 06:51:15 crc kubenswrapper[5136]: if [[ $rc -ne 0 ]]; then Mar 20 06:51:15 crc kubenswrapper[5136]: sleep 60 & wait Mar 20 06:51:15 crc kubenswrapper[5136]: continue Mar 20 06:51:15 crc kubenswrapper[5136]: fi Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 20 06:51:15 crc kubenswrapper[5136]: # Replace /etc/hosts with our modified version if needed Mar 20 06:51:15 crc kubenswrapper[5136]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 20 06:51:15 crc kubenswrapper[5136]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 20 06:51:15 crc kubenswrapper[5136]: fi Mar 20 06:51:15 crc kubenswrapper[5136]: sleep 60 & wait Mar 20 06:51:15 crc kubenswrapper[5136]: unset svc_ips Mar 20 06:51:15 crc kubenswrapper[5136]: done Mar 20 06:51:15 crc kubenswrapper[5136]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-njj48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-pt4jb_openshift-dns(4a27959f-3f41-4683-87d6-7b2a9210d634): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:51:15 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.489616 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-pt4jb" 
podUID="4a27959f-3f41-4683-87d6-7b2a9210d634" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.494238 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-jst28"] Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.494616 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.497143 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.497414 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.497483 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.499149 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.499512 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.501326 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-dbsfs"] Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.507690 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-tjpps"] Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.508176 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.508866 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.508953 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.508976 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.509012 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.509032 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.509890 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f64ebce8-37f2-4631-9b8b-d34ebc9b93ba-rootfs\") pod \"machine-config-daemon-jst28\" (UID: \"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\") " pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.510018 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94xgp\" (UniqueName: \"kubernetes.io/projected/f64ebce8-37f2-4631-9b8b-d34ebc9b93ba-kube-api-access-94xgp\") pod \"machine-config-daemon-jst28\" (UID: \"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\") " pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.510145 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f64ebce8-37f2-4631-9b8b-d34ebc9b93ba-mcd-auth-proxy-config\") pod \"machine-config-daemon-jst28\" (UID: \"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\") " pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.510196 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f64ebce8-37f2-4631-9b8b-d34ebc9b93ba-proxy-tls\") pod \"machine-config-daemon-jst28\" (UID: \"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\") " pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.510846 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.511114 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.512044 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.512332 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.513647 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.513928 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.514122 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.514459 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.515700 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.531135 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.543021 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.563585 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.573896 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.585749 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.596056 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.605189 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.610682 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-os-release\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.610733 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlfxh\" (UniqueName: \"kubernetes.io/projected/263c5427-a835-40c6-93cb-4bb66a83ea5b-kube-api-access-dlfxh\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.610777 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f64ebce8-37f2-4631-9b8b-d34ebc9b93ba-rootfs\") pod \"machine-config-daemon-jst28\" (UID: \"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\") " pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.610842 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-var-lib-cni-bin\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.610951 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f64ebce8-37f2-4631-9b8b-d34ebc9b93ba-rootfs\") pod \"machine-config-daemon-jst28\" (UID: \"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\") " pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.610955 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-run-k8s-cni-cncf-io\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611091 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94xgp\" (UniqueName: \"kubernetes.io/projected/f64ebce8-37f2-4631-9b8b-d34ebc9b93ba-kube-api-access-94xgp\") pod \"machine-config-daemon-jst28\" (UID: \"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\") " pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611142 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-cnibin\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611184 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/263c5427-a835-40c6-93cb-4bb66a83ea5b-multus-daemon-config\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611410 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/059eafe0-4e83-486d-b958-992b00aa0878-cnibin\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611521 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-run-multus-certs\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611629 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/263c5427-a835-40c6-93cb-4bb66a83ea5b-cni-binary-copy\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611711 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/059eafe0-4e83-486d-b958-992b00aa0878-os-release\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611749 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611788 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611807 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611868 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611762 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f64ebce8-37f2-4631-9b8b-d34ebc9b93ba-proxy-tls\") pod \"machine-config-daemon-jst28\" (UID: \"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\") " pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611888 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611975 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-multus-cni-dir\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612042 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-hostroot\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612097 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/059eafe0-4e83-486d-b958-992b00aa0878-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612156 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-multus-socket-dir-parent\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612208 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/059eafe0-4e83-486d-b958-992b00aa0878-cni-binary-copy\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " 
pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612243 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd92w\" (UniqueName: \"kubernetes.io/projected/059eafe0-4e83-486d-b958-992b00aa0878-kube-api-access-wd92w\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612279 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-multus-conf-dir\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612312 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/059eafe0-4e83-486d-b958-992b00aa0878-system-cni-dir\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612367 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-system-cni-dir\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612419 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-var-lib-kubelet\") pod 
\"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612453 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-etc-kubernetes\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612489 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/059eafe0-4e83-486d-b958-992b00aa0878-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612555 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-var-lib-cni-multus\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612607 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f64ebce8-37f2-4631-9b8b-d34ebc9b93ba-mcd-auth-proxy-config\") pod \"machine-config-daemon-jst28\" (UID: \"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\") " pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612649 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-run-netns\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.613358 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.614115 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f64ebce8-37f2-4631-9b8b-d34ebc9b93ba-mcd-auth-proxy-config\") pod \"machine-config-daemon-jst28\" (UID: \"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\") " pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.618182 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f64ebce8-37f2-4631-9b8b-d34ebc9b93ba-proxy-tls\") pod \"machine-config-daemon-jst28\" 
(UID: \"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\") " pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.621556 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.632380 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.639991 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94xgp\" (UniqueName: \"kubernetes.io/projected/f64ebce8-37f2-4631-9b8b-d34ebc9b93ba-kube-api-access-94xgp\") pod \"machine-config-daemon-jst28\" (UID: \"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\") " pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.646335 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.658692 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.670266 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.680838 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.689127 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.704747 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713267 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-run-multus-certs\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713320 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/059eafe0-4e83-486d-b958-992b00aa0878-cnibin\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc 
kubenswrapper[5136]: I0320 06:51:15.713347 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-multus-cni-dir\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713375 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/263c5427-a835-40c6-93cb-4bb66a83ea5b-cni-binary-copy\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713399 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/059eafe0-4e83-486d-b958-992b00aa0878-os-release\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713400 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-run-multus-certs\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713423 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-hostroot\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713557 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" 
(UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-multus-cni-dir\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713539 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/059eafe0-4e83-486d-b958-992b00aa0878-os-release\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713593 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/059eafe0-4e83-486d-b958-992b00aa0878-cnibin\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713469 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-hostroot\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713639 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-multus-socket-dir-parent\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713709 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/059eafe0-4e83-486d-b958-992b00aa0878-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713694 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\
\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713763 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-multus-conf-dir\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713848 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-multus-conf-dir\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713809 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/059eafe0-4e83-486d-b958-992b00aa0878-system-cni-dir\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713724 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-multus-socket-dir-parent\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713907 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/059eafe0-4e83-486d-b958-992b00aa0878-system-cni-dir\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714191 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/059eafe0-4e83-486d-b958-992b00aa0878-cni-binary-copy\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714214 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd92w\" 
(UniqueName: \"kubernetes.io/projected/059eafe0-4e83-486d-b958-992b00aa0878-kube-api-access-wd92w\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714245 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-system-cni-dir\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714262 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-var-lib-kubelet\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714276 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-etc-kubernetes\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714301 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-var-lib-cni-multus\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714319 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/059eafe0-4e83-486d-b958-992b00aa0878-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714346 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-run-netns\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714364 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-os-release\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714381 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlfxh\" (UniqueName: \"kubernetes.io/projected/263c5427-a835-40c6-93cb-4bb66a83ea5b-kube-api-access-dlfxh\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714397 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-var-lib-cni-bin\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714417 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-run-k8s-cni-cncf-io\") pod 
\"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714444 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-cnibin\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714463 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/263c5427-a835-40c6-93cb-4bb66a83ea5b-multus-daemon-config\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714564 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/059eafe0-4e83-486d-b958-992b00aa0878-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714725 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-system-cni-dir\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714858 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-etc-kubernetes\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 
06:51:15.714888 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-var-lib-cni-multus\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714896 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-var-lib-kubelet\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714925 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-run-k8s-cni-cncf-io\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714926 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-run-netns\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714962 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-var-lib-cni-bin\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714973 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-cnibin\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714973 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/263c5427-a835-40c6-93cb-4bb66a83ea5b-cni-binary-copy\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.715009 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-os-release\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.715711 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/059eafe0-4e83-486d-b958-992b00aa0878-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.715887 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.715957 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.715974 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.715981 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/059eafe0-4e83-486d-b958-992b00aa0878-cni-binary-copy\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.716018 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.716034 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.716512 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/263c5427-a835-40c6-93cb-4bb66a83ea5b-multus-daemon-config\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.727717 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.727805 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.727869 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.727895 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 
06:51:15.727944 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.728579 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.733928 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlfxh\" (UniqueName: \"kubernetes.io/projected/263c5427-a835-40c6-93cb-4bb66a83ea5b-kube-api-access-dlfxh\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.741874 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.741859 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.743937 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd92w\" (UniqueName: \"kubernetes.io/projected/059eafe0-4e83-486d-b958-992b00aa0878-kube-api-access-wd92w\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.747183 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.747235 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.747253 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.747277 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.747294 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.756897 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.757223 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient 
PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",
\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":
[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6f
b6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\
"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.761636 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.761670 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.761681 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.761697 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.761708 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.770184 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.772692 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.775923 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.775994 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.776025 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.776061 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.776084 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.785248 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.789018 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.789119 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.789178 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.789255 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.789323 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.796844 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.796993 5136 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.818156 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.818190 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.818202 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.818222 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.818234 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.818762 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pt4jb" event={"ID":"4a27959f-3f41-4683-87d6-7b2a9210d634","Type":"ContainerStarted","Data":"83d9ea6f9a6599f452980d42ef9dc9d13a2ed55fa322cb6c3d5f855143803506"} Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.820250 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:15 crc kubenswrapper[5136]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 20 06:51:15 crc kubenswrapper[5136]: set -uo pipefail Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 20 06:51:15 crc kubenswrapper[5136]: HOSTS_FILE="/etc/hosts" Mar 20 06:51:15 crc kubenswrapper[5136]: TEMP_FILE="/etc/hosts.tmp" Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: # Make a temporary file with the old hosts file's attributes. Mar 20 06:51:15 crc kubenswrapper[5136]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 20 06:51:15 crc kubenswrapper[5136]: echo "Failed to preserve hosts file. Exiting." 
Mar 20 06:51:15 crc kubenswrapper[5136]: exit 1 Mar 20 06:51:15 crc kubenswrapper[5136]: fi Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: while true; do Mar 20 06:51:15 crc kubenswrapper[5136]: declare -A svc_ips Mar 20 06:51:15 crc kubenswrapper[5136]: for svc in "${services[@]}"; do Mar 20 06:51:15 crc kubenswrapper[5136]: # Fetch service IP from cluster dns if present. We make several tries Mar 20 06:51:15 crc kubenswrapper[5136]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 20 06:51:15 crc kubenswrapper[5136]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 20 06:51:15 crc kubenswrapper[5136]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 20 06:51:15 crc kubenswrapper[5136]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 06:51:15 crc kubenswrapper[5136]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 06:51:15 crc kubenswrapper[5136]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 06:51:15 crc kubenswrapper[5136]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 20 06:51:15 crc kubenswrapper[5136]: for i in ${!cmds[*]} Mar 20 06:51:15 crc kubenswrapper[5136]: do Mar 20 06:51:15 crc kubenswrapper[5136]: ips=($(eval "${cmds[i]}")) Mar 20 06:51:15 crc kubenswrapper[5136]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 20 06:51:15 crc kubenswrapper[5136]: svc_ips["${svc}"]="${ips[@]}" Mar 20 06:51:15 crc kubenswrapper[5136]: break Mar 20 06:51:15 crc kubenswrapper[5136]: fi Mar 20 06:51:15 crc kubenswrapper[5136]: done Mar 20 06:51:15 crc kubenswrapper[5136]: done Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: # Update /etc/hosts only if we get valid service IPs Mar 20 06:51:15 crc kubenswrapper[5136]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 20 06:51:15 crc kubenswrapper[5136]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 20 06:51:15 crc kubenswrapper[5136]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 20 06:51:15 crc kubenswrapper[5136]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 20 06:51:15 crc kubenswrapper[5136]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 20 06:51:15 crc kubenswrapper[5136]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 20 06:51:15 crc kubenswrapper[5136]: sleep 60 & wait Mar 20 06:51:15 crc kubenswrapper[5136]: continue Mar 20 06:51:15 crc kubenswrapper[5136]: fi Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: # Append resolver entries for services Mar 20 06:51:15 crc kubenswrapper[5136]: rc=0 Mar 20 06:51:15 crc kubenswrapper[5136]: for svc in "${!svc_ips[@]}"; do Mar 20 06:51:15 crc kubenswrapper[5136]: for ip in ${svc_ips[${svc}]}; do Mar 20 06:51:15 crc kubenswrapper[5136]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Mar 20 06:51:15 crc kubenswrapper[5136]: done Mar 20 06:51:15 crc kubenswrapper[5136]: done Mar 20 06:51:15 crc kubenswrapper[5136]: if [[ $rc -ne 0 ]]; then Mar 20 06:51:15 crc kubenswrapper[5136]: sleep 60 & wait Mar 20 06:51:15 crc kubenswrapper[5136]: continue Mar 20 06:51:15 crc kubenswrapper[5136]: fi Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 20 06:51:15 crc kubenswrapper[5136]: # Replace /etc/hosts with our modified version if needed Mar 20 06:51:15 crc kubenswrapper[5136]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 20 06:51:15 crc kubenswrapper[5136]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 20 06:51:15 crc kubenswrapper[5136]: fi Mar 20 06:51:15 crc kubenswrapper[5136]: sleep 60 & wait Mar 20 06:51:15 crc kubenswrapper[5136]: unset svc_ips Mar 20 06:51:15 crc kubenswrapper[5136]: done Mar 20 06:51:15 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-njj48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-pt4jb_openshift-dns(4a27959f-3f41-4683-87d6-7b2a9210d634): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:51:15 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.820987 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.821353 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-pt4jb" podUID="4a27959f-3f41-4683-87d6-7b2a9210d634" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.829240 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.832107 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: W0320 06:51:15.834930 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf64ebce8_37f2_4631_9b8b_d34ebc9b93ba.slice/crio-075a96b005188740a40783675493adaee0253fb7bd1c86fd69929f3b319276f7 WatchSource:0}: Error finding container 075a96b005188740a40783675493adaee0253fb7bd1c86fd69929f3b319276f7: Status 404 returned error can't find the container with id 075a96b005188740a40783675493adaee0253fb7bd1c86fd69929f3b319276f7 Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.835703 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.837719 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-94xgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 06:51:15 crc kubenswrapper[5136]: W0320 06:51:15.838630 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod263c5427_a835_40c6_93cb_4bb66a83ea5b.slice/crio-18004b385b788eb1cc3a9afac0160c58ea75a8e7f77ca5f5520deed36cd9c1b5 WatchSource:0}: Error finding container 18004b385b788eb1cc3a9afac0160c58ea75a8e7f77ca5f5520deed36cd9c1b5: Status 404 returned error can't find the container with id 18004b385b788eb1cc3a9afac0160c58ea75a8e7f77ca5f5520deed36cd9c1b5 Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.842214 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml 
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-94xgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.843417 5136 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.843742 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.843952 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:15 crc kubenswrapper[5136]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 20 06:51:15 crc kubenswrapper[5136]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 20 06:51:15 crc kubenswrapper[5136]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dlfxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-tjpps_openshift-multus(263c5427-a835-40c6-93cb-4bb66a83ea5b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:51:15 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.845511 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-tjpps" podUID="263c5427-a835-40c6-93cb-4bb66a83ea5b" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.846775 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nbmbh"] Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.847880 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: W0320 06:51:15.849727 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod059eafe0_4e83_486d_b958_992b00aa0878.slice/crio-0dbfbfc98637ffc99d695ec77ba68c86041d535157c89bb27cf3986d0c5ba8d4 WatchSource:0}: Error finding container 0dbfbfc98637ffc99d695ec77ba68c86041d535157c89bb27cf3986d0c5ba8d4: Status 404 returned error can't find the container with id 0dbfbfc98637ffc99d695ec77ba68c86041d535157c89bb27cf3986d0c5ba8d4 Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.851023 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.851300 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.851335 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.851363 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.851312 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.851567 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.851571 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.851898 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.852404 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wd92w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-dbsfs_openshift-multus(059eafe0-4e83-486d-b958-992b00aa0878): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.853546 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" podUID="059eafe0-4e83-486d-b958-992b00aa0878" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.866015 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.875953 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.883726 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.894331 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.903791 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.913615 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916074 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-run-netns\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916110 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-etc-openvswitch\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916155 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovnkube-config\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916195 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916254 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-log-socket\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916275 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovn-node-metrics-cert\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916325 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovnkube-script-lib\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916348 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-systemd-units\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916390 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-ovn\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916412 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-openvswitch\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916433 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-run-ovn-kubernetes\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916480 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-var-lib-openvswitch\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916517 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrnqr\" (UniqueName: \"kubernetes.io/projected/963bf1ca-b871-4cad-a1fc-cf829a70a81a-kube-api-access-nrnqr\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916581 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-slash\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916646 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-env-overrides\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc 
kubenswrapper[5136]: I0320 06:51:15.916682 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-node-log\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916732 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-cni-netd\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916751 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-cni-bin\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916772 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-kubelet\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916850 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-systemd\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc 
kubenswrapper[5136]: I0320 06:51:15.921451 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.921486 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.921703 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.921721 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.921744 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.921764 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.930772 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.944305 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.955731 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.968917 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.975784 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.995157 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.005970 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.017919 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-systemd\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.017955 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovnkube-config\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.017973 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-run-netns\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 
20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.017989 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-etc-openvswitch\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018019 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018048 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-log-socket\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018066 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovn-node-metrics-cert\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018066 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-systemd\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: 
I0320 06:51:16.018084 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovnkube-script-lib\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018184 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-systemd-units\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018244 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-ovn\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018271 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-run-ovn-kubernetes\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018320 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-openvswitch\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018347 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-var-lib-openvswitch\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018414 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrnqr\" (UniqueName: \"kubernetes.io/projected/963bf1ca-b871-4cad-a1fc-cf829a70a81a-kube-api-access-nrnqr\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018462 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-slash\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018483 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-cni-netd\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018504 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-env-overrides\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018549 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" 
(UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-node-log\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018569 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-kubelet\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018590 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-cni-bin\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018702 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-cni-bin\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018714 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovnkube-script-lib\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018736 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-systemd-units\") pod 
\"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018757 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovnkube-config\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018768 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-etc-openvswitch\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018783 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-run-netns\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018792 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018838 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-log-socket\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018840 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-slash\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018874 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-cni-netd\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.019015 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-ovn\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.019043 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-run-ovn-kubernetes\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.019064 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-openvswitch\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.019085 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-var-lib-openvswitch\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.019106 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-node-log\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.019128 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-kubelet\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.019610 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-env-overrides\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.020984 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.023701 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovn-node-metrics-cert\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.025314 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.025350 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.025360 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.025377 5136 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.025387 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:16Z","lastTransitionTime":"2026-03-20T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.036392 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrnqr\" (UniqueName: \"kubernetes.io/projected/963bf1ca-b871-4cad-a1fc-cf829a70a81a-kube-api-access-nrnqr\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.039115 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.047300 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.058715 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.066627 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.078620 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.089896 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.105410 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.128361 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.128426 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.128443 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.128467 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.128481 5136 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:16Z","lastTransitionTime":"2026-03-20T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.165938 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: W0320 06:51:16.186083 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod963bf1ca_b871_4cad_a1fc_cf829a70a81a.slice/crio-fc8a676d87b2c6b9273e55ddfc0af4b456dbdcc2adee4a1bbfceebb87273789e WatchSource:0}: Error finding container fc8a676d87b2c6b9273e55ddfc0af4b456dbdcc2adee4a1bbfceebb87273789e: Status 404 returned error can't find the container with id fc8a676d87b2c6b9273e55ddfc0af4b456dbdcc2adee4a1bbfceebb87273789e Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.189596 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:16 crc kubenswrapper[5136]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 20 06:51:16 crc kubenswrapper[5136]: apiVersion: v1 Mar 20 06:51:16 crc kubenswrapper[5136]: clusters: Mar 20 06:51:16 crc kubenswrapper[5136]: - cluster: Mar 20 06:51:16 crc kubenswrapper[5136]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 20 06:51:16 crc kubenswrapper[5136]: server: https://api-int.crc.testing:6443 Mar 20 06:51:16 crc kubenswrapper[5136]: name: default-cluster Mar 20 06:51:16 crc kubenswrapper[5136]: contexts: Mar 20 06:51:16 crc 
kubenswrapper[5136]: - context: Mar 20 06:51:16 crc kubenswrapper[5136]: cluster: default-cluster Mar 20 06:51:16 crc kubenswrapper[5136]: namespace: default Mar 20 06:51:16 crc kubenswrapper[5136]: user: default-auth Mar 20 06:51:16 crc kubenswrapper[5136]: name: default-context Mar 20 06:51:16 crc kubenswrapper[5136]: current-context: default-context Mar 20 06:51:16 crc kubenswrapper[5136]: kind: Config Mar 20 06:51:16 crc kubenswrapper[5136]: preferences: {} Mar 20 06:51:16 crc kubenswrapper[5136]: users: Mar 20 06:51:16 crc kubenswrapper[5136]: - name: default-auth Mar 20 06:51:16 crc kubenswrapper[5136]: user: Mar 20 06:51:16 crc kubenswrapper[5136]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 20 06:51:16 crc kubenswrapper[5136]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 20 06:51:16 crc kubenswrapper[5136]: EOF Mar 20 06:51:16 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nrnqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Mar 20 06:51:16 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.190896 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.232217 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.232545 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.232700 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.232909 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.233126 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:16Z","lastTransitionTime":"2026-03-20T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.336552 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.336607 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.336623 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.336648 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.336665 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:16Z","lastTransitionTime":"2026-03-20T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.396652 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.396746 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.396885 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.396895 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.397034 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.397214 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.438978 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.439040 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.439064 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.439095 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.439120 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:16Z","lastTransitionTime":"2026-03-20T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.541139 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.541192 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.541233 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.541269 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.541291 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:16Z","lastTransitionTime":"2026-03-20T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.644784 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.644883 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.644924 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.644961 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.644980 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:16Z","lastTransitionTime":"2026-03-20T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.747656 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.747704 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.747715 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.747733 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.747745 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:16Z","lastTransitionTime":"2026-03-20T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.822690 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tjpps" event={"ID":"263c5427-a835-40c6-93cb-4bb66a83ea5b","Type":"ContainerStarted","Data":"18004b385b788eb1cc3a9afac0160c58ea75a8e7f77ca5f5520deed36cd9c1b5"} Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.824236 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"075a96b005188740a40783675493adaee0253fb7bd1c86fd69929f3b319276f7"} Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.825573 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:16 crc kubenswrapper[5136]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 20 06:51:16 crc kubenswrapper[5136]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 20 06:51:16 crc kubenswrapper[5136]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dlfxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-tjpps_openshift-multus(263c5427-a835-40c6-93cb-4bb66a83ea5b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:51:16 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.825998 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerStarted","Data":"fc8a676d87b2c6b9273e55ddfc0af4b456dbdcc2adee4a1bbfceebb87273789e"} Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.826372 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-94xgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.826750 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-tjpps" podUID="263c5427-a835-40c6-93cb-4bb66a83ea5b" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.827530 5136 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" event={"ID":"059eafe0-4e83-486d-b958-992b00aa0878","Type":"ContainerStarted","Data":"0dbfbfc98637ffc99d695ec77ba68c86041d535157c89bb27cf3986d0c5ba8d4"} Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.828532 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:16 crc kubenswrapper[5136]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 20 06:51:16 crc kubenswrapper[5136]: apiVersion: v1 Mar 20 06:51:16 crc kubenswrapper[5136]: clusters: Mar 20 06:51:16 crc kubenswrapper[5136]: - cluster: Mar 20 06:51:16 crc kubenswrapper[5136]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 20 06:51:16 crc kubenswrapper[5136]: server: https://api-int.crc.testing:6443 Mar 20 06:51:16 crc kubenswrapper[5136]: name: default-cluster Mar 20 06:51:16 crc kubenswrapper[5136]: contexts: Mar 20 06:51:16 crc kubenswrapper[5136]: - context: Mar 20 06:51:16 crc kubenswrapper[5136]: cluster: default-cluster Mar 20 06:51:16 crc kubenswrapper[5136]: namespace: default Mar 20 06:51:16 crc kubenswrapper[5136]: user: default-auth Mar 20 06:51:16 crc kubenswrapper[5136]: name: default-context Mar 20 06:51:16 crc kubenswrapper[5136]: current-context: default-context Mar 20 06:51:16 crc kubenswrapper[5136]: kind: Config Mar 20 06:51:16 crc kubenswrapper[5136]: preferences: {} Mar 20 06:51:16 crc kubenswrapper[5136]: users: Mar 20 06:51:16 crc kubenswrapper[5136]: - name: default-auth Mar 20 06:51:16 crc kubenswrapper[5136]: user: Mar 20 06:51:16 crc kubenswrapper[5136]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 20 06:51:16 crc kubenswrapper[5136]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 20 06:51:16 crc 
kubenswrapper[5136]: EOF Mar 20 06:51:16 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nrnqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:51:16 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.829120 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 
--upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-94xgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.829567 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wd92w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-dbsfs_openshift-multus(059eafe0-4e83-486d-b958-992b00aa0878): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.831216 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" podUID="059eafe0-4e83-486d-b958-992b00aa0878" Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.831246 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.832510 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.843297 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.850490 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.850528 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.850539 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.850556 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.850568 5136 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:16Z","lastTransitionTime":"2026-03-20T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.862993 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.878868 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.888986 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.908794 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.931161 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.946789 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.954138 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.954195 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.954213 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.954239 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.954257 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:16Z","lastTransitionTime":"2026-03-20T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.974036 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.990178 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.007683 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.024772 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.046932 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.057225 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.057265 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.057282 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.057305 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.057323 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:17Z","lastTransitionTime":"2026-03-20T06:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.062675 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.080337 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.094336 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.118603 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.131965 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.145860 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.161090 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.161126 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.161138 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.161156 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.161170 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:17Z","lastTransitionTime":"2026-03-20T06:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.165296 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.176738 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.190228 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.206284 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.222351 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.237532 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.249586 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.264521 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.264583 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.264610 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.264643 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.264668 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:17Z","lastTransitionTime":"2026-03-20T06:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.269295 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.367564 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.367620 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.367637 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.367661 5136 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.367679 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:17Z","lastTransitionTime":"2026-03-20T06:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:17 crc kubenswrapper[5136]: E0320 06:51:17.398012 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:17 crc kubenswrapper[5136]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 20 06:51:17 crc kubenswrapper[5136]: set -o allexport Mar 20 06:51:17 crc kubenswrapper[5136]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 20 06:51:17 crc kubenswrapper[5136]: source /etc/kubernetes/apiserver-url.env Mar 20 06:51:17 crc kubenswrapper[5136]: else Mar 20 06:51:17 crc kubenswrapper[5136]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 20 06:51:17 crc kubenswrapper[5136]: exit 1 Mar 20 06:51:17 crc kubenswrapper[5136]: fi Mar 20 06:51:17 crc kubenswrapper[5136]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 20 06:51:17 crc kubenswrapper[5136]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:51:17 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:17 crc kubenswrapper[5136]: E0320 06:51:17.399199 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.470650 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 
06:51:17.470706 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.470722 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.470747 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.470785 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:17Z","lastTransitionTime":"2026-03-20T06:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.574392 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.574441 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.574458 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.574480 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.574496 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:17Z","lastTransitionTime":"2026-03-20T06:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.677405 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.677469 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.677486 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.677509 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.677527 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:17Z","lastTransitionTime":"2026-03-20T06:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.780041 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.780081 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.780092 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.780109 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.780122 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:17Z","lastTransitionTime":"2026-03-20T06:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.882377 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.882436 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.882454 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.882479 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.882498 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:17Z","lastTransitionTime":"2026-03-20T06:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.985183 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.985242 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.985260 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.985285 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.985306 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:17Z","lastTransitionTime":"2026-03-20T06:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.087756 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.087802 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.087842 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.087857 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.087866 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:18Z","lastTransitionTime":"2026-03-20T06:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.191732 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.191799 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.191847 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.191873 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.191890 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:18Z","lastTransitionTime":"2026-03-20T06:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.294754 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.294855 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.294873 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.294899 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.294916 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:18Z","lastTransitionTime":"2026-03-20T06:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.396516 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.396551 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:18 crc kubenswrapper[5136]: E0320 06:51:18.397039 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.397106 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:18 crc kubenswrapper[5136]: E0320 06:51:18.397274 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:18 crc kubenswrapper[5136]: E0320 06:51:18.397368 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:18 crc kubenswrapper[5136]: E0320 06:51:18.399478 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:18 crc kubenswrapper[5136]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 20 06:51:18 crc kubenswrapper[5136]: if [[ -f "/env/_master" ]]; then Mar 20 06:51:18 crc kubenswrapper[5136]: set -o allexport Mar 20 06:51:18 crc kubenswrapper[5136]: source "/env/_master" Mar 20 06:51:18 crc kubenswrapper[5136]: set +o allexport Mar 20 06:51:18 crc kubenswrapper[5136]: fi Mar 20 06:51:18 crc kubenswrapper[5136]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 20 06:51:18 crc kubenswrapper[5136]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 20 06:51:18 crc kubenswrapper[5136]: ho_enable="--enable-hybrid-overlay" Mar 20 06:51:18 crc kubenswrapper[5136]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 20 06:51:18 crc kubenswrapper[5136]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 20 06:51:18 crc kubenswrapper[5136]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 20 06:51:18 crc kubenswrapper[5136]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 20 06:51:18 crc kubenswrapper[5136]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 20 06:51:18 crc kubenswrapper[5136]: --webhook-host=127.0.0.1 \ Mar 20 06:51:18 crc kubenswrapper[5136]: --webhook-port=9743 \ Mar 20 06:51:18 crc kubenswrapper[5136]: ${ho_enable} \ Mar 20 06:51:18 crc kubenswrapper[5136]: --enable-interconnect \ Mar 20 06:51:18 crc kubenswrapper[5136]: 
--disable-approver \ Mar 20 06:51:18 crc kubenswrapper[5136]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 20 06:51:18 crc kubenswrapper[5136]: --wait-for-kubernetes-api=200s \ Mar 20 06:51:18 crc kubenswrapper[5136]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 20 06:51:18 crc kubenswrapper[5136]: --loglevel="${LOGLEVEL}" Mar 20 06:51:18 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:n
il,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:51:18 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.400509 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.400554 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.400579 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.400606 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.400627 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:18Z","lastTransitionTime":"2026-03-20T06:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:18 crc kubenswrapper[5136]: E0320 06:51:18.402331 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:18 crc kubenswrapper[5136]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 20 06:51:18 crc kubenswrapper[5136]: if [[ -f "/env/_master" ]]; then Mar 20 06:51:18 crc kubenswrapper[5136]: set -o allexport Mar 20 06:51:18 crc kubenswrapper[5136]: source "/env/_master" Mar 20 06:51:18 crc kubenswrapper[5136]: set +o allexport Mar 20 06:51:18 crc kubenswrapper[5136]: fi Mar 20 06:51:18 crc kubenswrapper[5136]: Mar 20 06:51:18 crc kubenswrapper[5136]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 20 06:51:18 crc kubenswrapper[5136]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 20 06:51:18 crc kubenswrapper[5136]: --disable-webhook \ Mar 20 06:51:18 crc kubenswrapper[5136]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 20 06:51:18 crc kubenswrapper[5136]: --loglevel="${LOGLEVEL}" Mar 20 06:51:18 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:51:18 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:18 crc kubenswrapper[5136]: E0320 06:51:18.403540 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.413186 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.425672 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.440097 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.450991 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.466252 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.474729 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.486218 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.503870 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.504200 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.504416 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 
06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.504581 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.504743 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:18Z","lastTransitionTime":"2026-03-20T06:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.515676 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.531566 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.548364 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.560156 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.570330 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.581896 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.607432 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.607489 5136 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.607509 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.607534 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.607552 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:18Z","lastTransitionTime":"2026-03-20T06:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.709848 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.709910 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.709928 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.709953 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.709971 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:18Z","lastTransitionTime":"2026-03-20T06:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.812635 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.812677 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.812693 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.812718 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.812735 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:18Z","lastTransitionTime":"2026-03-20T06:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.398560 5136 scope.go:117] "RemoveContainer" containerID="74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2"
Mar 20 06:51:19 crc kubenswrapper[5136]: E0320 06:51:19.399089 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 20 06:51:19 crc kubenswrapper[5136]: E0320 06:51:19.399495 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
Mar 20 06:51:19 crc kubenswrapper[5136]: E0320 06:51:19.400630 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49"
Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.396003 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.396201 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 20 06:51:20 crc kubenswrapper[5136]: E0320 06:51:20.396381 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.396438 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 20 06:51:20 crc kubenswrapper[5136]: E0320 06:51:20.396622 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 20 06:51:20 crc kubenswrapper[5136]: E0320 06:51:20.396879 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.495849 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.495880 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.495891 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.495906 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.495917 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:21Z","lastTransitionTime":"2026-03-20T06:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.597707 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.597776 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.597800 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.597869 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.597891 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:21Z","lastTransitionTime":"2026-03-20T06:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.700894 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.700954 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.700974 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.701007 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.701029 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:21Z","lastTransitionTime":"2026-03-20T06:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.715624 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-g5hkc"] Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.716154 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-g5hkc" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.719107 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.719501 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.721058 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.721139 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.741019 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.751154 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.767085 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.782902 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.784357 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9076e831-6703-4014-9b7d-eb438a0b62f3-host\") pod \"node-ca-g5hkc\" (UID: \"9076e831-6703-4014-9b7d-eb438a0b62f3\") " pod="openshift-image-registry/node-ca-g5hkc" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 
06:51:21.784432 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9076e831-6703-4014-9b7d-eb438a0b62f3-serviceca\") pod \"node-ca-g5hkc\" (UID: \"9076e831-6703-4014-9b7d-eb438a0b62f3\") " pod="openshift-image-registry/node-ca-g5hkc" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.784692 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbpcd\" (UniqueName: \"kubernetes.io/projected/9076e831-6703-4014-9b7d-eb438a0b62f3-kube-api-access-hbpcd\") pod \"node-ca-g5hkc\" (UID: \"9076e831-6703-4014-9b7d-eb438a0b62f3\") " pod="openshift-image-registry/node-ca-g5hkc" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.796052 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.804046 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.804109 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.804135 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.804165 5136 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.804189 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:21Z","lastTransitionTime":"2026-03-20T06:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.812950 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.829414 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.843940 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.868125 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.879661 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.885589 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbpcd\" (UniqueName: \"kubernetes.io/projected/9076e831-6703-4014-9b7d-eb438a0b62f3-kube-api-access-hbpcd\") pod \"node-ca-g5hkc\" (UID: \"9076e831-6703-4014-9b7d-eb438a0b62f3\") " pod="openshift-image-registry/node-ca-g5hkc" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.885645 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9076e831-6703-4014-9b7d-eb438a0b62f3-host\") pod \"node-ca-g5hkc\" (UID: \"9076e831-6703-4014-9b7d-eb438a0b62f3\") " pod="openshift-image-registry/node-ca-g5hkc" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.885675 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9076e831-6703-4014-9b7d-eb438a0b62f3-serviceca\") pod \"node-ca-g5hkc\" (UID: \"9076e831-6703-4014-9b7d-eb438a0b62f3\") " pod="openshift-image-registry/node-ca-g5hkc" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.885889 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" 
(UniqueName: \"kubernetes.io/host-path/9076e831-6703-4014-9b7d-eb438a0b62f3-host\") pod \"node-ca-g5hkc\" (UID: \"9076e831-6703-4014-9b7d-eb438a0b62f3\") " pod="openshift-image-registry/node-ca-g5hkc" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.886776 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9076e831-6703-4014-9b7d-eb438a0b62f3-serviceca\") pod \"node-ca-g5hkc\" (UID: \"9076e831-6703-4014-9b7d-eb438a0b62f3\") " pod="openshift-image-registry/node-ca-g5hkc" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.894944 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.906478 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbpcd\" (UniqueName: \"kubernetes.io/projected/9076e831-6703-4014-9b7d-eb438a0b62f3-kube-api-access-hbpcd\") pod \"node-ca-g5hkc\" (UID: \"9076e831-6703-4014-9b7d-eb438a0b62f3\") " pod="openshift-image-registry/node-ca-g5hkc" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.907118 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.907152 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.907164 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.907181 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.907192 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:21Z","lastTransitionTime":"2026-03-20T06:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.907534 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.921108 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.931047 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.009224 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.009292 5136 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.009310 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.009335 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.009352 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:22Z","lastTransitionTime":"2026-03-20T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.037445 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-g5hkc" Mar 20 06:51:22 crc kubenswrapper[5136]: W0320 06:51:22.051487 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9076e831_6703_4014_9b7d_eb438a0b62f3.slice/crio-037090a33bb13d8e1de8ae6b0921acca55263501456e7c44289924e71f635278 WatchSource:0}: Error finding container 037090a33bb13d8e1de8ae6b0921acca55263501456e7c44289924e71f635278: Status 404 returned error can't find the container with id 037090a33bb13d8e1de8ae6b0921acca55263501456e7c44289924e71f635278 Mar 20 06:51:22 crc kubenswrapper[5136]: E0320 06:51:22.054647 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:22 crc kubenswrapper[5136]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 20 06:51:22 crc kubenswrapper[5136]: while [ true ]; Mar 20 06:51:22 crc kubenswrapper[5136]: do Mar 20 06:51:22 crc kubenswrapper[5136]: for f in $(ls /tmp/serviceca); do Mar 20 06:51:22 crc kubenswrapper[5136]: echo $f Mar 20 06:51:22 crc kubenswrapper[5136]: ca_file_path="/tmp/serviceca/${f}" Mar 20 06:51:22 crc kubenswrapper[5136]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 20 06:51:22 crc kubenswrapper[5136]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 20 06:51:22 crc kubenswrapper[5136]: if [ -e "${reg_dir_path}" ]; then Mar 20 06:51:22 crc kubenswrapper[5136]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 20 06:51:22 crc kubenswrapper[5136]: else Mar 20 06:51:22 crc kubenswrapper[5136]: mkdir $reg_dir_path Mar 20 06:51:22 crc kubenswrapper[5136]: cp $ca_file_path $reg_dir_path/ca.crt Mar 20 06:51:22 crc kubenswrapper[5136]: fi Mar 20 06:51:22 crc kubenswrapper[5136]: done Mar 20 06:51:22 crc kubenswrapper[5136]: for d in $(ls /etc/docker/certs.d); do 
Mar 20 06:51:22 crc kubenswrapper[5136]: echo $d Mar 20 06:51:22 crc kubenswrapper[5136]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 20 06:51:22 crc kubenswrapper[5136]: reg_conf_path="/tmp/serviceca/${dp}" Mar 20 06:51:22 crc kubenswrapper[5136]: if [ ! -e "${reg_conf_path}" ]; then Mar 20 06:51:22 crc kubenswrapper[5136]: rm -rf /etc/docker/certs.d/$d Mar 20 06:51:22 crc kubenswrapper[5136]: fi Mar 20 06:51:22 crc kubenswrapper[5136]: done Mar 20 06:51:22 crc kubenswrapper[5136]: sleep 60 & wait ${!} Mar 20 06:51:22 crc kubenswrapper[5136]: done Mar 20 06:51:22 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbpcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
node-ca-g5hkc_openshift-image-registry(9076e831-6703-4014-9b7d-eb438a0b62f3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:51:22 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:22 crc kubenswrapper[5136]: E0320 06:51:22.056410 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-g5hkc" podUID="9076e831-6703-4014-9b7d-eb438a0b62f3" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.112437 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.112491 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.112513 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.112543 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.112566 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:22Z","lastTransitionTime":"2026-03-20T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.215840 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.215917 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.215936 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.216316 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.216523 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:22Z","lastTransitionTime":"2026-03-20T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.320123 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.320170 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.320188 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.320211 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.320301 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:22Z","lastTransitionTime":"2026-03-20T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.396190 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.396226 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.396196 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:22 crc kubenswrapper[5136]: E0320 06:51:22.396390 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:22 crc kubenswrapper[5136]: E0320 06:51:22.396475 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:22 crc kubenswrapper[5136]: E0320 06:51:22.396609 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.423036 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.423101 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.423119 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.423142 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.423160 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:22Z","lastTransitionTime":"2026-03-20T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.525849 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.526694 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.527041 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.527262 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.527445 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:22Z","lastTransitionTime":"2026-03-20T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.630180 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.630244 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.630267 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.630297 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.630318 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:22Z","lastTransitionTime":"2026-03-20T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.733325 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.733378 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.733390 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.733408 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.733419 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:22Z","lastTransitionTime":"2026-03-20T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.836580 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.836630 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.836642 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.836663 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.836675 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:22Z","lastTransitionTime":"2026-03-20T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.844696 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-g5hkc" event={"ID":"9076e831-6703-4014-9b7d-eb438a0b62f3","Type":"ContainerStarted","Data":"037090a33bb13d8e1de8ae6b0921acca55263501456e7c44289924e71f635278"} Mar 20 06:51:22 crc kubenswrapper[5136]: E0320 06:51:22.846571 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:22 crc kubenswrapper[5136]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 20 06:51:22 crc kubenswrapper[5136]: while [ true ]; Mar 20 06:51:22 crc kubenswrapper[5136]: do Mar 20 06:51:22 crc kubenswrapper[5136]: for f in $(ls /tmp/serviceca); do Mar 20 06:51:22 crc kubenswrapper[5136]: echo $f Mar 20 06:51:22 crc kubenswrapper[5136]: ca_file_path="/tmp/serviceca/${f}" Mar 20 06:51:22 crc kubenswrapper[5136]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 20 06:51:22 crc kubenswrapper[5136]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 20 06:51:22 crc kubenswrapper[5136]: if [ -e "${reg_dir_path}" ]; then Mar 20 06:51:22 crc kubenswrapper[5136]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 20 06:51:22 crc kubenswrapper[5136]: else Mar 20 06:51:22 crc kubenswrapper[5136]: mkdir $reg_dir_path Mar 20 06:51:22 crc kubenswrapper[5136]: cp $ca_file_path $reg_dir_path/ca.crt Mar 20 06:51:22 crc kubenswrapper[5136]: fi Mar 20 06:51:22 crc kubenswrapper[5136]: done Mar 20 06:51:22 crc kubenswrapper[5136]: for d in $(ls /etc/docker/certs.d); do Mar 20 06:51:22 crc kubenswrapper[5136]: echo $d Mar 20 06:51:22 crc kubenswrapper[5136]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 20 06:51:22 crc kubenswrapper[5136]: reg_conf_path="/tmp/serviceca/${dp}" Mar 20 06:51:22 crc kubenswrapper[5136]: if [ ! 
-e "${reg_conf_path}" ]; then Mar 20 06:51:22 crc kubenswrapper[5136]: rm -rf /etc/docker/certs.d/$d Mar 20 06:51:22 crc kubenswrapper[5136]: fi Mar 20 06:51:22 crc kubenswrapper[5136]: done Mar 20 06:51:22 crc kubenswrapper[5136]: sleep 60 & wait ${!} Mar 20 06:51:22 crc kubenswrapper[5136]: done Mar 20 06:51:22 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbpcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-g5hkc_openshift-image-registry(9076e831-6703-4014-9b7d-eb438a0b62f3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:51:22 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:22 crc kubenswrapper[5136]: E0320 06:51:22.855926 5136 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-g5hkc" podUID="9076e831-6703-4014-9b7d-eb438a0b62f3" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.867914 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.881561 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.906783 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.924720 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.938793 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.938870 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.938888 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.938914 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.938931 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:22Z","lastTransitionTime":"2026-03-20T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.945256 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.965841 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.979897 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.994303 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.008025 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.017892 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.031550 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.039419 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.041276 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.041313 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.041325 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.041345 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.041357 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:23Z","lastTransitionTime":"2026-03-20T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.047894 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.058314 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.143795 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.143863 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.143873 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.143887 5136 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.143898 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:23Z","lastTransitionTime":"2026-03-20T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.246672 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.246690 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.246698 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.246708 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.246716 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:23Z","lastTransitionTime":"2026-03-20T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.349206 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.349248 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.349262 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.349281 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.349293 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:23Z","lastTransitionTime":"2026-03-20T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.452208 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.452248 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.452259 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.452275 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.452287 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:23Z","lastTransitionTime":"2026-03-20T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.555380 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.555450 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.555466 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.555488 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.555504 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:23Z","lastTransitionTime":"2026-03-20T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.658600 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.658657 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.658674 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.658698 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.658715 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:23Z","lastTransitionTime":"2026-03-20T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.761643 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.761679 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.761691 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.761705 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.761714 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:23Z","lastTransitionTime":"2026-03-20T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.864100 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.864161 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.864182 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.864205 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.864222 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:23Z","lastTransitionTime":"2026-03-20T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.968112 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.968182 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.968209 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.968239 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.968262 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:23Z","lastTransitionTime":"2026-03-20T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.070728 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.070786 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.070805 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.070865 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.070884 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:24Z","lastTransitionTime":"2026-03-20T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.173486 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.173549 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.173562 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.173578 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.173590 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:24Z","lastTransitionTime":"2026-03-20T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.276184 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.276251 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.276276 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.276305 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.276327 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:24Z","lastTransitionTime":"2026-03-20T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.312638 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.312751 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.312790 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.312849 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.312894 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.312998 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:51:56.312963718 +0000 UTC m=+148.572274909 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.312997 5136 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.313051 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.313073 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.313074 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.313089 5136 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.313100 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.313094 5136 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.313113 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:56.313092602 +0000 UTC m=+148.572403793 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.313114 5136 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.313222 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:56.313191166 +0000 UTC m=+148.572502357 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.313285 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:56.313246158 +0000 UTC m=+148.572557399 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.313310 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:56.313300619 +0000 UTC m=+148.572611880 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.379628 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.379697 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.379721 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.379749 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.379769 5136 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:24Z","lastTransitionTime":"2026-03-20T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.395961 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.396033 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.396054 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.396169 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.396374 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.396697 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.406107 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.482567 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.482617 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.482634 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.482660 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.482678 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:24Z","lastTransitionTime":"2026-03-20T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.585127 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.585167 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.585180 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.585197 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.585208 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:24Z","lastTransitionTime":"2026-03-20T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.687580 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.687616 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.687625 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.687639 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.687647 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:24Z","lastTransitionTime":"2026-03-20T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.789845 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.789916 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.789928 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.789956 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.789969 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:24Z","lastTransitionTime":"2026-03-20T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.891857 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.891901 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.891911 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.891925 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.891936 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:24Z","lastTransitionTime":"2026-03-20T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.995756 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.995851 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.995887 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.995915 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.995953 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:24Z","lastTransitionTime":"2026-03-20T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.099391 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.099441 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.099453 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.099472 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.099483 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:25Z","lastTransitionTime":"2026-03-20T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.202715 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.202769 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.202781 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.202800 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.202831 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:25Z","lastTransitionTime":"2026-03-20T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.305677 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.305730 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.305746 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.305768 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.305786 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:25Z","lastTransitionTime":"2026-03-20T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.408531 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.408585 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.408595 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.408609 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.408620 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:25Z","lastTransitionTime":"2026-03-20T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.512005 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.512072 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.512095 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.512126 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.512149 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:25Z","lastTransitionTime":"2026-03-20T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.615408 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.615470 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.615488 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.615511 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.615529 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:25Z","lastTransitionTime":"2026-03-20T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.718569 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.718638 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.718660 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.718688 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.718710 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:25Z","lastTransitionTime":"2026-03-20T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.821128 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.821167 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.821178 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.821194 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.821206 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:25Z","lastTransitionTime":"2026-03-20T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.923599 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.923637 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.923645 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.923659 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.923669 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:25Z","lastTransitionTime":"2026-03-20T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.978721 5136 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.025461 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.025594 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.025806 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.025850 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.025862 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.056643 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.056682 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.056690 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.056704 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.056715 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:26 crc kubenswrapper[5136]: E0320 06:51:26.065406 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.068389 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.068418 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.068427 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.068440 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.068448 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:26 crc kubenswrapper[5136]: E0320 06:51:26.076633 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.079835 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.079868 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.079876 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.079888 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.079896 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:26 crc kubenswrapper[5136]: E0320 06:51:26.113091 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:26 crc kubenswrapper[5136]: E0320 06:51:26.113241 5136 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.128705 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.128746 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.128762 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.128784 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.128798 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.231243 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.231298 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.231309 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.231326 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.231335 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.334330 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.334366 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.334376 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.334392 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.334405 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.396077 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.396165 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.396110 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:26 crc kubenswrapper[5136]: E0320 06:51:26.396272 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:26 crc kubenswrapper[5136]: E0320 06:51:26.396359 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:26 crc kubenswrapper[5136]: E0320 06:51:26.396498 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.440587 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.440639 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.440667 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.440690 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.440710 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.543763 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.543806 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.543845 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.543866 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.543880 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.646597 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.646650 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.646663 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.646683 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.646702 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.749071 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.749100 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.749108 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.749120 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.749129 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.851283 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.851337 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.851349 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.851365 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.851376 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.954261 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.954296 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.954304 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.954317 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.954327 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.056524 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.056581 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.056594 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.056613 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.056628 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:27Z","lastTransitionTime":"2026-03-20T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.158738 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.158807 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.158858 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.158880 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.158897 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:27Z","lastTransitionTime":"2026-03-20T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.262932 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.263009 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.263027 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.263048 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.263064 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:27Z","lastTransitionTime":"2026-03-20T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.366804 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.366922 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.366941 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.366968 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.366987 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:27Z","lastTransitionTime":"2026-03-20T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.470539 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.470795 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.470804 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.470840 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.470850 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:27Z","lastTransitionTime":"2026-03-20T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.487902 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt"] Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.488374 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.491058 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.491190 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.504744 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.529592 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.540240 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.545172 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36fc020e-a22e-4bde-90c1-4e52cdefde58-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pqsdt\" (UID: \"36fc020e-a22e-4bde-90c1-4e52cdefde58\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.545228 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36fc020e-a22e-4bde-90c1-4e52cdefde58-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pqsdt\" (UID: \"36fc020e-a22e-4bde-90c1-4e52cdefde58\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.545402 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36fc020e-a22e-4bde-90c1-4e52cdefde58-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pqsdt\" (UID: \"36fc020e-a22e-4bde-90c1-4e52cdefde58\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.545471 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v92zk\" (UniqueName: \"kubernetes.io/projected/36fc020e-a22e-4bde-90c1-4e52cdefde58-kube-api-access-v92zk\") pod \"ovnkube-control-plane-749d76644c-pqsdt\" (UID: \"36fc020e-a22e-4bde-90c1-4e52cdefde58\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.549715 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.557008 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.566471 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.572956 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:27 crc 
kubenswrapper[5136]: I0320 06:51:27.572979 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.572987 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.573001 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.573012 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:27Z","lastTransitionTime":"2026-03-20T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.575419 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.587011 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.602970 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922
69314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.613595 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready 
status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.626682 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.636928 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.644873 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.646843 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36fc020e-a22e-4bde-90c1-4e52cdefde58-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pqsdt\" (UID: \"36fc020e-a22e-4bde-90c1-4e52cdefde58\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.646915 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36fc020e-a22e-4bde-90c1-4e52cdefde58-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pqsdt\" (UID: \"36fc020e-a22e-4bde-90c1-4e52cdefde58\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.646949 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36fc020e-a22e-4bde-90c1-4e52cdefde58-env-overrides\") pod 
\"ovnkube-control-plane-749d76644c-pqsdt\" (UID: \"36fc020e-a22e-4bde-90c1-4e52cdefde58\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.646970 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v92zk\" (UniqueName: \"kubernetes.io/projected/36fc020e-a22e-4bde-90c1-4e52cdefde58-kube-api-access-v92zk\") pod \"ovnkube-control-plane-749d76644c-pqsdt\" (UID: \"36fc020e-a22e-4bde-90c1-4e52cdefde58\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.647720 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36fc020e-a22e-4bde-90c1-4e52cdefde58-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pqsdt\" (UID: \"36fc020e-a22e-4bde-90c1-4e52cdefde58\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.647912 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36fc020e-a22e-4bde-90c1-4e52cdefde58-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pqsdt\" (UID: \"36fc020e-a22e-4bde-90c1-4e52cdefde58\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.651317 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36fc020e-a22e-4bde-90c1-4e52cdefde58-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pqsdt\" (UID: \"36fc020e-a22e-4bde-90c1-4e52cdefde58\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.656292 5136 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.664907 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v92zk\" (UniqueName: \"kubernetes.io/projected/36fc020e-a22e-4bde-90c1-4e52cdefde58-kube-api-access-v92zk\") pod \"ovnkube-control-plane-749d76644c-pqsdt\" (UID: \"36fc020e-a22e-4bde-90c1-4e52cdefde58\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.675680 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.675806 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.675890 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.675908 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.675934 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.675952 5136 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:27Z","lastTransitionTime":"2026-03-20T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.684354 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.779401 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.779436 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.779447 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.779464 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.779475 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:27Z","lastTransitionTime":"2026-03-20T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.806950 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: W0320 06:51:27.825371 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36fc020e_a22e_4bde_90c1_4e52cdefde58.slice/crio-148d70d7e18cf66c3d8907f976b18c3a3bf279100b85c3a32493b9a8a499f8f8 WatchSource:0}: Error finding container 148d70d7e18cf66c3d8907f976b18c3a3bf279100b85c3a32493b9a8a499f8f8: Status 404 returned error can't find the container with id 148d70d7e18cf66c3d8907f976b18c3a3bf279100b85c3a32493b9a8a499f8f8 Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.860652 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" event={"ID":"36fc020e-a22e-4bde-90c1-4e52cdefde58","Type":"ContainerStarted","Data":"148d70d7e18cf66c3d8907f976b18c3a3bf279100b85c3a32493b9a8a499f8f8"} Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.863057 5136 generic.go:334] "Generic (PLEG): container finished" podID="059eafe0-4e83-486d-b958-992b00aa0878" containerID="b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f" exitCode=0 Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.863115 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" event={"ID":"059eafe0-4e83-486d-b958-992b00aa0878","Type":"ContainerDied","Data":"b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f"} Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.874680 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.882292 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.882469 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.882568 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.882600 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.882620 5136 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:27Z","lastTransitionTime":"2026-03-20T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.885609 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.894063 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.908495 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.923468 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.933920 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.945782 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.956674 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.967954 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.981652 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.986193 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.986231 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.986243 5136 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.986287 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.986303 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:27Z","lastTransitionTime":"2026-03-20T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.993298 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.001877 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.009858 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.022999 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.031060 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe06
2b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.040481 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.088616 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.088663 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.088675 5136 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.088691 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.088703 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:28Z","lastTransitionTime":"2026-03-20T06:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.191724 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.191753 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.191762 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.191931 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.191949 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:28Z","lastTransitionTime":"2026-03-20T06:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.193954 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-jz6hg"] Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.194415 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:28 crc kubenswrapper[5136]: E0320 06:51:28.194557 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.208297 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.219716 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.235075 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.247125 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.254890 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58tdm\" (UniqueName: \"kubernetes.io/projected/b5572feb-df7d-4f3a-9b83-3be3de943668-kube-api-access-58tdm\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.254947 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.256887 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.268176 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.280697 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.293707 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.296778 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.296806 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.296836 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.296850 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.296859 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:28Z","lastTransitionTime":"2026-03-20T06:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.301758 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.309930 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.317739 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.323476 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.333420 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.343291 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.354355 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.355741 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58tdm\" (UniqueName: \"kubernetes.io/projected/b5572feb-df7d-4f3a-9b83-3be3de943668-kube-api-access-58tdm\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.355925 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:28 crc kubenswrapper[5136]: E0320 06:51:28.356097 
5136 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:51:28 crc kubenswrapper[5136]: E0320 06:51:28.356164 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs podName:b5572feb-df7d-4f3a-9b83-3be3de943668 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:28.856148291 +0000 UTC m=+121.115459442 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs") pod "network-metrics-daemon-jz6hg" (UID: "b5572feb-df7d-4f3a-9b83-3be3de943668") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.364150 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.370672 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58tdm\" (UniqueName: \"kubernetes.io/projected/b5572feb-df7d-4f3a-9b83-3be3de943668-kube-api-access-58tdm\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.372865 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.396209 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:28 crc kubenswrapper[5136]: E0320 06:51:28.396321 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.396643 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.396725 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:28 crc kubenswrapper[5136]: E0320 06:51:28.396799 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:28 crc kubenswrapper[5136]: E0320 06:51:28.396912 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:28 crc kubenswrapper[5136]: E0320 06:51:28.397063 5136 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.407925 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.417620 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.433083 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.443950 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.456419 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.466710 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.474108 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.487505 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: E0320 06:51:28.490931 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.497646 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720
243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.511973 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.519691 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.527484 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.535561 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.545433 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.554084 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.560871 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.569107 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.859245 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:28 crc kubenswrapper[5136]: E0320 06:51:28.859447 5136 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:51:28 crc kubenswrapper[5136]: E0320 06:51:28.859526 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs podName:b5572feb-df7d-4f3a-9b83-3be3de943668 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:29.859508242 +0000 UTC m=+122.118819393 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs") pod "network-metrics-daemon-jz6hg" (UID: "b5572feb-df7d-4f3a-9b83-3be3de943668") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.872310 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" event={"ID":"36fc020e-a22e-4bde-90c1-4e52cdefde58","Type":"ContainerStarted","Data":"e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d"} Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.872361 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" event={"ID":"36fc020e-a22e-4bde-90c1-4e52cdefde58","Type":"ContainerStarted","Data":"d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6"} Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.874198 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tjpps" event={"ID":"263c5427-a835-40c6-93cb-4bb66a83ea5b","Type":"ContainerStarted","Data":"84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68"} Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.876670 5136 generic.go:334] "Generic (PLEG): container finished" podID="059eafe0-4e83-486d-b958-992b00aa0878" containerID="09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80" exitCode=0 Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.876737 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" event={"ID":"059eafe0-4e83-486d-b958-992b00aa0878","Type":"ContainerDied","Data":"09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80"} Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.884354 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.894915 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.904177 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.917252 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.927303 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.939111 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.951318 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.964084 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.979588 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.991902 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922
69314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.005150 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.015968 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.024577 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.034133 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.043801 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.052764 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.062958 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.073159 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.083729 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.092276 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.108458 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.119742 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready 
status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.139101 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.180643 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.216850 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.255235 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.300752 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.334440 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.373943 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.396173 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:29 crc kubenswrapper[5136]: E0320 06:51:29.396631 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.414949 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.457367 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.497458 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.537920 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.578652 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc 
kubenswrapper[5136]: I0320 06:51:29.870716 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:29 crc kubenswrapper[5136]: E0320 06:51:29.870959 5136 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:51:29 crc kubenswrapper[5136]: E0320 06:51:29.871099 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs podName:b5572feb-df7d-4f3a-9b83-3be3de943668 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:31.871071179 +0000 UTC m=+124.130382350 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs") pod "network-metrics-daemon-jz6hg" (UID: "b5572feb-df7d-4f3a-9b83-3be3de943668") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.882159 5136 generic.go:334] "Generic (PLEG): container finished" podID="059eafe0-4e83-486d-b958-992b00aa0878" containerID="452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d" exitCode=0 Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.882225 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" event={"ID":"059eafe0-4e83-486d-b958-992b00aa0878","Type":"ContainerDied","Data":"452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d"} Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.884748 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3"} Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.884804 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9"} Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.899930 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.922537 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.935503 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.947738 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.961650 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c
25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.971604 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.986441 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.998132 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.022678 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.033209 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready 
status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.045723 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.063310 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.095515 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.142320 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.180158 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.215892 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.256583 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.302461 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.334740 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.378790 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.396711 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.396718 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:30 crc kubenswrapper[5136]: E0320 06:51:30.396973 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.397005 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:30 crc kubenswrapper[5136]: E0320 06:51:30.397217 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:30 crc kubenswrapper[5136]: E0320 06:51:30.397660 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.415436 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.456601 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.499899 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.534005 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.576480 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.616294 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.653718 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.696644 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.733557 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.776873 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.815398 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.854942 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.891769 5136 generic.go:334] "Generic (PLEG): container finished" podID="059eafe0-4e83-486d-b958-992b00aa0878" containerID="bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d" exitCode=0 Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.891857 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" event={"ID":"059eafe0-4e83-486d-b958-992b00aa0878","Type":"ContainerDied","Data":"bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d"} Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.909025 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.938485 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe06
2b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.975265 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.018450 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.059255 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.093774 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.144251 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f9
06400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
3-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.174896 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.218606 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.260173 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.299984 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.343347 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.378070 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.395647 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:31 crc kubenswrapper[5136]: E0320 06:51:31.395838 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.422431 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.456162 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.495762 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.536140 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef
78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.581246 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.619718 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.893644 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:31 crc kubenswrapper[5136]: E0320 06:51:31.893779 5136 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:51:31 crc kubenswrapper[5136]: E0320 06:51:31.893859 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs 
podName:b5572feb-df7d-4f3a-9b83-3be3de943668 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:35.8938418 +0000 UTC m=+128.153152951 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs") pod "network-metrics-daemon-jz6hg" (UID: "b5572feb-df7d-4f3a-9b83-3be3de943668") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.897985 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pt4jb" event={"ID":"4a27959f-3f41-4683-87d6-7b2a9210d634","Type":"ContainerStarted","Data":"ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8"} Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.902334 5136 generic.go:334] "Generic (PLEG): container finished" podID="059eafe0-4e83-486d-b958-992b00aa0878" containerID="edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f" exitCode=0 Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.902384 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" event={"ID":"059eafe0-4e83-486d-b958-992b00aa0878","Type":"ContainerDied","Data":"edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f"} Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.904211 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07"} Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.915104 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.925951 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe06
2b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.937326 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.946975 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.954637 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.965595 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a6
9384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.978080 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.991873 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.002884 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.016326 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.056729 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.096391 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.160891 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.184236 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.220702 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.256913 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.299620 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.344512 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.382570 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.396316 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.396365 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.396441 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:32 crc kubenswrapper[5136]: E0320 06:51:32.396440 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:32 crc kubenswrapper[5136]: E0320 06:51:32.396542 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:32 crc kubenswrapper[5136]: E0320 06:51:32.396609 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.421693 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.465483 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f9
06400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
3-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.496106 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.538333 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.581500 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.618384 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.665873 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.702238 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922
69314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.736491 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.777257 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.819966 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a6
9384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.857228 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.901596 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.909844 5136 generic.go:334] "Generic (PLEG): container finished" podID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerID="0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983" exitCode=0 Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.909936 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983"} Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.915486 5136 generic.go:334] "Generic (PLEG): container finished" podID="059eafe0-4e83-486d-b958-992b00aa0878" containerID="a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66" exitCode=0 Mar 20 06:51:32 crc 
kubenswrapper[5136]: I0320 06:51:32.915709 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" event={"ID":"059eafe0-4e83-486d-b958-992b00aa0878","Type":"ContainerDied","Data":"a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66"} Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.935692 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.980584 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.018387 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.057165 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.095843 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.159898 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.175657 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.215161 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.255192 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.297552 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.340915 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.377689 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922
69314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.395900 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:33 crc kubenswrapper[5136]: E0320 06:51:33.396030 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.416047 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.464429 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: E0320 06:51:33.492030 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.496513 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.538778 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.576037 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.617871 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.658148 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.920770 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae"} Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.925865 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" 
event={"ID":"059eafe0-4e83-486d-b958-992b00aa0878","Type":"ContainerStarted","Data":"dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583"} Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.927457 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7"} Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.927514 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37"} Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.931640 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerStarted","Data":"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568"} Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.931665 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerStarted","Data":"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200"} Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.931676 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerStarted","Data":"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566"} Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.931684 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" 
event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerStarted","Data":"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4"} Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.931694 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerStarted","Data":"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c"} Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.931703 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerStarted","Data":"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3"} Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.938267 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.953869 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.964159 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.980866 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.992152 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922
69314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.999614 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.008907 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.015468 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.024283 5136 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.056873 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.096047 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.135350 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.177495 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.218167 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.257674 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.302633 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.337276 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc 
kubenswrapper[5136]: I0320 06:51:34.379846 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc 
kubenswrapper[5136]: I0320 06:51:34.396579 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.396844 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:34 crc kubenswrapper[5136]: E0320 06:51:34.396844 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.396873 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:34 crc kubenswrapper[5136]: E0320 06:51:34.396910 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:34 crc kubenswrapper[5136]: E0320 06:51:34.396987 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.397527 5136 scope.go:117] "RemoveContainer" containerID="74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.426223 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.458199 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2
42b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 
2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.500511 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.539950 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.581073 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e61366024
5aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.618437 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.660141 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.699215 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.739767 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.776379 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.829166 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.859883 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.901168 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.937652 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.939855 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477"} Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.940357 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.941852 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.982114 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.026485 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.058232 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.099852 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.138399 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.181646 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.217871 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.258969 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.300479 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.337712 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.381782 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.396555 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:35 crc kubenswrapper[5136]: E0320 06:51:35.396627 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.430222 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.460667 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef31
8bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.497964 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.540535 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.583112 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.640668 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o:/
/08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.676918 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.700547 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.937063 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:35 crc kubenswrapper[5136]: E0320 06:51:35.937433 
5136 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:51:35 crc kubenswrapper[5136]: E0320 06:51:35.937594 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs podName:b5572feb-df7d-4f3a-9b83-3be3de943668 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:43.937573472 +0000 UTC m=+136.196884633 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs") pod "network-metrics-daemon-jz6hg" (UID: "b5572feb-df7d-4f3a-9b83-3be3de943668") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.944542 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-g5hkc" event={"ID":"9076e831-6703-4014-9b7d-eb438a0b62f3","Type":"ContainerStarted","Data":"766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f"} Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.948482 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerStarted","Data":"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383"} Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.960235 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.970664 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.985580 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.002404 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.015263 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.024770 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.039201 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.053529 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.067618 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.099185 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.144635 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.185656 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.219725 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.257198 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef31
8bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.301448 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.364998 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.382229 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.396615 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.396609 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:36 crc kubenswrapper[5136]: E0320 06:51:36.396847 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.396637 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:36 crc kubenswrapper[5136]: E0320 06:51:36.396974 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:36 crc kubenswrapper[5136]: E0320 06:51:36.397225 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.482143 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.482190 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.482202 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.482222 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.482235 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:36Z","lastTransitionTime":"2026-03-20T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:36 crc kubenswrapper[5136]: E0320 06:51:36.496060 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.504503 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.504546 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.504559 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.504578 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.504627 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:36Z","lastTransitionTime":"2026-03-20T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:36 crc kubenswrapper[5136]: E0320 06:51:36.516103 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.519589 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.519621 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.519635 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.519654 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.519666 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:36Z","lastTransitionTime":"2026-03-20T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:36 crc kubenswrapper[5136]: E0320 06:51:36.530285 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ <identical image list elided; byte-for-byte repeat of the payload in the first "failed to patch status" entry above> ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.533473 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.533505 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.533513 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.533528 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.533539 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:36Z","lastTransitionTime":"2026-03-20T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:36 crc kubenswrapper[5136]: E0320 06:51:36.543665 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ <identical image list elided; byte-for-byte repeat of the payload in the first "failed to patch status" entry above> ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.547562 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.547601 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.547612 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.547630 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.547641 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:36Z","lastTransitionTime":"2026-03-20T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:36 crc kubenswrapper[5136]: E0320 06:51:36.558661 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: E0320 06:51:36.558772 5136 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 06:51:37 crc kubenswrapper[5136]: I0320 06:51:37.396399 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:37 crc kubenswrapper[5136]: E0320 06:51:37.396557 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:37 crc kubenswrapper[5136]: I0320 06:51:37.961027 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerStarted","Data":"8528edc8f354a97d847207fdb6b6aadd9c6b6accf0023e48540253f15ec55fbf"} Mar 20 06:51:37 crc kubenswrapper[5136]: I0320 06:51:37.961357 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:37 crc kubenswrapper[5136]: I0320 06:51:37.961371 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:37 crc kubenswrapper[5136]: I0320 06:51:37.977109 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:37Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:37 crc kubenswrapper[5136]: I0320 06:51:37.990493 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.004150 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8528edc8f354a97d847207fdb6b6aadd9c6b6accf0023e48540253f15ec55fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.017051 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd7
91fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.031033 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.043343 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.056493 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.072521 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.087704 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.102017 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.111357 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.120750 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.133510 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.141679 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.153885 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.163326 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.174117 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.190047 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.203746 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.212209 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.222636 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.233299 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.243220 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.253039 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.260941 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.270104 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.282402 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.293726 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.305162 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.320254 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.330035 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.341198 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.352769 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.363428 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.382633 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8528edc8f354a97d847207fdb6b6aadd9c6b6accf0023e48540253f15ec55fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.395851 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.395901 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:38 crc kubenswrapper[5136]: E0320 06:51:38.395959 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:38 crc kubenswrapper[5136]: E0320 06:51:38.396027 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.396059 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:38 crc kubenswrapper[5136]: E0320 06:51:38.396204 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.408998 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.418417 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef31
8bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.430445 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.440694 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.454319 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.464941 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o:/
/08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.474686 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.486622 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: E0320 06:51:38.492411 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.505085 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.516075 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.532245 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.543525 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.556015 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.572647 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.586506 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.604730 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8528edc8f354a97d847207fdb6b6aadd9c6b6accf0023e48540253f15ec55fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.615407 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd79
1fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kube
rnetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.965273 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.985225 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.997130 5136 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325745
3265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.005149 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.014122 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.024001 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.034539 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.042678 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.056879 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.065208 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.081487 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.094981 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.105351 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.123003 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8528edc8f354a97d847207fdb6b6aadd9c6b6accf0023e48540253f15ec55fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.134791 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.143885 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef31
8bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.153950 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.164879 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.177431 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.395994 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:39 crc kubenswrapper[5136]: E0320 06:51:39.396120 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:40 crc kubenswrapper[5136]: I0320 06:51:40.397032 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:40 crc kubenswrapper[5136]: E0320 06:51:40.397163 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:40 crc kubenswrapper[5136]: I0320 06:51:40.397040 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:40 crc kubenswrapper[5136]: I0320 06:51:40.397218 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:40 crc kubenswrapper[5136]: E0320 06:51:40.397256 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:40 crc kubenswrapper[5136]: E0320 06:51:40.397364 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:40 crc kubenswrapper[5136]: I0320 06:51:40.973136 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/0.log" Mar 20 06:51:40 crc kubenswrapper[5136]: I0320 06:51:40.977011 5136 generic.go:334] "Generic (PLEG): container finished" podID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerID="8528edc8f354a97d847207fdb6b6aadd9c6b6accf0023e48540253f15ec55fbf" exitCode=1 Mar 20 06:51:40 crc kubenswrapper[5136]: I0320 06:51:40.977070 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"8528edc8f354a97d847207fdb6b6aadd9c6b6accf0023e48540253f15ec55fbf"} Mar 20 06:51:40 crc kubenswrapper[5136]: I0320 06:51:40.977765 5136 scope.go:117] "RemoveContainer" 
containerID="8528edc8f354a97d847207fdb6b6aadd9c6b6accf0023e48540253f15ec55fbf" Mar 20 06:51:40 crc kubenswrapper[5136]: I0320 06:51:40.995973 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:40Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.017171 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8528edc8f354a97d847207fdb6b6aadd9c6b6accf0023e48540253f15ec55fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8528edc8f354a97d847207fdb6b6aadd9c6b6accf0023e48540253f15ec55fbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:51:40Z\\\",\\\"message\\\":\\\"0493 7163 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0320 06:51:40.240539 7163 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:40.240619 
7163 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:40.240685 7163 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:40.240802 7163 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:40.241011 7163 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:40.241512 7163 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0320 06:51:40.241538 7163 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0320 06:51:40.241560 7163 handler.go:208] Removed *v1.Node event handler 2\\\\nI0320 06:51:40.241564 7163 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0320 06:51:40.241585 7163 handler.go:208] Removed *v1.Node event handler 7\\\\nI0320 06:51:40.241607 7163 factory.go:656] Stopping watch factory\\\\nI0320 06:51:40.241622 7163 ovnkube.go:599] Stopped ovnkube\\\\nI0320 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.030357 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.043848 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.054776 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.068047 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.083307 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.101147 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.118625 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.131146 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.144628 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.161176 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.171536 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.186444 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.195358 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.206048 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.216232 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.396520 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:41 crc kubenswrapper[5136]: E0320 06:51:41.396677 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.982044 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/1.log" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.982552 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/0.log" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.984714 5136 generic.go:334] "Generic (PLEG): container finished" podID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerID="0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627" exitCode=1 Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.984765 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627"} Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.984808 5136 scope.go:117] "RemoveContainer" containerID="8528edc8f354a97d847207fdb6b6aadd9c6b6accf0023e48540253f15ec55fbf" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.985617 5136 scope.go:117] "RemoveContainer" containerID="0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627" Mar 20 06:51:41 crc kubenswrapper[5136]: E0320 06:51:41.985786 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 
06:51:42.001917 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.011533 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.022418 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.035056 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.047998 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.061197 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.072447 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.082536 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.093321 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.104800 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.114718 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.131457 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.141648 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.157589 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.169414 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.181645 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.199794 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8528edc8f354a97d847207fdb6b6aadd9c6b6accf0023e48540253f15ec55fbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:51:40Z\\\",\\\"message\\\":\\\"0493 7163 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0320 06:51:40.240539 7163 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:40.240619 7163 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 
06:51:40.240685 7163 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:40.240802 7163 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:40.241011 7163 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:40.241512 7163 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0320 06:51:40.241538 7163 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0320 06:51:40.241560 7163 handler.go:208] Removed *v1.Node event handler 2\\\\nI0320 06:51:40.241564 7163 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0320 06:51:40.241585 7163 handler.go:208] Removed *v1.Node event handler 7\\\\nI0320 06:51:40.241607 7163 factory.go:656] Stopping watch factory\\\\nI0320 06:51:40.241622 7163 ovnkube.go:599] Stopped ovnkube\\\\nI0320 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:51:41Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0320 06:51:41.822309 7278 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822355 7278 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822501 7278 reflector.go:311] Stopping reflector 
*v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823026 7278 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823071 7278 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823129 7278 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823169 7278 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823370 7278 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\
\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n
rnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.396443 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.396513 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.396467 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:42 crc kubenswrapper[5136]: E0320 06:51:42.396671 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:42 crc kubenswrapper[5136]: E0320 06:51:42.396777 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:42 crc kubenswrapper[5136]: E0320 06:51:42.396903 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.991360 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/1.log" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.996753 5136 scope.go:117] "RemoveContainer" containerID="0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627" Mar 20 06:51:42 crc kubenswrapper[5136]: E0320 06:51:42.997280 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.017868 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1
a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.033298 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc 
kubenswrapper[5136]: I0320 06:51:43.053175 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.071767 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.093110 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.158049 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:51:41Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0320 06:51:41.822309 7278 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822355 7278 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822501 7278 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823026 7278 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823071 7278 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823129 7278 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823169 7278 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823370 7278 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.172366 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.186494 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.199432 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.211661 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.225711 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.238779 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.252067 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.263158 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.273179 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.284720 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.295418 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.395973 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:43 crc kubenswrapper[5136]: E0320 06:51:43.396126 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:43 crc kubenswrapper[5136]: E0320 06:51:43.494405 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 20 06:51:44 crc kubenswrapper[5136]: I0320 06:51:44.031040 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:44 crc kubenswrapper[5136]: E0320 06:51:44.031250 5136 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:51:44 crc kubenswrapper[5136]: E0320 06:51:44.031551 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs podName:b5572feb-df7d-4f3a-9b83-3be3de943668 nodeName:}" failed. No retries permitted until 2026-03-20 06:52:00.031523997 +0000 UTC m=+152.290835188 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs") pod "network-metrics-daemon-jz6hg" (UID: "b5572feb-df7d-4f3a-9b83-3be3de943668") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:51:44 crc kubenswrapper[5136]: I0320 06:51:44.396229 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:44 crc kubenswrapper[5136]: E0320 06:51:44.396406 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:44 crc kubenswrapper[5136]: I0320 06:51:44.396235 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:44 crc kubenswrapper[5136]: E0320 06:51:44.396512 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:44 crc kubenswrapper[5136]: I0320 06:51:44.396526 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:44 crc kubenswrapper[5136]: E0320 06:51:44.396756 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:45 crc kubenswrapper[5136]: I0320 06:51:45.396540 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:45 crc kubenswrapper[5136]: E0320 06:51:45.396725 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.166704 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.167524 5136 scope.go:117] "RemoveContainer" containerID="0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627" Mar 20 06:51:46 crc kubenswrapper[5136]: E0320 06:51:46.167657 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.396281 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:46 crc kubenswrapper[5136]: E0320 06:51:46.396631 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.396450 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:46 crc kubenswrapper[5136]: E0320 06:51:46.397204 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.396293 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:46 crc kubenswrapper[5136]: E0320 06:51:46.397376 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.939513 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.939551 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.939562 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.939578 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.939589 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:46Z","lastTransitionTime":"2026-03-20T06:51:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:46 crc kubenswrapper[5136]: E0320 06:51:46.959964 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:46Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.968177 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.968237 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.968255 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.968279 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.968297 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:46Z","lastTransitionTime":"2026-03-20T06:51:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:46 crc kubenswrapper[5136]: E0320 06:51:46.983254 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:46Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.987272 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.987336 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.987359 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.987385 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.987406 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:46Z","lastTransitionTime":"2026-03-20T06:51:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:47 crc kubenswrapper[5136]: E0320 06:51:47.003040 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:47Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:47 crc kubenswrapper[5136]: I0320 06:51:47.007202 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:47 crc kubenswrapper[5136]: I0320 06:51:47.007254 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:47 crc kubenswrapper[5136]: I0320 06:51:47.007272 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:47 crc kubenswrapper[5136]: I0320 06:51:47.007295 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:47 crc kubenswrapper[5136]: I0320 06:51:47.007313 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:47Z","lastTransitionTime":"2026-03-20T06:51:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:47 crc kubenswrapper[5136]: E0320 06:51:47.027155 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:47Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:47 crc kubenswrapper[5136]: I0320 06:51:47.032111 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:47 crc kubenswrapper[5136]: I0320 06:51:47.032139 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:47 crc kubenswrapper[5136]: I0320 06:51:47.032148 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:47 crc kubenswrapper[5136]: I0320 06:51:47.032161 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:47 crc kubenswrapper[5136]: I0320 06:51:47.032171 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:47Z","lastTransitionTime":"2026-03-20T06:51:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:47 crc kubenswrapper[5136]: E0320 06:51:47.050572 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:47Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:47 crc kubenswrapper[5136]: E0320 06:51:47.050753 5136 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 06:51:47 crc kubenswrapper[5136]: I0320 06:51:47.396189 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:47 crc kubenswrapper[5136]: E0320 06:51:47.396369 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.395752 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.395768 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:48 crc kubenswrapper[5136]: E0320 06:51:48.395982 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.396029 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:48 crc kubenswrapper[5136]: E0320 06:51:48.396120 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:48 crc kubenswrapper[5136]: E0320 06:51:48.396204 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.418527 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":
\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.433997 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.449738 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.465893 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc 
kubenswrapper[5136]: I0320 06:51:48.483682 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: E0320 06:51:48.495876 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.505252 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.519195 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.534443 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.547962 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.562089 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.577138 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.599374 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:51:41Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0320 06:51:41.822309 7278 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822355 7278 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822501 7278 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823026 7278 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823071 7278 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823129 7278 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823169 7278 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823370 7278 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.609935 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a6
9384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.622101 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.633491 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.645566 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.656969 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:49 crc kubenswrapper[5136]: I0320 06:51:49.396206 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:49 crc kubenswrapper[5136]: E0320 06:51:49.396394 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:50 crc kubenswrapper[5136]: I0320 06:51:50.396279 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:50 crc kubenswrapper[5136]: I0320 06:51:50.396313 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:50 crc kubenswrapper[5136]: E0320 06:51:50.396473 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:50 crc kubenswrapper[5136]: I0320 06:51:50.396615 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:50 crc kubenswrapper[5136]: E0320 06:51:50.396720 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:50 crc kubenswrapper[5136]: E0320 06:51:50.396995 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:51 crc kubenswrapper[5136]: I0320 06:51:51.396342 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:51 crc kubenswrapper[5136]: E0320 06:51:51.396492 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:52 crc kubenswrapper[5136]: I0320 06:51:52.396035 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:52 crc kubenswrapper[5136]: I0320 06:51:52.396035 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:52 crc kubenswrapper[5136]: E0320 06:51:52.396208 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:52 crc kubenswrapper[5136]: E0320 06:51:52.396349 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:52 crc kubenswrapper[5136]: I0320 06:51:52.396040 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:52 crc kubenswrapper[5136]: E0320 06:51:52.396582 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:53 crc kubenswrapper[5136]: I0320 06:51:53.395594 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:53 crc kubenswrapper[5136]: E0320 06:51:53.395773 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:53 crc kubenswrapper[5136]: I0320 06:51:53.418577 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 20 06:51:53 crc kubenswrapper[5136]: E0320 06:51:53.497460 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 20 06:51:53 crc kubenswrapper[5136]: I0320 06:51:53.922223 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:51:53 crc kubenswrapper[5136]: I0320 06:51:53.942759 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:53Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:53 crc kubenswrapper[5136]: I0320 06:51:53.958658 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:53Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:53 crc kubenswrapper[5136]: I0320 06:51:53.977516 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:53Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:53 crc kubenswrapper[5136]: I0320 06:51:53.998705 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:53Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.011259 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.023038 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.042715 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.055911 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.068503 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.081699 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.094890 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.113463 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:51:41Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0320 06:51:41.822309 7278 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822355 7278 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822501 7278 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823026 7278 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823071 7278 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823129 7278 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823169 7278 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823370 7278 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.130612 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.145995 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.158223 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.169712 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.179908 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.193014 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.396387 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.396396 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:54 crc kubenswrapper[5136]: E0320 06:51:54.396584 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:54 crc kubenswrapper[5136]: E0320 06:51:54.396752 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.396418 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:54 crc kubenswrapper[5136]: E0320 06:51:54.397101 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:55 crc kubenswrapper[5136]: I0320 06:51:55.395648 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:55 crc kubenswrapper[5136]: E0320 06:51:55.395780 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:56 crc kubenswrapper[5136]: I0320 06:51:56.366602 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:51:56 crc kubenswrapper[5136]: I0320 06:51:56.366710 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:56 crc kubenswrapper[5136]: I0320 06:51:56.366741 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
Mar 20 06:51:56 crc kubenswrapper[5136]: I0320 06:51:56.366763 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:56 crc kubenswrapper[5136]: I0320 06:51:56.366800 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.366893 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:00.366839668 +0000 UTC m=+212.626150869 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.366925 5136 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.366948 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.366969 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.366974 5136 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.367021 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:53:00.366995552 +0000 UTC m=+212.626306713 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.366982 5136 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.367043 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:53:00.367033403 +0000 UTC m=+212.626344564 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.367064 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 06:53:00.367053534 +0000 UTC m=+212.626364785 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.367074 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.367105 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.367126 5136 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.367200 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 06:53:00.367180708 +0000 UTC m=+212.626491899 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:51:56 crc kubenswrapper[5136]: I0320 06:51:56.396664 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.396776 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:56 crc kubenswrapper[5136]: I0320 06:51:56.396898 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:56 crc kubenswrapper[5136]: I0320 06:51:56.396935 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.397055 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.397197 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.274191 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.274282 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.274294 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.274309 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.274320 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:57Z","lastTransitionTime":"2026-03-20T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:57 crc kubenswrapper[5136]: E0320 06:51:57.303301 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:57Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.308711 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.308766 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.308785 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.308808 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.308851 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:57Z","lastTransitionTime":"2026-03-20T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:57 crc kubenswrapper[5136]: E0320 06:51:57.329496 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:57Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.334006 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.334059 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.334077 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.334098 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.334114 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:57Z","lastTransitionTime":"2026-03-20T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:57 crc kubenswrapper[5136]: E0320 06:51:57.351924 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:57Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.356584 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.356636 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.356654 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.356677 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.356694 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:57Z","lastTransitionTime":"2026-03-20T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:57 crc kubenswrapper[5136]: E0320 06:51:57.372630 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:57Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.377343 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.377400 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.377416 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.377439 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.377457 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:57Z","lastTransitionTime":"2026-03-20T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.396111 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:57 crc kubenswrapper[5136]: E0320 06:51:57.396336 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:57 crc kubenswrapper[5136]: E0320 06:51:57.397190 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:57Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:57 crc kubenswrapper[5136]: E0320 06:51:57.397407 5136 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.396231 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.396231 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.396475 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:58 crc kubenswrapper[5136]: E0320 06:51:58.396403 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:58 crc kubenswrapper[5136]: E0320 06:51:58.396536 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:58 crc kubenswrapper[5136]: E0320 06:51:58.396625 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.411359 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\"
:\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2
597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.428708 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.446494 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.472370 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:51:41Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0320 06:51:41.822309 7278 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822355 7278 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822501 7278 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823026 7278 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823071 7278 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823129 7278 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823169 7278 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823370 7278 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.488706 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: E0320 06:51:58.498157 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.505906 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ 
ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.524222 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.536124 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.547309 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.559282 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.573293 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.584199 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.597713 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.610899 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.622284 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.631220 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.646176 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.656031 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:59 crc kubenswrapper[5136]: I0320 06:51:59.396369 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:59 crc kubenswrapper[5136]: E0320 06:51:59.396552 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:00 crc kubenswrapper[5136]: I0320 06:52:00.108540 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:00 crc kubenswrapper[5136]: E0320 06:52:00.108713 5136 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:52:00 crc kubenswrapper[5136]: E0320 06:52:00.108948 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs podName:b5572feb-df7d-4f3a-9b83-3be3de943668 nodeName:}" failed. No retries permitted until 2026-03-20 06:52:32.10893283 +0000 UTC m=+184.368243981 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs") pod "network-metrics-daemon-jz6hg" (UID: "b5572feb-df7d-4f3a-9b83-3be3de943668") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:52:00 crc kubenswrapper[5136]: I0320 06:52:00.396616 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:00 crc kubenswrapper[5136]: I0320 06:52:00.396701 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:00 crc kubenswrapper[5136]: E0320 06:52:00.396806 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:00 crc kubenswrapper[5136]: I0320 06:52:00.397481 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:00 crc kubenswrapper[5136]: E0320 06:52:00.397616 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:00 crc kubenswrapper[5136]: E0320 06:52:00.398005 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:00 crc kubenswrapper[5136]: I0320 06:52:00.398174 5136 scope.go:117] "RemoveContainer" containerID="0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.074614 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/1.log" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.078510 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerStarted","Data":"06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00"} Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.078807 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.097154 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.116642 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.132763 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.148668 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.160748 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.177054 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.191510 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.202304 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.215197 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.231302 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.241858 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.250469 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.263964 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.274381 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.284362 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.297646 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.309149 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.333456 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:51:41Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0320 06:51:41.822309 7278 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822355 7278 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822501 7278 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823026 7278 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823071 7278 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823129 7278 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823169 7278 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823370 7278 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\"
:\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.396106 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:01 crc kubenswrapper[5136]: E0320 06:52:01.396235 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.083910 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/2.log" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.084689 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/1.log" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.087002 5136 generic.go:334] "Generic (PLEG): container finished" podID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerID="06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00" exitCode=1 Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.087036 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00"} Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.087069 5136 scope.go:117] "RemoveContainer" 
containerID="0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.088166 5136 scope.go:117] "RemoveContainer" containerID="06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00" Mar 20 06:52:02 crc kubenswrapper[5136]: E0320 06:52:02.088445 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.105359 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.117058 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.128595 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.143638 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.153894 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.164537 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.175364 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.186017 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.201772 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:51:41Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0320 06:51:41.822309 7278 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822355 7278 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822501 7278 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823026 7278 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823071 7278 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823129 7278 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823169 7278 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823370 7278 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:01Z\\\",\\\"message\\\":\\\"om k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.180939 7538 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.181240 7538 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.181403 7538 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.183029 7538 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0320 06:52:01.183086 7538 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0320 06:52:01.183130 7538 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0320 06:52:01.183139 7538 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0320 06:52:01.183161 7538 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 06:52:01.183174 7538 factory.go:656] Stopping watch factory\\\\nI0320 06:52:01.183185 7538 handler.go:208] Removed *v1.Node event handler 7\\\\nI0320 06:52:01.183196 7538 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0320 06:52:01.183197 7538 ovnkube.go:599] Stopped ovnkube\\\\nI03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\
\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.214341 5136 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml 
--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.226291 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.235688 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.246206 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.257521 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.269622 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.280799 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.288720 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.304509 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.396321 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.396345 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.396369 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:02 crc kubenswrapper[5136]: E0320 06:52:02.396452 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:02 crc kubenswrapper[5136]: E0320 06:52:02.396601 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:02 crc kubenswrapper[5136]: E0320 06:52:02.396639 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.093986 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/2.log" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.099992 5136 scope.go:117] "RemoveContainer" containerID="06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00" Mar 20 06:52:03 crc kubenswrapper[5136]: E0320 06:52:03.100239 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.121119 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.134157 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.149327 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.165106 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.181134 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.199602 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.218406 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.230002 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.242697 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.257237 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.270555 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.281736 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.296841 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.311730 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.328237 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.345561 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.358270 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.377308 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:01Z\\\",\\\"message\\\":\\\"om k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.180939 7538 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.181240 7538 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.181403 7538 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.183029 7538 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0320 06:52:01.183086 7538 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0320 06:52:01.183130 7538 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0320 06:52:01.183139 7538 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0320 06:52:01.183161 7538 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 06:52:01.183174 7538 factory.go:656] Stopping watch factory\\\\nI0320 06:52:01.183185 7538 handler.go:208] Removed *v1.Node event handler 7\\\\nI0320 06:52:01.183196 7538 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0320 06:52:01.183197 7538 ovnkube.go:599] Stopped ovnkube\\\\nI03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.396526 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:03 crc kubenswrapper[5136]: E0320 06:52:03.396647 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:03 crc kubenswrapper[5136]: E0320 06:52:03.499777 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 20 06:52:04 crc kubenswrapper[5136]: I0320 06:52:04.395959 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:04 crc kubenswrapper[5136]: I0320 06:52:04.396076 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:04 crc kubenswrapper[5136]: E0320 06:52:04.396110 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:04 crc kubenswrapper[5136]: I0320 06:52:04.396183 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:04 crc kubenswrapper[5136]: E0320 06:52:04.396302 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:04 crc kubenswrapper[5136]: E0320 06:52:04.396437 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:05 crc kubenswrapper[5136]: I0320 06:52:05.396162 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:05 crc kubenswrapper[5136]: E0320 06:52:05.396329 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:05 crc kubenswrapper[5136]: I0320 06:52:05.416395 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Mar 20 06:52:06 crc kubenswrapper[5136]: I0320 06:52:06.396299 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:06 crc kubenswrapper[5136]: I0320 06:52:06.396399 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:06 crc kubenswrapper[5136]: E0320 06:52:06.396484 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:06 crc kubenswrapper[5136]: I0320 06:52:06.396508 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:06 crc kubenswrapper[5136]: E0320 06:52:06.396686 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:06 crc kubenswrapper[5136]: E0320 06:52:06.396882 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.396437 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:07 crc kubenswrapper[5136]: E0320 06:52:07.396635 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.478262 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.478331 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.478349 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.478373 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.478390 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:07Z","lastTransitionTime":"2026-03-20T06:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:07 crc kubenswrapper[5136]: E0320 06:52:07.498906 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.505775 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.505844 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.505857 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.505875 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.505888 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:07Z","lastTransitionTime":"2026-03-20T06:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:07 crc kubenswrapper[5136]: E0320 06:52:07.522425 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.526372 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.526415 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.526432 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.526456 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.526471 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:07Z","lastTransitionTime":"2026-03-20T06:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:07 crc kubenswrapper[5136]: E0320 06:52:07.544554 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.548428 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.548486 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.548509 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.548538 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.548558 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:07Z","lastTransitionTime":"2026-03-20T06:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:07 crc kubenswrapper[5136]: E0320 06:52:07.566427 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.570700 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.570735 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.570747 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.570764 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.570776 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:07Z","lastTransitionTime":"2026-03-20T06:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:07 crc kubenswrapper[5136]: E0320 06:52:07.588020 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:07 crc kubenswrapper[5136]: E0320 06:52:07.588125 5136 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.396383 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.396473 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.396473 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:08 crc kubenswrapper[5136]: E0320 06:52:08.396591 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:08 crc kubenswrapper[5136]: E0320 06:52:08.396888 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:08 crc kubenswrapper[5136]: E0320 06:52:08.397044 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.412146 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.427308 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.457590 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74e7c2eb-78d2-4e11-b100-cd482f320447\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a1d8bf756502952a8d3a972ac37c6e4f5dbc1c3f9041776d82f44db634ad9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f691f77718c46f2602104ecb33424a7c7443c5b34f9f2322cf5a1f608b1131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d76808c1f28f48134ff4f0c3e1a0708c6a2761101662ea1a7b339f52871cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://485ffe98cdc3606f5ed78e234077affc1018885123a92cb6def4150118c62f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af19f920b4713d01034bfa44614deeacb01000a2d289fd3f9fb8ac327285f0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.474079 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.484896 5136 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"
hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: E0320 06:52:08.500393 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.501462 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1
a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.518130 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc 
kubenswrapper[5136]: I0320 06:52:08.532294 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.546615 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.558500 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.579069 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:01Z\\\",\\\"message\\\":\\\"om k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.180939 7538 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.181240 7538 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.181403 7538 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.183029 7538 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0320 06:52:01.183086 7538 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0320 06:52:01.183130 7538 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0320 06:52:01.183139 7538 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0320 06:52:01.183161 7538 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 06:52:01.183174 7538 factory.go:656] Stopping watch factory\\\\nI0320 06:52:01.183185 7538 handler.go:208] Removed *v1.Node event handler 7\\\\nI0320 06:52:01.183196 7538 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0320 06:52:01.183197 7538 ovnkube.go:599] Stopped ovnkube\\\\nI03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.594068 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.608035 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.619080 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.629209 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.641481 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.658464 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.671185 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig 
--namespace=openshift-kube-controller-manager -v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.687894 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:09 crc kubenswrapper[5136]: I0320 06:52:09.396663 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:09 crc kubenswrapper[5136]: E0320 06:52:09.396847 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:10 crc kubenswrapper[5136]: I0320 06:52:10.396375 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:10 crc kubenswrapper[5136]: I0320 06:52:10.396418 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:10 crc kubenswrapper[5136]: E0320 06:52:10.396585 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:10 crc kubenswrapper[5136]: I0320 06:52:10.396677 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:10 crc kubenswrapper[5136]: E0320 06:52:10.396848 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:10 crc kubenswrapper[5136]: E0320 06:52:10.397034 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:11 crc kubenswrapper[5136]: I0320 06:52:11.396122 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:11 crc kubenswrapper[5136]: E0320 06:52:11.396267 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:12 crc kubenswrapper[5136]: I0320 06:52:12.396328 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:12 crc kubenswrapper[5136]: I0320 06:52:12.396358 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:12 crc kubenswrapper[5136]: E0320 06:52:12.397156 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:12 crc kubenswrapper[5136]: I0320 06:52:12.396422 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:12 crc kubenswrapper[5136]: E0320 06:52:12.397280 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:12 crc kubenswrapper[5136]: E0320 06:52:12.397518 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:13 crc kubenswrapper[5136]: I0320 06:52:13.396529 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:13 crc kubenswrapper[5136]: E0320 06:52:13.396795 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:13 crc kubenswrapper[5136]: E0320 06:52:13.501350 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 06:52:14 crc kubenswrapper[5136]: I0320 06:52:14.396469 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:14 crc kubenswrapper[5136]: I0320 06:52:14.396552 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:14 crc kubenswrapper[5136]: E0320 06:52:14.396679 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:14 crc kubenswrapper[5136]: I0320 06:52:14.396721 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:14 crc kubenswrapper[5136]: E0320 06:52:14.396930 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:14 crc kubenswrapper[5136]: E0320 06:52:14.397038 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.164880 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tjpps_263c5427-a835-40c6-93cb-4bb66a83ea5b/kube-multus/0.log" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.164935 5136 generic.go:334] "Generic (PLEG): container finished" podID="263c5427-a835-40c6-93cb-4bb66a83ea5b" containerID="84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68" exitCode=1 Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.164967 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tjpps" event={"ID":"263c5427-a835-40c6-93cb-4bb66a83ea5b","Type":"ContainerDied","Data":"84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68"} Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.165336 5136 scope.go:117] "RemoveContainer" containerID="84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.181589 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.207475 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:01Z\\\",\\\"message\\\":\\\"om k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.180939 7538 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.181240 7538 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.181403 7538 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.183029 7538 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0320 06:52:01.183086 7538 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0320 06:52:01.183130 7538 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0320 06:52:01.183139 7538 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0320 06:52:01.183161 7538 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 06:52:01.183174 7538 factory.go:656] Stopping watch factory\\\\nI0320 06:52:01.183185 7538 handler.go:208] Removed *v1.Node event handler 7\\\\nI0320 06:52:01.183196 7538 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0320 06:52:01.183197 7538 ovnkube.go:599] Stopped ovnkube\\\\nI03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.221349 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.235691 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.248113 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.263139 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.277738 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.296954 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:14Z\\\",\\\"message\\\":\\\"2026-03-20T06:51:29+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138\\\\n2026-03-20T06:51:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138 to /host/opt/cni/bin/\\\\n2026-03-20T06:51:29Z [verbose] multus-daemon started\\\\n2026-03-20T06:51:29Z [verbose] Readiness Indicator file check\\\\n2026-03-20T06:52:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.309505 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.324742 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.335231 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.347305 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.378616 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74e7c2eb-78d2-4e11-b100-cd482f320447\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a1d8bf756502952a8d3a972ac37c6e4f5dbc1c3f9041776d82f44db634ad9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f691f77718c46f2602104ecb33424a7c7443c5b34f9f2322cf5a1f608b1131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d76808c1f28f48134ff4f0c3e1a0708c6a2761101662ea1a7b339f52871cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://485ffe98cdc3606f5ed78e234077affc1018885123a92cb6def4150118c62f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af19f920b4713d01034bfa44614deeacb01000a2d289fd3f9fb8ac327285f0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.396603 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:15 crc kubenswrapper[5136]: E0320 06:52:15.396711 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.397755 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b761
58d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.411605 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.428175 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.442269 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.457763 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.471985 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.172435 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tjpps_263c5427-a835-40c6-93cb-4bb66a83ea5b/kube-multus/0.log" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.172515 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tjpps" event={"ID":"263c5427-a835-40c6-93cb-4bb66a83ea5b","Type":"ContainerStarted","Data":"1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644"} Mar 20 06:52:16 crc 
kubenswrapper[5136]: I0320 06:52:16.192509 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.216136 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808
b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b
7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.233550 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc 
kubenswrapper[5136]: I0320 06:52:16.252473 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.269328 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.288488 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.315922 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:01Z\\\",\\\"message\\\":\\\"om k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.180939 7538 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.181240 7538 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.181403 7538 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.183029 7538 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0320 06:52:01.183086 7538 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0320 06:52:01.183130 7538 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0320 06:52:01.183139 7538 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0320 06:52:01.183161 7538 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 06:52:01.183174 7538 factory.go:656] Stopping watch factory\\\\nI0320 06:52:01.183185 7538 handler.go:208] Removed *v1.Node event handler 7\\\\nI0320 06:52:01.183196 7538 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0320 06:52:01.183197 7538 ovnkube.go:599] Stopped ovnkube\\\\nI03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.333478 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.350896 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.366549 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.382418 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.396442 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.396511 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.396550 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:16 crc kubenswrapper[5136]: E0320 06:52:16.396643 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:16 crc kubenswrapper[5136]: E0320 06:52:16.396855 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:16 crc kubenswrapper[5136]: E0320 06:52:16.396883 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.399982 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea
83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.418129 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20
T06:52:14Z\\\",\\\"message\\\":\\\"2026-03-20T06:51:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138\\\\n2026-03-20T06:51:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138 to /host/opt/cni/bin/\\\\n2026-03-20T06:51:29Z [verbose] multus-daemon started\\\\n2026-03-20T06:51:29Z [verbose] Readiness Indicator file check\\\\n2026-03-20T06:52:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\
\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.437878 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.457174 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.472145 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.487432 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.517373 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74e7c2eb-78d2-4e11-b100-cd482f320447\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a1d8bf756502952a8d3a972ac37c6e4f5dbc1c3f9041776d82f44db634ad9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f691f77718c46f2602104ecb33424a7c7443c5b34f9f2322cf5a1f608b1131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d76808c1f28f48134ff4f0c3e1a0708c6a2761101662ea1a7b339f52871cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://485ffe98cdc3606f5ed78e234077affc1018885123a92cb6def4150118c62f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af19f920b4713d01034bfa44614deeacb01000a2d289fd3f9fb8ac327285f0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.535033 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.396269 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:17 crc kubenswrapper[5136]: E0320 06:52:17.396682 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.396941 5136 scope.go:117] "RemoveContainer" containerID="06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00" Mar 20 06:52:17 crc kubenswrapper[5136]: E0320 06:52:17.397100 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.771917 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.772014 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.772032 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.772103 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.772145 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:17Z","lastTransitionTime":"2026-03-20T06:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:17 crc kubenswrapper[5136]: E0320 06:52:17.790796 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:17Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.795618 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.795989 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.796179 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.796344 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.796542 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:17Z","lastTransitionTime":"2026-03-20T06:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:17 crc kubenswrapper[5136]: E0320 06:52:17.814435 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:17Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.819673 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.819915 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.820078 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.820231 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.820373 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:17Z","lastTransitionTime":"2026-03-20T06:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:17 crc kubenswrapper[5136]: E0320 06:52:17.840229 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:17Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.844009 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.844047 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.844058 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.844078 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.844092 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:17Z","lastTransitionTime":"2026-03-20T06:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:17 crc kubenswrapper[5136]: E0320 06:52:17.858566 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:17Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.862207 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.862227 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.862237 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.862250 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.862261 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:17Z","lastTransitionTime":"2026-03-20T06:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:17 crc kubenswrapper[5136]: E0320 06:52:17.875719 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:17Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:17 crc kubenswrapper[5136]: E0320 06:52:17.875894 5136 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.396575 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.396673 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:18 crc kubenswrapper[5136]: E0320 06:52:18.396806 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.396903 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:18 crc kubenswrapper[5136]: E0320 06:52:18.397071 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:18 crc kubenswrapper[5136]: E0320 06:52:18.397250 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.414103 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.434854 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.452433 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.472657 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:14Z\\\",\\\"message\\\":\\\"2026-03-20T06:51:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138\\\\n2026-03-20T06:51:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138 to /host/opt/cni/bin/\\\\n2026-03-20T06:51:29Z [verbose] multus-daemon started\\\\n2026-03-20T06:51:29Z [verbose] Readiness Indicator file check\\\\n2026-03-20T06:52:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.488966 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: E0320 06:52:18.502210 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.509889 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\
\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e
27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc3
5825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.527190 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.543002 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.564216 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74e7c2eb-78d2-4e11-b100-cd482f320447\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a1d8bf756502952a8d3a972ac37c6e4f5dbc1c3f9041776d82f44db634ad9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f691f77718c46f2602104ecb33424a7c7443c5b34f9f2322cf5a1f608b1131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d76808c1f28f48134ff4f0c3e1a0708c6a2761101662ea1a7b339f52871cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://485ffe98cdc3606f5ed78e234077affc1018885123a92cb6def4150118c62f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af19f920b4713d01034bfa44614deeacb01000a2d289fd3f9fb8ac327285f0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.580971 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.595539 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.620498 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.635945 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.650643 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.664307 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.675582 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.696959 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:01Z\\\",\\\"message\\\":\\\"om k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.180939 7538 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.181240 7538 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.181403 7538 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.183029 7538 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0320 06:52:01.183086 7538 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0320 06:52:01.183130 7538 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0320 06:52:01.183139 7538 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0320 06:52:01.183161 7538 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 06:52:01.183174 7538 factory.go:656] Stopping watch factory\\\\nI0320 06:52:01.183185 7538 handler.go:208] Removed *v1.Node event handler 7\\\\nI0320 06:52:01.183196 7538 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0320 06:52:01.183197 7538 ovnkube.go:599] Stopped ovnkube\\\\nI03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.712170 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.724704 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:19 crc kubenswrapper[5136]: I0320 06:52:19.396130 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:19 crc kubenswrapper[5136]: E0320 06:52:19.396600 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:20 crc kubenswrapper[5136]: I0320 06:52:20.396609 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:20 crc kubenswrapper[5136]: I0320 06:52:20.396698 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:20 crc kubenswrapper[5136]: I0320 06:52:20.396609 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:20 crc kubenswrapper[5136]: E0320 06:52:20.396878 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:20 crc kubenswrapper[5136]: E0320 06:52:20.396995 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:20 crc kubenswrapper[5136]: E0320 06:52:20.397156 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:21 crc kubenswrapper[5136]: I0320 06:52:21.396448 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:21 crc kubenswrapper[5136]: E0320 06:52:21.396650 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:22 crc kubenswrapper[5136]: I0320 06:52:22.396614 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:22 crc kubenswrapper[5136]: I0320 06:52:22.396657 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:22 crc kubenswrapper[5136]: E0320 06:52:22.396717 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:22 crc kubenswrapper[5136]: I0320 06:52:22.396782 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:22 crc kubenswrapper[5136]: E0320 06:52:22.396993 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:22 crc kubenswrapper[5136]: E0320 06:52:22.397001 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:23 crc kubenswrapper[5136]: I0320 06:52:23.396130 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:23 crc kubenswrapper[5136]: E0320 06:52:23.396333 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:23 crc kubenswrapper[5136]: E0320 06:52:23.503794 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 06:52:24 crc kubenswrapper[5136]: I0320 06:52:24.396618 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:24 crc kubenswrapper[5136]: I0320 06:52:24.396749 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:24 crc kubenswrapper[5136]: E0320 06:52:24.396988 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:24 crc kubenswrapper[5136]: E0320 06:52:24.396765 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:24 crc kubenswrapper[5136]: I0320 06:52:24.397047 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:24 crc kubenswrapper[5136]: E0320 06:52:24.397278 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:25 crc kubenswrapper[5136]: I0320 06:52:25.396054 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:25 crc kubenswrapper[5136]: E0320 06:52:25.396230 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:26 crc kubenswrapper[5136]: I0320 06:52:26.395922 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:26 crc kubenswrapper[5136]: I0320 06:52:26.396012 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:26 crc kubenswrapper[5136]: I0320 06:52:26.395922 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:26 crc kubenswrapper[5136]: E0320 06:52:26.396099 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:26 crc kubenswrapper[5136]: E0320 06:52:26.396316 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:26 crc kubenswrapper[5136]: E0320 06:52:26.396369 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:27 crc kubenswrapper[5136]: I0320 06:52:27.396146 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:27 crc kubenswrapper[5136]: E0320 06:52:27.396355 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.107970 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.108023 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.108034 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.108050 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.108062 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:28Z","lastTransitionTime":"2026-03-20T06:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:28 crc kubenswrapper[5136]: E0320 06:52:28.128173 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.133004 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.133064 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.133082 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.133108 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.133125 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:28Z","lastTransitionTime":"2026-03-20T06:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:28 crc kubenswrapper[5136]: E0320 06:52:28.151696 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.156941 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.156984 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.157003 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.157026 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.157042 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:28Z","lastTransitionTime":"2026-03-20T06:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:28 crc kubenswrapper[5136]: E0320 06:52:28.179025 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.184128 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.184278 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.184301 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.184383 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.184409 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:28Z","lastTransitionTime":"2026-03-20T06:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.213900 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.213973 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.213996 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.214024 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.214049 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:28Z","lastTransitionTime":"2026-03-20T06:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:28 crc kubenswrapper[5136]: E0320 06:52:28.235273 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: E0320 06:52:28.235446 5136 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.395705 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.395757 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.395757 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:28 crc kubenswrapper[5136]: E0320 06:52:28.395951 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:28 crc kubenswrapper[5136]: E0320 06:52:28.396084 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:28 crc kubenswrapper[5136]: E0320 06:52:28.396199 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.419564 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.440464 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.455434 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.475292 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.493562 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: E0320 06:52:28.504511 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.509512 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:14Z\\\",\\\"message\\\":\\\"2026-03-20T06:51:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138\\\\n2026-03-20T06:51:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138 to /host/opt/cni/bin/\\\\n2026-03-20T06:51:29Z [verbose] multus-daemon started\\\\n2026-03-20T06:51:29Z [verbose] Readiness Indicator file check\\\\n2026-03-20T06:52:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.545435 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74e7c2eb-78d2-4e11-b100-cd482f320447\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a1d8bf756502952a8d3a972ac37c6e4f5dbc1c3f9041776d82f44db634ad9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f691f77718c46f2602104ecb33424a7c7443c5b34f9f2322cf5a1f608b1131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d76808c1f28f48134ff4f0c3e1a0708c6a2761101662ea1a7b339f52871cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://485ffe98cdc3606f5ed78e234077affc1018885123a92cb6def4150118c62f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af19f920b4713d01034bfa44614deeacb01000a2d289fd3f9fb8ac327285f0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.565895 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.577774 5136 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.590124 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.605133 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\
\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.618158 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.628861 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.643825 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.655602 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.667944 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.683055 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.697611 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.720795 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:01Z\\\",\\\"message\\\":\\\"om k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.180939 7538 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.181240 7538 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.181403 7538 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.183029 7538 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0320 06:52:01.183086 7538 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0320 06:52:01.183130 7538 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0320 06:52:01.183139 7538 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0320 06:52:01.183161 7538 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 06:52:01.183174 7538 factory.go:656] Stopping watch factory\\\\nI0320 06:52:01.183185 7538 handler.go:208] Removed *v1.Node event handler 7\\\\nI0320 06:52:01.183196 7538 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0320 06:52:01.183197 7538 ovnkube.go:599] Stopped ovnkube\\\\nI03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:29 crc kubenswrapper[5136]: I0320 06:52:29.395709 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:29 crc kubenswrapper[5136]: E0320 06:52:29.395957 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:30 crc kubenswrapper[5136]: I0320 06:52:30.395770 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:30 crc kubenswrapper[5136]: I0320 06:52:30.395889 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:30 crc kubenswrapper[5136]: E0320 06:52:30.396088 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:30 crc kubenswrapper[5136]: I0320 06:52:30.396160 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:30 crc kubenswrapper[5136]: E0320 06:52:30.396318 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:30 crc kubenswrapper[5136]: E0320 06:52:30.396382 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:31 crc kubenswrapper[5136]: I0320 06:52:31.395848 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:31 crc kubenswrapper[5136]: E0320 06:52:31.396007 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:31 crc kubenswrapper[5136]: I0320 06:52:31.397099 5136 scope.go:117] "RemoveContainer" containerID="06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.161274 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:32 crc kubenswrapper[5136]: E0320 06:52:32.161500 5136 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:52:32 crc kubenswrapper[5136]: E0320 06:52:32.161643 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs podName:b5572feb-df7d-4f3a-9b83-3be3de943668 nodeName:}" failed. No retries permitted until 2026-03-20 06:53:36.161626658 +0000 UTC m=+248.420937809 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs") pod "network-metrics-daemon-jz6hg" (UID: "b5572feb-df7d-4f3a-9b83-3be3de943668") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.245397 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/2.log" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.247852 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerStarted","Data":"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c"} Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.248202 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.260692 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.274961 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.284847 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.298101 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.308917 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.320421 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:14Z\\\",\\\"message\\\":\\\"2026-03-20T06:51:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138\\\\n2026-03-20T06:51:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138 to /host/opt/cni/bin/\\\\n2026-03-20T06:51:29Z [verbose] multus-daemon started\\\\n2026-03-20T06:51:29Z [verbose] 
Readiness Indicator file check\\\\n2026-03-20T06:52:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.339800 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74e7c2eb-78d2-4e11-b100-cd482f320447\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a1d8bf756502952a8d3a972ac37c6e4f5dbc1c3f9041776d82f44db634ad9a2\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f691f77718c46f2602104ecb33424a7c7443c5b34f9f2322cf5a1f608b1131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d76808c1f28f48134ff4f0c3e1a0708c6a2761101662ea1a7b339f52871cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a
0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://485ffe98cdc3606f5ed78e234077affc1018885123a92cb6def4150118c62f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af19f920b4713d01034bfa44614deeacb01000a2d289fd3f9fb8ac327285f0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resour
ce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}}},{\\\"containerID\\\":
\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.351913 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.362174 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.372317 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.382350 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.396215 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.396281 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:32 crc kubenswrapper[5136]: E0320 06:52:32.396316 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:32 crc kubenswrapper[5136]: E0320 06:52:32.396427 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.396495 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:32 crc kubenswrapper[5136]: E0320 06:52:32.396600 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.397204 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.406570 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.418587 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.426524 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.438323 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.449685 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.459840 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.479555 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:01Z\\\",\\\"message\\\":\\\"om k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.180939 7538 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.181240 7538 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.181403 7538 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.183029 7538 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0320 06:52:01.183086 7538 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0320 06:52:01.183130 7538 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0320 06:52:01.183139 7538 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0320 06:52:01.183161 7538 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 06:52:01.183174 7538 factory.go:656] Stopping watch factory\\\\nI0320 06:52:01.183185 7538 handler.go:208] Removed *v1.Node event handler 7\\\\nI0320 06:52:01.183196 7538 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0320 06:52:01.183197 7538 ovnkube.go:599] Stopped 
ovnkube\\\\nI03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\
",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.253507 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/3.log" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.254290 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/2.log" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.257141 5136 generic.go:334] "Generic (PLEG): container finished" podID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerID="07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c" exitCode=1 Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.257194 5136 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c"} Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.257234 5136 scope.go:117] "RemoveContainer" containerID="06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.257885 5136 scope.go:117] "RemoveContainer" containerID="07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c" Mar 20 06:52:33 crc kubenswrapper[5136]: E0320 06:52:33.258161 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.280690 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.293806 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.303306 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.315286 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.325883 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.338665 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:14Z\\\",\\\"message\\\":\\\"2026-03-20T06:51:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138\\\\n2026-03-20T06:51:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138 to /host/opt/cni/bin/\\\\n2026-03-20T06:51:29Z [verbose] multus-daemon started\\\\n2026-03-20T06:51:29Z [verbose] 
Readiness Indicator file check\\\\n2026-03-20T06:52:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.366789 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74e7c2eb-78d2-4e11-b100-cd482f320447\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a1d8bf756502952a8d3a972ac37c6e4f5dbc1c3f9041776d82f44db634ad9a2\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f691f77718c46f2602104ecb33424a7c7443c5b34f9f2322cf5a1f608b1131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d76808c1f28f48134ff4f0c3e1a0708c6a2761101662ea1a7b339f52871cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a
0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://485ffe98cdc3606f5ed78e234077affc1018885123a92cb6def4150118c62f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af19f920b4713d01034bfa44614deeacb01000a2d289fd3f9fb8ac327285f0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resour
ce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}}},{\\\"containerID\\\":
\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.380055 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.389679 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.396126 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:33 crc kubenswrapper[5136]: E0320 06:52:33.396320 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.401940 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.419144 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.433445 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.445597 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.460929 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.472088 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.483961 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.498010 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: E0320 06:52:33.506535 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.512037 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.533620 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:01Z\\\",\\\"message\\\":\\\"om k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.180939 7538 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.181240 7538 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.181403 7538 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.183029 7538 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0320 06:52:01.183086 7538 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0320 06:52:01.183130 7538 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0320 06:52:01.183139 7538 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0320 06:52:01.183161 7538 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 06:52:01.183174 7538 factory.go:656] Stopping watch factory\\\\nI0320 06:52:01.183185 7538 handler.go:208] Removed *v1.Node event handler 7\\\\nI0320 06:52:01.183196 7538 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0320 06:52:01.183197 7538 ovnkube.go:599] Stopped ovnkube\\\\nI03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:32Z\\\",\\\"message\\\":\\\"penshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0320 06:52:32.267553 7856 transact.go:42] Configuring OVN: 
[{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0320 06:52:32.266351 7856 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni
-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.263558 5136 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/3.log" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.269796 5136 scope.go:117] "RemoveContainer" containerID="07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c" Mar 20 06:52:34 crc kubenswrapper[5136]: E0320 06:52:34.270128 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.296767 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:32Z\\\",\\\"message\\\":\\\"penshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false 
hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0320 06:52:32.267553 7856 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0320 06:52:32.266351 7856 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.322794 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.354968 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.374246 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.386413 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.396255 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.396293 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.396301 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:34 crc kubenswrapper[5136]: E0320 06:52:34.396375 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:34 crc kubenswrapper[5136]: E0320 06:52:34.396470 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:34 crc kubenswrapper[5136]: E0320 06:52:34.396549 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.397481 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea
83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.410376 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20
T06:52:14Z\\\",\\\"message\\\":\\\"2026-03-20T06:51:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138\\\\n2026-03-20T06:51:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138 to /host/opt/cni/bin/\\\\n2026-03-20T06:51:29Z [verbose] multus-daemon started\\\\n2026-03-20T06:51:29Z [verbose] Readiness Indicator file check\\\\n2026-03-20T06:52:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\
\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.422939 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.441325 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.455239 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.466726 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.485531 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74e7c2eb-78d2-4e11-b100-cd482f320447\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a1d8bf756502952a8d3a972ac37c6e4f5dbc1c3f9041776d82f44db634ad9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f691f77718c46f2602104ecb33424a7c7443c5b34f9f2322cf5a1f608b1131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d76808c1f28f48134ff4f0c3e1a0708c6a2761101662ea1a7b339f52871cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://485ffe98cdc3606f5ed78e234077affc1018885123a92cb6def4150118c62f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af19f920b4713d01034bfa44614deeacb01000a2d289fd3f9fb8ac327285f0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.502269 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.520344 5136 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.541637 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuberne
tes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\
\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.556147 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc 
kubenswrapper[5136]: I0320 06:52:34.572733 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.590192 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.602323 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:35 crc kubenswrapper[5136]: I0320 06:52:35.396516 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:35 crc kubenswrapper[5136]: E0320 06:52:35.396655 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:36 crc kubenswrapper[5136]: I0320 06:52:36.396062 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:36 crc kubenswrapper[5136]: I0320 06:52:36.396153 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:36 crc kubenswrapper[5136]: I0320 06:52:36.396153 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:36 crc kubenswrapper[5136]: E0320 06:52:36.396296 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:36 crc kubenswrapper[5136]: E0320 06:52:36.396383 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:36 crc kubenswrapper[5136]: E0320 06:52:36.396587 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:37 crc kubenswrapper[5136]: I0320 06:52:37.396287 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:37 crc kubenswrapper[5136]: E0320 06:52:37.396491 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.278606 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.278669 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.278693 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.278720 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.278741 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:38Z","lastTransitionTime":"2026-03-20T06:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:38 crc kubenswrapper[5136]: E0320 06:52:38.298354 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.303513 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.303568 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.303586 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.303610 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.303627 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:38Z","lastTransitionTime":"2026-03-20T06:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:38 crc kubenswrapper[5136]: E0320 06:52:38.322565 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.326760 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.326849 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.326868 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.326890 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.326907 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:38Z","lastTransitionTime":"2026-03-20T06:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:38 crc kubenswrapper[5136]: E0320 06:52:38.343125 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.347493 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.347708 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.347916 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.348134 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.348343 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:38Z","lastTransitionTime":"2026-03-20T06:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:38 crc kubenswrapper[5136]: E0320 06:52:38.366793 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.370976 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.371018 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.371030 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.371049 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.371065 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:38Z","lastTransitionTime":"2026-03-20T06:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:38 crc kubenswrapper[5136]: E0320 06:52:38.389566 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: E0320 06:52:38.389784 5136 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.395711 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:38 crc kubenswrapper[5136]: E0320 06:52:38.395900 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.396050 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:38 crc kubenswrapper[5136]: E0320 06:52:38.396212 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.396395 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:38 crc kubenswrapper[5136]: E0320 06:52:38.397212 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.416209 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.437979 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.454805 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.472299 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.490173 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: E0320 06:52:38.507246 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.512577 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:14Z\\\",\\\"message\\\":\\\"2026-03-20T06:51:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138\\\\n2026-03-20T06:51:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138 to /host/opt/cni/bin/\\\\n2026-03-20T06:51:29Z [verbose] multus-daemon started\\\\n2026-03-20T06:51:29Z [verbose] Readiness Indicator file check\\\\n2026-03-20T06:52:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.548582 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74e7c2eb-78d2-4e11-b100-cd482f320447\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a1d8bf756502952a8d3a972ac37c6e4f5dbc1c3f9041776d82f44db634ad9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f691f77718c46f2602104ecb33424a7c7443c5b34f9f2322cf5a1f608b1131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d76808c1f28f48134ff4f0c3e1a0708c6a2761101662ea1a7b339f52871cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://485ffe98cdc3606f5ed78e234077affc1018885123a92cb6def4150118c62f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af19f920b4713d01034bfa44614deeacb01000a2d289fd3f9fb8ac327285f0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.568431 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.584263 5136 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.599142 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.613774 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\
\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.634534 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.647991 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.669482 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.685702 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.696515 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.708601 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.718727 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.747557 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:32Z\\\",\\\"message\\\":\\\"penshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0320 06:52:32.267553 7856 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0320 06:52:32.266351 7856 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:39 crc kubenswrapper[5136]: I0320 06:52:39.396284 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:39 crc kubenswrapper[5136]: E0320 06:52:39.396747 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:40 crc kubenswrapper[5136]: I0320 06:52:40.395978 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:40 crc kubenswrapper[5136]: I0320 06:52:40.396036 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:40 crc kubenswrapper[5136]: E0320 06:52:40.396111 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:40 crc kubenswrapper[5136]: E0320 06:52:40.396212 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:40 crc kubenswrapper[5136]: I0320 06:52:40.395924 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:40 crc kubenswrapper[5136]: E0320 06:52:40.396456 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:41 crc kubenswrapper[5136]: I0320 06:52:41.395899 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:41 crc kubenswrapper[5136]: E0320 06:52:41.396216 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:42 crc kubenswrapper[5136]: I0320 06:52:42.395714 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:42 crc kubenswrapper[5136]: E0320 06:52:42.395934 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:42 crc kubenswrapper[5136]: I0320 06:52:42.396300 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:42 crc kubenswrapper[5136]: E0320 06:52:42.396424 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:42 crc kubenswrapper[5136]: I0320 06:52:42.397527 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:42 crc kubenswrapper[5136]: E0320 06:52:42.397670 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:43 crc kubenswrapper[5136]: I0320 06:52:43.395773 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:43 crc kubenswrapper[5136]: E0320 06:52:43.396006 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:43 crc kubenswrapper[5136]: E0320 06:52:43.509006 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 06:52:44 crc kubenswrapper[5136]: I0320 06:52:44.396179 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:44 crc kubenswrapper[5136]: I0320 06:52:44.396260 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:44 crc kubenswrapper[5136]: I0320 06:52:44.396192 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:44 crc kubenswrapper[5136]: E0320 06:52:44.396375 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:44 crc kubenswrapper[5136]: E0320 06:52:44.396479 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:44 crc kubenswrapper[5136]: E0320 06:52:44.396528 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:45 crc kubenswrapper[5136]: I0320 06:52:45.395994 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:45 crc kubenswrapper[5136]: E0320 06:52:45.396175 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:45 crc kubenswrapper[5136]: I0320 06:52:45.397151 5136 scope.go:117] "RemoveContainer" containerID="07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c" Mar 20 06:52:45 crc kubenswrapper[5136]: E0320 06:52:45.397394 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" Mar 20 06:52:46 crc kubenswrapper[5136]: I0320 06:52:46.396280 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:46 crc kubenswrapper[5136]: I0320 06:52:46.396321 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:46 crc kubenswrapper[5136]: I0320 06:52:46.396399 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:46 crc kubenswrapper[5136]: E0320 06:52:46.396521 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:46 crc kubenswrapper[5136]: E0320 06:52:46.396669 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:46 crc kubenswrapper[5136]: E0320 06:52:46.396692 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:47 crc kubenswrapper[5136]: I0320 06:52:47.395705 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:47 crc kubenswrapper[5136]: E0320 06:52:47.395959 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.395724 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.395920 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:48 crc kubenswrapper[5136]: E0320 06:52:48.396012 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.396108 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:48 crc kubenswrapper[5136]: E0320 06:52:48.396411 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:48 crc kubenswrapper[5136]: E0320 06:52:48.396566 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.411150 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.425485 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.434299 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.447732 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.459619 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.470975 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.483721 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.495615 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: E0320 06:52:48.509504 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.512927 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.512969 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.512985 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.513007 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.513024 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:48Z","lastTransitionTime":"2026-03-20T06:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.516390 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:32Z\\\",\\\"message\\\":\\\"penshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0320 06:52:32.267553 7856 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0320 06:52:32.266351 7856 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: E0320 06:52:48.529228 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.530591 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":
\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.533175 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.533228 5136 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.533239 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.533258 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.533271 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:48Z","lastTransitionTime":"2026-03-20T06:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:52:48 crc kubenswrapper[5136]: E0320 06:52:48.546752 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.546964 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.550967 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.551007 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.551016 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.551032 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.551044 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:48Z","lastTransitionTime":"2026-03-20T06:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.558708 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: E0320 06:52:48.564471 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.569062 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.569102 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.569111 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.569126 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.569136 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:48Z","lastTransitionTime":"2026-03-20T06:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.570451 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.582942 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: E0320 06:52:48.584358 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.587506 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.587540 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.587549 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.587563 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.587574 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:48Z","lastTransitionTime":"2026-03-20T06:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.595798 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:14Z\\\",\\\"message\\\":\\\"2026-03-20T06:51:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138\\\\n2026-03-20T06:51:29+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138 to /host/opt/cni/bin/\\\\n2026-03-20T06:51:29Z [verbose] multus-daemon started\\\\n2026-03-20T06:51:29Z [verbose] Readiness Indicator file check\\\\n2026-03-20T06:52:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: E0320 06:52:48.601107 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: E0320 06:52:48.601286 5136 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.619929 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74e7c2eb-78d2-4e11-b100-cd482f320447\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a1d8bf756502952a8d3a972ac37c6e4f5dbc1c3f9041776d82f44db634ad9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f691f77718c46f2602104ecb33424a7c7443c5b34f9f2322cf5a1f608b1131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d76808c1f28f48134ff4f0c3e1a0708c6a2761101662ea1a7b339f52871cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[
{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://485ffe98cdc3606f5ed78e234077affc1018885123a92cb6def4150118c62f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af19f920b4713d01034bfa44614deeacb01000a2d289fd3f9fb8ac327285f0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainer
Statuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.637615 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.647335 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.659564 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:49 crc kubenswrapper[5136]: I0320 06:52:49.396522 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:49 crc kubenswrapper[5136]: E0320 06:52:49.396713 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:50 crc kubenswrapper[5136]: I0320 06:52:50.396688 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:50 crc kubenswrapper[5136]: I0320 06:52:50.397337 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:50 crc kubenswrapper[5136]: E0320 06:52:50.397416 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:50 crc kubenswrapper[5136]: I0320 06:52:50.397552 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:50 crc kubenswrapper[5136]: E0320 06:52:50.397732 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:50 crc kubenswrapper[5136]: E0320 06:52:50.397756 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:51 crc kubenswrapper[5136]: I0320 06:52:51.395568 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:51 crc kubenswrapper[5136]: E0320 06:52:51.395741 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:52 crc kubenswrapper[5136]: I0320 06:52:52.396556 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:52 crc kubenswrapper[5136]: I0320 06:52:52.396603 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:52 crc kubenswrapper[5136]: E0320 06:52:52.396786 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:52 crc kubenswrapper[5136]: I0320 06:52:52.396807 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:52 crc kubenswrapper[5136]: E0320 06:52:52.396994 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:52 crc kubenswrapper[5136]: E0320 06:52:52.397096 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:53 crc kubenswrapper[5136]: I0320 06:52:53.396221 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:53 crc kubenswrapper[5136]: E0320 06:52:53.396419 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:53 crc kubenswrapper[5136]: E0320 06:52:53.511396 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 06:52:54 crc kubenswrapper[5136]: I0320 06:52:54.396092 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:54 crc kubenswrapper[5136]: I0320 06:52:54.396256 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:54 crc kubenswrapper[5136]: I0320 06:52:54.396467 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:54 crc kubenswrapper[5136]: E0320 06:52:54.396721 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:54 crc kubenswrapper[5136]: E0320 06:52:54.396953 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:54 crc kubenswrapper[5136]: E0320 06:52:54.397158 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:55 crc kubenswrapper[5136]: I0320 06:52:55.396047 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:55 crc kubenswrapper[5136]: E0320 06:52:55.396200 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:56 crc kubenswrapper[5136]: I0320 06:52:56.396699 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:56 crc kubenswrapper[5136]: E0320 06:52:56.396854 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:56 crc kubenswrapper[5136]: I0320 06:52:56.396925 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:56 crc kubenswrapper[5136]: I0320 06:52:56.396949 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:56 crc kubenswrapper[5136]: E0320 06:52:56.397165 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:56 crc kubenswrapper[5136]: E0320 06:52:56.397690 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:57 crc kubenswrapper[5136]: I0320 06:52:57.396121 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:57 crc kubenswrapper[5136]: E0320 06:52:57.396312 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.395772 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.395791 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.395955 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:58 crc kubenswrapper[5136]: E0320 06:52:58.396083 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:58 crc kubenswrapper[5136]: E0320 06:52:58.396226 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:58 crc kubenswrapper[5136]: E0320 06:52:58.396414 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.412549 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25
97126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.435810 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.448412 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.474122 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:32Z\\\",\\\"message\\\":\\\"penshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0320 06:52:32.267553 7856 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0320 06:52:32.266351 7856 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.489091 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.509690 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:58 crc kubenswrapper[5136]: E0320 06:52:58.513067 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.525598 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.540849 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:52:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.580079 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podStartSLOduration=141.580059686 podStartE2EDuration="2m21.580059686s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:52:58.563907547 +0000 UTC m=+210.823218698" watchObservedRunningTime="2026-03-20 06:52:58.580059686 +0000 UTC m=+210.839370837" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.603908 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-tjpps" podStartSLOduration=141.603892856 podStartE2EDuration="2m21.603892856s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:52:58.58050291 +0000 UTC m=+210.839814061" watchObservedRunningTime="2026-03-20 06:52:58.603892856 +0000 UTC m=+210.863203997" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.604020 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=53.60401636 podStartE2EDuration="53.60401636s" podCreationTimestamp="2026-03-20 06:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:52:58.602688739 +0000 UTC m=+210.861999890" watchObservedRunningTime="2026-03-20 06:52:58.60401636 +0000 UTC m=+210.863327511" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.637457 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-g5hkc" 
podStartSLOduration=142.637443214 podStartE2EDuration="2m22.637443214s" podCreationTimestamp="2026-03-20 06:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:52:58.625574921 +0000 UTC m=+210.884886072" watchObservedRunningTime="2026-03-20 06:52:58.637443214 +0000 UTC m=+210.896754365" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.650174 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" podStartSLOduration=141.650156755 podStartE2EDuration="2m21.650156755s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:52:58.637735123 +0000 UTC m=+210.897046274" watchObservedRunningTime="2026-03-20 06:52:58.650156755 +0000 UTC m=+210.909467906" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.697746 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-pt4jb" podStartSLOduration=142.697729164 podStartE2EDuration="2m22.697729164s" podCreationTimestamp="2026-03-20 06:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:52:58.674676968 +0000 UTC m=+210.933988119" watchObservedRunningTime="2026-03-20 06:52:58.697729164 +0000 UTC m=+210.957040315" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.708404 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" podStartSLOduration=141.70838945 podStartE2EDuration="2m21.70838945s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-20 06:52:58.698160318 +0000 UTC m=+210.957471499" watchObservedRunningTime="2026-03-20 06:52:58.70838945 +0000 UTC m=+210.967700601" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.990810 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.990895 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.990913 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.990937 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.990957 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:58Z","lastTransitionTime":"2026-03-20T06:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.055876 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f"] Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.056762 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.059943 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.059961 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.060194 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.060509 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.077226 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=95.077203574 podStartE2EDuration="1m35.077203574s" podCreationTimestamp="2026-03-20 06:51:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:52:59.076248624 +0000 UTC m=+211.335559795" watchObservedRunningTime="2026-03-20 06:52:59.077203574 +0000 UTC m=+211.336514765" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.140744 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ac47cba1-1678-408b-9a4d-21d4f3e964ed-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.140934 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac47cba1-1678-408b-9a4d-21d4f3e964ed-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.140990 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac47cba1-1678-408b-9a4d-21d4f3e964ed-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.141082 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac47cba1-1678-408b-9a4d-21d4f3e964ed-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.141314 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ac47cba1-1678-408b-9a4d-21d4f3e964ed-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.177505 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=66.177485194 podStartE2EDuration="1m6.177485194s" podCreationTimestamp="2026-03-20 
06:51:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:52:59.176122422 +0000 UTC m=+211.435433593" watchObservedRunningTime="2026-03-20 06:52:59.177485194 +0000 UTC m=+211.436796365" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.192900 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=124.19287727 podStartE2EDuration="2m4.19287727s" podCreationTimestamp="2026-03-20 06:50:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:52:59.192597221 +0000 UTC m=+211.451908412" watchObservedRunningTime="2026-03-20 06:52:59.19287727 +0000 UTC m=+211.452188431" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.205050 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=123.205028873 podStartE2EDuration="2m3.205028873s" podCreationTimestamp="2026-03-20 06:50:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:52:59.204377222 +0000 UTC m=+211.463688393" watchObservedRunningTime="2026-03-20 06:52:59.205028873 +0000 UTC m=+211.464340034" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.242333 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ac47cba1-1678-408b-9a4d-21d4f3e964ed-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.242649 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac47cba1-1678-408b-9a4d-21d4f3e964ed-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.242445 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ac47cba1-1678-408b-9a4d-21d4f3e964ed-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.242806 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac47cba1-1678-408b-9a4d-21d4f3e964ed-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.242949 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac47cba1-1678-408b-9a4d-21d4f3e964ed-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.242991 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ac47cba1-1678-408b-9a4d-21d4f3e964ed-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.243148 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ac47cba1-1678-408b-9a4d-21d4f3e964ed-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.244977 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac47cba1-1678-408b-9a4d-21d4f3e964ed-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.252035 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac47cba1-1678-408b-9a4d-21d4f3e964ed-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.264547 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac47cba1-1678-408b-9a4d-21d4f3e964ed-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.376383 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.397637 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:59 crc kubenswrapper[5136]: E0320 06:52:59.398781 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.399092 5136 scope.go:117] "RemoveContainer" containerID="07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c" Mar 20 06:52:59 crc kubenswrapper[5136]: E0320 06:52:59.399264 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" Mar 20 06:52:59 crc kubenswrapper[5136]: W0320 06:52:59.404170 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac47cba1_1678_408b_9a4d_21d4f3e964ed.slice/crio-541251c099eb0f8dfc80c6aec07ffaba9fdf728949a9a1d9c04949f3d0e4a3e9 WatchSource:0}: Error finding container 541251c099eb0f8dfc80c6aec07ffaba9fdf728949a9a1d9c04949f3d0e4a3e9: Status 404 returned error can't find the container with id 541251c099eb0f8dfc80c6aec07ffaba9fdf728949a9a1d9c04949f3d0e4a3e9 Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.467495 5136 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.475920 5136 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 20 06:53:00 crc kubenswrapper[5136]: I0320 06:53:00.353959 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" event={"ID":"ac47cba1-1678-408b-9a4d-21d4f3e964ed","Type":"ContainerStarted","Data":"e63724aa394bed1cccc97a353a6e9c1076266155e6b9f60bfdfbbf1a33f9cde4"} Mar 20 06:53:00 crc kubenswrapper[5136]: I0320 06:53:00.354044 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" event={"ID":"ac47cba1-1678-408b-9a4d-21d4f3e964ed","Type":"ContainerStarted","Data":"541251c099eb0f8dfc80c6aec07ffaba9fdf728949a9a1d9c04949f3d0e4a3e9"} Mar 20 06:53:00 crc kubenswrapper[5136]: I0320 06:53:00.374494 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" podStartSLOduration=143.374466931 podStartE2EDuration="2m23.374466931s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:00.373694286 +0000 UTC m=+212.633005477" watchObservedRunningTime="2026-03-20 06:53:00.374466931 +0000 UTC m=+212.633778122" Mar 20 06:53:00 crc kubenswrapper[5136]: I0320 06:53:00.396296 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.396493 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:53:00 crc kubenswrapper[5136]: I0320 06:53:00.396320 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.396610 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:53:00 crc kubenswrapper[5136]: I0320 06:53:00.396311 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.396696 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:53:00 crc kubenswrapper[5136]: I0320 06:53:00.455439 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.455679 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:55:02.455641389 +0000 UTC m=+334.714952570 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:00 crc kubenswrapper[5136]: I0320 06:53:00.456649 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:53:00 crc kubenswrapper[5136]: I0320 06:53:00.456853 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:53:00 crc kubenswrapper[5136]: I0320 06:53:00.457038 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.456954 5136 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.457334 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:55:02.457313901 +0000 UTC m=+334.716625052 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.457082 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.457376 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.457390 5136 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.457433 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 06:55:02.457426355 +0000 UTC m=+334.716737506 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.457169 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.457502 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.457535 5136 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.457644 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 06:55:02.457616411 +0000 UTC m=+334.716927602 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:53:00 crc kubenswrapper[5136]: I0320 06:53:00.458065 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.458151 5136 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.458318 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:55:02.458307753 +0000 UTC m=+334.717619124 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:53:01 crc kubenswrapper[5136]: I0320 06:53:01.359523 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tjpps_263c5427-a835-40c6-93cb-4bb66a83ea5b/kube-multus/1.log" Mar 20 06:53:01 crc kubenswrapper[5136]: I0320 06:53:01.360262 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tjpps_263c5427-a835-40c6-93cb-4bb66a83ea5b/kube-multus/0.log" Mar 20 06:53:01 crc kubenswrapper[5136]: I0320 06:53:01.360336 5136 generic.go:334] "Generic (PLEG): container finished" podID="263c5427-a835-40c6-93cb-4bb66a83ea5b" containerID="1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644" exitCode=1 Mar 20 06:53:01 crc kubenswrapper[5136]: I0320 06:53:01.360378 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tjpps" event={"ID":"263c5427-a835-40c6-93cb-4bb66a83ea5b","Type":"ContainerDied","Data":"1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644"} Mar 20 06:53:01 crc kubenswrapper[5136]: I0320 06:53:01.360422 5136 scope.go:117] "RemoveContainer" containerID="84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68" Mar 20 06:53:01 crc kubenswrapper[5136]: I0320 06:53:01.361047 5136 scope.go:117] "RemoveContainer" containerID="1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644" Mar 20 06:53:01 crc kubenswrapper[5136]: E0320 06:53:01.361311 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-tjpps_openshift-multus(263c5427-a835-40c6-93cb-4bb66a83ea5b)\"" 
pod="openshift-multus/multus-tjpps" podUID="263c5427-a835-40c6-93cb-4bb66a83ea5b" Mar 20 06:53:01 crc kubenswrapper[5136]: I0320 06:53:01.395707 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:53:01 crc kubenswrapper[5136]: E0320 06:53:01.395861 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:53:02 crc kubenswrapper[5136]: I0320 06:53:02.365506 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tjpps_263c5427-a835-40c6-93cb-4bb66a83ea5b/kube-multus/1.log" Mar 20 06:53:02 crc kubenswrapper[5136]: I0320 06:53:02.395658 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:53:02 crc kubenswrapper[5136]: I0320 06:53:02.395662 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:53:02 crc kubenswrapper[5136]: E0320 06:53:02.395901 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:53:02 crc kubenswrapper[5136]: I0320 06:53:02.395923 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:53:02 crc kubenswrapper[5136]: E0320 06:53:02.396067 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:53:02 crc kubenswrapper[5136]: E0320 06:53:02.396312 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:53:03 crc kubenswrapper[5136]: I0320 06:53:03.396495 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:53:03 crc kubenswrapper[5136]: E0320 06:53:03.396736 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:53:03 crc kubenswrapper[5136]: E0320 06:53:03.514388 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Mar 20 06:53:04 crc kubenswrapper[5136]: I0320 06:53:04.395997 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:53:04 crc kubenswrapper[5136]: I0320 06:53:04.396068 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:53:04 crc kubenswrapper[5136]: E0320 06:53:04.396125 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:53:04 crc kubenswrapper[5136]: E0320 06:53:04.396221 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:53:04 crc kubenswrapper[5136]: I0320 06:53:04.396293 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:53:04 crc kubenswrapper[5136]: E0320 06:53:04.396409 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:53:05 crc kubenswrapper[5136]: I0320 06:53:05.396441 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:53:05 crc kubenswrapper[5136]: E0320 06:53:05.396624 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:53:06 crc kubenswrapper[5136]: I0320 06:53:06.396020 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:53:06 crc kubenswrapper[5136]: I0320 06:53:06.396064 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:53:06 crc kubenswrapper[5136]: E0320 06:53:06.396730 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:53:06 crc kubenswrapper[5136]: I0320 06:53:06.396095 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:53:06 crc kubenswrapper[5136]: E0320 06:53:06.396834 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:53:06 crc kubenswrapper[5136]: E0320 06:53:06.396954 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:53:07 crc kubenswrapper[5136]: I0320 06:53:07.395905 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:53:07 crc kubenswrapper[5136]: E0320 06:53:07.396077 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:53:08 crc kubenswrapper[5136]: I0320 06:53:08.396046 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:53:08 crc kubenswrapper[5136]: I0320 06:53:08.396155 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:53:08 crc kubenswrapper[5136]: E0320 06:53:08.396859 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:53:08 crc kubenswrapper[5136]: I0320 06:53:08.396909 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:53:08 crc kubenswrapper[5136]: E0320 06:53:08.396919 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:53:08 crc kubenswrapper[5136]: E0320 06:53:08.397064 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:53:08 crc kubenswrapper[5136]: E0320 06:53:08.514966 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 06:53:09 crc kubenswrapper[5136]: I0320 06:53:09.396469 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:53:09 crc kubenswrapper[5136]: E0320 06:53:09.396663 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:53:10 crc kubenswrapper[5136]: I0320 06:53:10.395722 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:53:10 crc kubenswrapper[5136]: I0320 06:53:10.395782 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:53:10 crc kubenswrapper[5136]: I0320 06:53:10.395722 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:53:10 crc kubenswrapper[5136]: E0320 06:53:10.396008 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:53:10 crc kubenswrapper[5136]: E0320 06:53:10.396168 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:53:10 crc kubenswrapper[5136]: E0320 06:53:10.396692 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:53:11 crc kubenswrapper[5136]: I0320 06:53:11.395871 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:53:11 crc kubenswrapper[5136]: E0320 06:53:11.396062 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:53:12 crc kubenswrapper[5136]: I0320 06:53:12.396077 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:53:12 crc kubenswrapper[5136]: E0320 06:53:12.396293 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:53:12 crc kubenswrapper[5136]: I0320 06:53:12.396113 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:53:12 crc kubenswrapper[5136]: I0320 06:53:12.396109 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:53:12 crc kubenswrapper[5136]: E0320 06:53:12.396451 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:53:12 crc kubenswrapper[5136]: E0320 06:53:12.397209 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:53:12 crc kubenswrapper[5136]: I0320 06:53:12.397517 5136 scope.go:117] "RemoveContainer" containerID="07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c" Mar 20 06:53:13 crc kubenswrapper[5136]: I0320 06:53:13.306331 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jz6hg"] Mar 20 06:53:13 crc kubenswrapper[5136]: I0320 06:53:13.306460 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:53:13 crc kubenswrapper[5136]: E0320 06:53:13.306603 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:53:13 crc kubenswrapper[5136]: I0320 06:53:13.413763 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/3.log" Mar 20 06:53:13 crc kubenswrapper[5136]: I0320 06:53:13.417909 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerStarted","Data":"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8"} Mar 20 06:53:13 crc kubenswrapper[5136]: I0320 06:53:13.418352 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:53:13 crc kubenswrapper[5136]: I0320 06:53:13.447450 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podStartSLOduration=156.447433326 podStartE2EDuration="2m36.447433326s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:13.446265349 +0000 UTC m=+225.705576510" watchObservedRunningTime="2026-03-20 06:53:13.447433326 +0000 UTC m=+225.706744477" Mar 20 06:53:13 crc kubenswrapper[5136]: E0320 06:53:13.516051 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 06:53:14 crc kubenswrapper[5136]: I0320 06:53:14.396342 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:53:14 crc kubenswrapper[5136]: E0320 06:53:14.396474 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:53:14 crc kubenswrapper[5136]: I0320 06:53:14.396342 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:53:14 crc kubenswrapper[5136]: I0320 06:53:14.396585 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:53:14 crc kubenswrapper[5136]: E0320 06:53:14.396664 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:53:14 crc kubenswrapper[5136]: E0320 06:53:14.396773 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:53:15 crc kubenswrapper[5136]: I0320 06:53:15.396248 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:53:15 crc kubenswrapper[5136]: E0320 06:53:15.396437 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:53:15 crc kubenswrapper[5136]: I0320 06:53:15.397156 5136 scope.go:117] "RemoveContainer" containerID="1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644" Mar 20 06:53:16 crc kubenswrapper[5136]: I0320 06:53:16.396607 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:53:16 crc kubenswrapper[5136]: I0320 06:53:16.396701 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:53:16 crc kubenswrapper[5136]: E0320 06:53:16.396876 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:53:16 crc kubenswrapper[5136]: I0320 06:53:16.397037 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:53:16 crc kubenswrapper[5136]: E0320 06:53:16.397179 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:53:16 crc kubenswrapper[5136]: E0320 06:53:16.397312 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:53:16 crc kubenswrapper[5136]: I0320 06:53:16.431054 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tjpps_263c5427-a835-40c6-93cb-4bb66a83ea5b/kube-multus/1.log" Mar 20 06:53:16 crc kubenswrapper[5136]: I0320 06:53:16.431143 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tjpps" event={"ID":"263c5427-a835-40c6-93cb-4bb66a83ea5b","Type":"ContainerStarted","Data":"758a96f3880280b2cc4897f196524e1c9a081a903d0afe658e33991167460924"} Mar 20 06:53:17 crc kubenswrapper[5136]: I0320 06:53:17.395698 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:53:17 crc kubenswrapper[5136]: E0320 06:53:17.396003 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:53:18 crc kubenswrapper[5136]: I0320 06:53:18.396415 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:53:18 crc kubenswrapper[5136]: I0320 06:53:18.396523 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:53:18 crc kubenswrapper[5136]: E0320 06:53:18.398532 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:53:18 crc kubenswrapper[5136]: I0320 06:53:18.398565 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:53:18 crc kubenswrapper[5136]: E0320 06:53:18.398696 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:53:18 crc kubenswrapper[5136]: E0320 06:53:18.398805 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.368414 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.416083 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.416720 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-274sn"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.417080 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.417560 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.417908 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.418291 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.419116 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.419396 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rmdpp"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.419967 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.422042 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.424719 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.425043 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.425927 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.426043 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.426197 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7gjxt"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.426742 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.428460 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6s42p"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.429104 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.431553 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.431956 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.432278 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.432488 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.432886 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.434895 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-87cfr"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.435381 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.435534 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-87cfr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.435721 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.436401 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fk4pl"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.436490 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.437023 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.443401 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.443658 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.443975 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.444172 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.444439 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.446637 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-djxmj"] Mar 20 
06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.447184 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.447640 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.448178 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-djxmj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.448196 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vbv27"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.448769 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.451618 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.452005 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vbjpm"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.452715 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.453512 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.454104 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.463635 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.469561 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-bjqjp"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.472296 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.486827 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.487170 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.491088 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.491220 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.491337 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.491425 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jvzhk"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.491703 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 20 06:53:19 crc 
kubenswrapper[5136]: I0320 06:53:19.491875 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.491935 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.492028 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.492179 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.492254 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.492313 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.492353 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.493311 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-x4wkf"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.493596 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.498309 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.498646 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.498859 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.499064 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.499267 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.499589 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.499806 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.500071 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.500278 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.500510 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.500634 5136 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.500863 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.501028 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.501433 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.501644 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.502248 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.502455 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.502769 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.503011 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.503186 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.503296 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.503308 5136 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.503355 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.503431 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.503532 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.503424 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.503634 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.503724 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.503809 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.503922 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.504009 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.504163 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 
06:53:19.504184 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.504190 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.504277 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.505140 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.505343 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.505869 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.506511 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.506840 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.507211 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.507221 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.508093 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.508338 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.508339 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.507258 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.511175 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.511652 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.516419 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6ckd4"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.516782 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.516972 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.517000 5136 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.517327 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.517752 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.517935 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.518441 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.518573 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.518603 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.519285 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.519296 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.519384 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.519452 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.519521 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.519778 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.519872 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.519948 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.520374 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6ckd4" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.520395 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.520501 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.520543 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.520658 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.523206 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.523669 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.535508 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.537180 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.542258 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.542681 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.567153 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.568169 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.569292 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.569756 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.570037 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.570207 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.570365 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.570486 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.572119 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.573415 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.573737 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.575908 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.576184 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.577678 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.577980 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.578011 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.578045 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.582893 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mbfm4"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.583541 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-glmlt"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.584403 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jvq8j"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.584599 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.584967 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.585256 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.585491 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.585932 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.585958 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.586197 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.586295 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.586364 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.586296 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-jvq8j" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.586741 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.586923 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587270 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwfcd\" (UniqueName: \"kubernetes.io/projected/1a566282-9a27-4172-b5ba-574e0179cfc4-kube-api-access-zwfcd\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587301 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2261aa95-8cc5-4fe7-9515-a065c381aa5b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587321 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de5bcbec-966a-4934-b21a-a459ab3eb7bc-serving-cert\") pod \"openshift-config-operator-7777fb866f-274sn\" (UID: \"de5bcbec-966a-4934-b21a-a459ab3eb7bc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587342 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kgmj\" (UniqueName: \"kubernetes.io/projected/5f83cf2a-8b13-4536-bda7-b21bea494966-kube-api-access-7kgmj\") pod 
\"openshift-apiserver-operator-796bbdcf4f-2mt79\" (UID: \"5f83cf2a-8b13-4536-bda7-b21bea494966\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587362 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/16ee2b48-5dea-48c6-888a-ae52ff44afa4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587383 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-audit-policies\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587438 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587463 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc 
kubenswrapper[5136]: I0320 06:53:19.587491 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-bound-sa-token\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587859 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-etcd-service-ca\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587911 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbj4h\" (UniqueName: \"kubernetes.io/projected/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-kube-api-access-gbj4h\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587928 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2261aa95-8cc5-4fe7-9515-a065c381aa5b-config\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587950 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4vhj\" (UniqueName: 
\"kubernetes.io/projected/de5bcbec-966a-4934-b21a-a459ab3eb7bc-kube-api-access-h4vhj\") pod \"openshift-config-operator-7777fb866f-274sn\" (UID: \"de5bcbec-966a-4934-b21a-a459ab3eb7bc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587968 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl98m\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-kube-api-access-kl98m\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587986 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588000 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588015 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/16ee2b48-5dea-48c6-888a-ae52ff44afa4-registry-certificates\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: 
\"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588032 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr7qj\" (UniqueName: \"kubernetes.io/projected/5491b0c6-578a-430a-82db-943e9c7778e5-kube-api-access-dr7qj\") pod \"downloads-7954f5f757-djxmj\" (UID: \"5491b0c6-578a-430a-82db-943e9c7778e5\") " pod="openshift-console/downloads-7954f5f757-djxmj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588049 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-etcd-client\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588065 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2261aa95-8cc5-4fe7-9515-a065c381aa5b-service-ca-bundle\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588085 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588104 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwk4x\" (UniqueName: \"kubernetes.io/projected/62c9b093-fe6a-4484-844b-31bbb4f6b21a-kube-api-access-zwk4x\") pod \"dns-operator-744455d44c-87cfr\" (UID: \"62c9b093-fe6a-4484-844b-31bbb4f6b21a\") " pod="openshift-dns-operator/dns-operator-744455d44c-87cfr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588121 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588137 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588152 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-config\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588167 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588185 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/246c7ce4-1953-4a0c-9fed-cabc26f79f3f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ckk2q\" (UID: \"246c7ce4-1953-4a0c-9fed-cabc26f79f3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588200 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/de5bcbec-966a-4934-b21a-a459ab3eb7bc-available-featuregates\") pod \"openshift-config-operator-7777fb866f-274sn\" (UID: \"de5bcbec-966a-4934-b21a-a459ab3eb7bc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588216 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltwh5\" (UniqueName: \"kubernetes.io/projected/2261aa95-8cc5-4fe7-9515-a065c381aa5b-kube-api-access-ltwh5\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588230 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a3ca072d-707e-4c94-9b3a-81eabc72f840-images\") pod \"machine-api-operator-5694c8668f-vbjpm\" (UID: \"a3ca072d-707e-4c94-9b3a-81eabc72f840\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 
06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588246 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82z9x\" (UniqueName: \"kubernetes.io/projected/a3ca072d-707e-4c94-9b3a-81eabc72f840-kube-api-access-82z9x\") pod \"machine-api-operator-5694c8668f-vbjpm\" (UID: \"a3ca072d-707e-4c94-9b3a-81eabc72f840\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588260 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/246c7ce4-1953-4a0c-9fed-cabc26f79f3f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ckk2q\" (UID: \"246c7ce4-1953-4a0c-9fed-cabc26f79f3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588274 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16ee2b48-5dea-48c6-888a-ae52ff44afa4-trusted-ca\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588291 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ca072d-707e-4c94-9b3a-81eabc72f840-config\") pod \"machine-api-operator-5694c8668f-vbjpm\" (UID: \"a3ca072d-707e-4c94-9b3a-81eabc72f840\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588305 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-oauth-config\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588319 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0ab617f-fa16-4ff5-ad90-328e952d31fb-serving-cert\") pod \"console-operator-58897d9998-7gjxt\" (UID: \"f0ab617f-fa16-4ff5-ad90-328e952d31fb\") " pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588332 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f0ab617f-fa16-4ff5-ad90-328e952d31fb-trusted-ca\") pod \"console-operator-58897d9998-7gjxt\" (UID: \"f0ab617f-fa16-4ff5-ad90-328e952d31fb\") " pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588400 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-registry-tls\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588417 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5rgk\" (UniqueName: \"kubernetes.io/projected/f0ab617f-fa16-4ff5-ad90-328e952d31fb-kube-api-access-k5rgk\") pod \"console-operator-58897d9998-7gjxt\" (UID: \"f0ab617f-fa16-4ff5-ad90-328e952d31fb\") " pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 
06:53:19.588440 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mtj5k"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588503 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588535 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-service-ca\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588553 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2261aa95-8cc5-4fe7-9515-a065c381aa5b-serving-cert\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588569 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbz86\" (UniqueName: \"kubernetes.io/projected/e358e5eb-5d33-4510-a9fd-4dff0323f61a-kube-api-access-cbz86\") pod \"openshift-controller-manager-operator-756b6f6bc6-bt884\" (UID: \"e358e5eb-5d33-4510-a9fd-4dff0323f61a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" Mar 20 06:53:19 crc kubenswrapper[5136]: E0320 
06:53:19.588595 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:20.088582879 +0000 UTC m=+232.347894030 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588612 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-etcd-ca\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588630 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3ca072d-707e-4c94-9b3a-81eabc72f840-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vbjpm\" (UID: \"a3ca072d-707e-4c94-9b3a-81eabc72f840\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588648 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/246c7ce4-1953-4a0c-9fed-cabc26f79f3f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ckk2q\" (UID: 
\"246c7ce4-1953-4a0c-9fed-cabc26f79f3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588664 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e358e5eb-5d33-4510-a9fd-4dff0323f61a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bt884\" (UID: \"e358e5eb-5d33-4510-a9fd-4dff0323f61a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588683 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-config\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588696 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/62c9b093-fe6a-4484-844b-31bbb4f6b21a-metrics-tls\") pod \"dns-operator-744455d44c-87cfr\" (UID: \"62c9b093-fe6a-4484-844b-31bbb4f6b21a\") " pod="openshift-dns-operator/dns-operator-744455d44c-87cfr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588713 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-serving-cert\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588726 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-serving-cert\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588755 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcvzw\" (UniqueName: \"kubernetes.io/projected/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-kube-api-access-pcvzw\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588770 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f83cf2a-8b13-4536-bda7-b21bea494966-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2mt79\" (UID: \"5f83cf2a-8b13-4536-bda7-b21bea494966\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588804 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588857 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: 
\"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588878 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/16ee2b48-5dea-48c6-888a-ae52ff44afa4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588890 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588895 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0ab617f-fa16-4ff5-ad90-328e952d31fb-config\") pod \"console-operator-58897d9998-7gjxt\" (UID: \"f0ab617f-fa16-4ff5-ad90-328e952d31fb\") " pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588910 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566492-9gbqz"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.589019 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.589437 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588911 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1a566282-9a27-4172-b5ba-574e0179cfc4-audit-dir\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.589559 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f83cf2a-8b13-4536-bda7-b21bea494966-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2mt79\" (UID: \"5f83cf2a-8b13-4536-bda7-b21bea494966\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.589583 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e49af127-1dfc-4213-b763-a4283104f38f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gfft2\" (UID: \"e49af127-1dfc-4213-b763-a4283104f38f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.589605 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e358e5eb-5d33-4510-a9fd-4dff0323f61a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bt884\" (UID: \"e358e5eb-5d33-4510-a9fd-4dff0323f61a\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.589624 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.589652 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-trusted-ca-bundle\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.589669 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmrhp\" (UniqueName: \"kubernetes.io/projected/e49af127-1dfc-4213-b763-a4283104f38f-kube-api-access-cmrhp\") pod \"cluster-samples-operator-665b6dd947-gfft2\" (UID: \"e49af127-1dfc-4213-b763-a4283104f38f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.589690 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-oauth-serving-cert\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.589703 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566492-9gbqz" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.589708 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrbcl\" (UniqueName: \"kubernetes.io/projected/246c7ce4-1953-4a0c-9fed-cabc26f79f3f-kube-api-access-jrbcl\") pod \"cluster-image-registry-operator-dc59b4c8b-ckk2q\" (UID: \"246c7ce4-1953-4a0c-9fed-cabc26f79f3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.589972 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mmm42"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.590547 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.594459 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-274sn"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.594501 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rmdpp"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.594587 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7gjxt"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.594617 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.594630 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pzwlk"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.595419 5136 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.595433 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mbfm4"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.595443 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6ckd4"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.595515 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.595739 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.598653 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-87cfr"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.598996 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.600669 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vbv27"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.602459 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.609790 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fk4pl"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.615329 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.618353 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.619181 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.626433 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.626503 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.626733 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.629186 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-bjqjp"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.630667 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6s42p"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.632529 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.633881 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-djxmj"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.635744 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.637883 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jvzhk"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.639408 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.640353 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.643125 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.644848 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mmm42"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.646485 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-wnlnd"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.647321 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-wnlnd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.648633 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.650096 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jvq8j"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.651943 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.652881 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-vwn87"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.653976 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vwn87" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.654367 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-glmlt"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.655725 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mtj5k"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.657388 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vbjpm"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.658870 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.660011 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.661759 5136 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.662750 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wnlnd"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.664318 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vwn87"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.665403 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566492-9gbqz"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.666629 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pzwlk"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.668212 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.669086 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-8mwfm"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.669790 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8mwfm" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.679032 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690444 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690563 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/de5bcbec-966a-4934-b21a-a459ab3eb7bc-available-featuregates\") pod \"openshift-config-operator-7777fb866f-274sn\" (UID: \"de5bcbec-966a-4934-b21a-a459ab3eb7bc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690591 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a3ca072d-707e-4c94-9b3a-81eabc72f840-images\") pod \"machine-api-operator-5694c8668f-vbjpm\" (UID: \"a3ca072d-707e-4c94-9b3a-81eabc72f840\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690609 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82z9x\" (UniqueName: \"kubernetes.io/projected/a3ca072d-707e-4c94-9b3a-81eabc72f840-kube-api-access-82z9x\") pod \"machine-api-operator-5694c8668f-vbjpm\" (UID: \"a3ca072d-707e-4c94-9b3a-81eabc72f840\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 06:53:19 crc 
kubenswrapper[5136]: I0320 06:53:19.690628 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ebaac2a5-0001-4d47-9d55-8ff138364356-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2h2rd\" (UID: \"ebaac2a5-0001-4d47-9d55-8ff138364356\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690643 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-etcd-serving-ca\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690664 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11250cf1-2849-42f6-8a9c-85d673b4b097-serving-cert\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690678 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22cf75b6-1525-436a-9999-96f3b2393a03-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-x6mlj\" (UID: \"22cf75b6-1525-436a-9999-96f3b2393a03\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690696 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0ab617f-fa16-4ff5-ad90-328e952d31fb-serving-cert\") pod 
\"console-operator-58897d9998-7gjxt\" (UID: \"f0ab617f-fa16-4ff5-ad90-328e952d31fb\") " pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690712 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f0ab617f-fa16-4ff5-ad90-328e952d31fb-trusted-ca\") pod \"console-operator-58897d9998-7gjxt\" (UID: \"f0ab617f-fa16-4ff5-ad90-328e952d31fb\") " pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690730 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ef0aa20b-fefd-4024-82fd-3b5e014ce1d7-machine-approver-tls\") pod \"machine-approver-56656f9798-4hr5m\" (UID: \"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690746 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/882e7562-0811-4a27-9e79-cae539acc27d-audit-dir\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690764 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-service-ca\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690780 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690796 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af7427ab-0805-477b-b064-f4258cef3ace-config\") pod \"kube-apiserver-operator-766d6c64bb-k5mxn\" (UID: \"af7427ab-0805-477b-b064-f4258cef3ace\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690825 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/22cf75b6-1525-436a-9999-96f3b2393a03-proxy-tls\") pod \"machine-config-controller-84d6567774-x6mlj\" (UID: \"22cf75b6-1525-436a-9999-96f3b2393a03\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690843 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2261aa95-8cc5-4fe7-9515-a065c381aa5b-serving-cert\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690858 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbz86\" (UniqueName: \"kubernetes.io/projected/e358e5eb-5d33-4510-a9fd-4dff0323f61a-kube-api-access-cbz86\") pod \"openshift-controller-manager-operator-756b6f6bc6-bt884\" (UID: \"e358e5eb-5d33-4510-a9fd-4dff0323f61a\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690876 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/246c7ce4-1953-4a0c-9fed-cabc26f79f3f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ckk2q\" (UID: \"246c7ce4-1953-4a0c-9fed-cabc26f79f3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690901 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ebaac2a5-0001-4d47-9d55-8ff138364356-srv-cert\") pod \"olm-operator-6b444d44fb-2h2rd\" (UID: \"ebaac2a5-0001-4d47-9d55-8ff138364356\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690917 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mdvw\" (UniqueName: \"kubernetes.io/projected/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-kube-api-access-4mdvw\") pod \"collect-profiles-29566485-n6252\" (UID: \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690932 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3ca072d-707e-4c94-9b3a-81eabc72f840-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vbjpm\" (UID: \"a3ca072d-707e-4c94-9b3a-81eabc72f840\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690953 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/62c9b093-fe6a-4484-844b-31bbb4f6b21a-metrics-tls\") pod \"dns-operator-744455d44c-87cfr\" (UID: \"62c9b093-fe6a-4484-844b-31bbb4f6b21a\") " pod="openshift-dns-operator/dns-operator-744455d44c-87cfr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690970 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-image-import-ca\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690986 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6490da1-20d4-4a12-bf24-50e24f3217dc-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tzhmj\" (UID: \"a6490da1-20d4-4a12-bf24-50e24f3217dc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691003 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcvzw\" (UniqueName: \"kubernetes.io/projected/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-kube-api-access-pcvzw\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691019 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-mountpoint-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc 
kubenswrapper[5136]: I0320 06:53:19.691035 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1a566282-9a27-4172-b5ba-574e0179cfc4-audit-dir\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691052 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f83cf2a-8b13-4536-bda7-b21bea494966-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2mt79\" (UID: \"5f83cf2a-8b13-4536-bda7-b21bea494966\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691068 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0ab617f-fa16-4ff5-ad90-328e952d31fb-config\") pod \"console-operator-58897d9998-7gjxt\" (UID: \"f0ab617f-fa16-4ff5-ad90-328e952d31fb\") " pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691085 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e49af127-1dfc-4213-b763-a4283104f38f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gfft2\" (UID: \"e49af127-1dfc-4213-b763-a4283104f38f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691101 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gflmz\" (UniqueName: \"kubernetes.io/projected/a437188c-af0a-415d-9b0e-9e5b66f41ea3-kube-api-access-gflmz\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-6jcpg\" (UID: \"a437188c-af0a-415d-9b0e-9e5b66f41ea3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691120 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e358e5eb-5d33-4510-a9fd-4dff0323f61a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bt884\" (UID: \"e358e5eb-5d33-4510-a9fd-4dff0323f61a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691134 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-encryption-config\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691150 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxjlb\" (UniqueName: \"kubernetes.io/projected/ef0aa20b-fefd-4024-82fd-3b5e014ce1d7-kube-api-access-xxjlb\") pod \"machine-approver-56656f9798-4hr5m\" (UID: \"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691172 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-trusted-ca-bundle\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: 
I0320 06:53:19.691189 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmrhp\" (UniqueName: \"kubernetes.io/projected/e49af127-1dfc-4213-b763-a4283104f38f-kube-api-access-cmrhp\") pod \"cluster-samples-operator-665b6dd947-gfft2\" (UID: \"e49af127-1dfc-4213-b763-a4283104f38f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691207 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edd610c6-14f6-4da1-83ab-b816dac3ed91-serving-cert\") pod \"route-controller-manager-6576b87f9c-m4btr\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691592 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-oauth-serving-cert\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691611 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4krs5\" (UniqueName: \"kubernetes.io/projected/22cf75b6-1525-436a-9999-96f3b2393a03-kube-api-access-4krs5\") pod \"machine-config-controller-84d6567774-x6mlj\" (UID: \"22cf75b6-1525-436a-9999-96f3b2393a03\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691631 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwfcd\" (UniqueName: 
\"kubernetes.io/projected/1a566282-9a27-4172-b5ba-574e0179cfc4-kube-api-access-zwfcd\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691647 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6490da1-20d4-4a12-bf24-50e24f3217dc-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tzhmj\" (UID: \"a6490da1-20d4-4a12-bf24-50e24f3217dc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691661 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6490da1-20d4-4a12-bf24-50e24f3217dc-config\") pod \"kube-controller-manager-operator-78b949d7b-tzhmj\" (UID: \"a6490da1-20d4-4a12-bf24-50e24f3217dc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691677 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2261aa95-8cc5-4fe7-9515-a065c381aa5b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691695 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/16ee2b48-5dea-48c6-888a-ae52ff44afa4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691710 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-bound-sa-token\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691727 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691745 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd610c6-14f6-4da1-83ab-b816dac3ed91-config\") pod \"route-controller-manager-6576b87f9c-m4btr\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691764 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-plugins-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691790 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4vhj\" (UniqueName: 
\"kubernetes.io/projected/de5bcbec-966a-4934-b21a-a459ab3eb7bc-kube-api-access-h4vhj\") pod \"openshift-config-operator-7777fb866f-274sn\" (UID: \"de5bcbec-966a-4934-b21a-a459ab3eb7bc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691806 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl98m\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-kube-api-access-kl98m\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691848 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/11250cf1-2849-42f6-8a9c-85d673b4b097-node-pullsecrets\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691862 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-registration-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691880 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc 
kubenswrapper[5136]: I0320 06:53:19.691897 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr7qj\" (UniqueName: \"kubernetes.io/projected/5491b0c6-578a-430a-82db-943e9c7778e5-kube-api-access-dr7qj\") pod \"downloads-7954f5f757-djxmj\" (UID: \"5491b0c6-578a-430a-82db-943e9c7778e5\") " pod="openshift-console/downloads-7954f5f757-djxmj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691923 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3289a6fb-5129-4ab6-b4b1-9d0b2d8af713-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7wgb\" (UID: \"3289a6fb-5129-4ab6-b4b1-9d0b2d8af713\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691938 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a437188c-af0a-415d-9b0e-9e5b66f41ea3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jcpg\" (UID: \"a437188c-af0a-415d-9b0e-9e5b66f41ea3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691954 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691986 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-csi-data-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692002 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-audit-policies\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692019 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltwh5\" (UniqueName: \"kubernetes.io/projected/2261aa95-8cc5-4fe7-9515-a065c381aa5b-kube-api-access-ltwh5\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692035 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/11250cf1-2849-42f6-8a9c-85d673b4b097-encryption-config\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692052 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16ee2b48-5dea-48c6-888a-ae52ff44afa4-trusted-ca\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692067 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/246c7ce4-1953-4a0c-9fed-cabc26f79f3f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ckk2q\" (UID: \"246c7ce4-1953-4a0c-9fed-cabc26f79f3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692083 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3289a6fb-5129-4ab6-b4b1-9d0b2d8af713-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7wgb\" (UID: \"3289a6fb-5129-4ab6-b4b1-9d0b2d8af713\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692099 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-config-volume\") pod \"collect-profiles-29566485-n6252\" (UID: \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692115 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ca072d-707e-4c94-9b3a-81eabc72f840-config\") pod \"machine-api-operator-5694c8668f-vbjpm\" (UID: \"a3ca072d-707e-4c94-9b3a-81eabc72f840\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692130 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-oauth-config\") pod \"console-f9d7485db-bjqjp\" (UID: 
\"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692147 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-registry-tls\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692163 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5rgk\" (UniqueName: \"kubernetes.io/projected/f0ab617f-fa16-4ff5-ad90-328e952d31fb-kube-api-access-k5rgk\") pod \"console-operator-58897d9998-7gjxt\" (UID: \"f0ab617f-fa16-4ff5-ad90-328e952d31fb\") " pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692179 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-etcd-ca\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692333 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0ab617f-fa16-4ff5-ad90-328e952d31fb-config\") pod \"console-operator-58897d9998-7gjxt\" (UID: \"f0ab617f-fa16-4ff5-ad90-328e952d31fb\") " pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692363 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ef0aa20b-fefd-4024-82fd-3b5e014ce1d7-auth-proxy-config\") pod 
\"machine-approver-56656f9798-4hr5m\" (UID: \"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692450 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntblj\" (UniqueName: \"kubernetes.io/projected/c87c53d2-e35b-43e3-910e-852b635c46b8-kube-api-access-ntblj\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692482 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-config\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692492 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-trusted-ca-bundle\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692503 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e358e5eb-5d33-4510-a9fd-4dff0323f61a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bt884\" (UID: \"e358e5eb-5d33-4510-a9fd-4dff0323f61a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692553 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" 
(UniqueName: \"kubernetes.io/secret/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-secret-volume\") pod \"collect-profiles-29566485-n6252\" (UID: \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692582 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-serving-cert\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692599 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-serving-cert\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692614 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-audit\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692639 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a437188c-af0a-415d-9b0e-9e5b66f41ea3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jcpg\" (UID: \"a437188c-af0a-415d-9b0e-9e5b66f41ea3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692668 
5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f83cf2a-8b13-4536-bda7-b21bea494966-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2mt79\" (UID: \"5f83cf2a-8b13-4536-bda7-b21bea494966\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692693 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692713 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692735 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/16ee2b48-5dea-48c6-888a-ae52ff44afa4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692753 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-serving-cert\") pod 
\"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692771 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhbx4\" (UniqueName: \"kubernetes.io/projected/ebaac2a5-0001-4d47-9d55-8ff138364356-kube-api-access-qhbx4\") pod \"olm-operator-6b444d44fb-2h2rd\" (UID: \"ebaac2a5-0001-4d47-9d55-8ff138364356\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692789 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692827 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef0aa20b-fefd-4024-82fd-3b5e014ce1d7-config\") pod \"machine-approver-56656f9798-4hr5m\" (UID: \"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692859 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrbcl\" (UniqueName: \"kubernetes.io/projected/246c7ce4-1953-4a0c-9fed-cabc26f79f3f-kube-api-access-jrbcl\") pod \"cluster-image-registry-operator-dc59b4c8b-ckk2q\" (UID: \"246c7ce4-1953-4a0c-9fed-cabc26f79f3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692876 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4jmx\" (UniqueName: \"kubernetes.io/projected/11250cf1-2849-42f6-8a9c-85d673b4b097-kube-api-access-j4jmx\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692904 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/11250cf1-2849-42f6-8a9c-85d673b4b097-audit-dir\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692923 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de5bcbec-966a-4934-b21a-a459ab3eb7bc-serving-cert\") pod \"openshift-config-operator-7777fb866f-274sn\" (UID: \"de5bcbec-966a-4934-b21a-a459ab3eb7bc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692939 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kgmj\" (UniqueName: \"kubernetes.io/projected/5f83cf2a-8b13-4536-bda7-b21bea494966-kube-api-access-7kgmj\") pod \"openshift-apiserver-operator-796bbdcf4f-2mt79\" (UID: \"5f83cf2a-8b13-4536-bda7-b21bea494966\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692956 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-audit-policies\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692973 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/edd610c6-14f6-4da1-83ab-b816dac3ed91-client-ca\") pod \"route-controller-manager-6576b87f9c-m4btr\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.693028 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.693027 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1a566282-9a27-4172-b5ba-574e0179cfc4-audit-dir\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.693046 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-etcd-service-ca\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.693063 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-etcd-client\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.693081 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhxhz\" (UniqueName: \"kubernetes.io/projected/882e7562-0811-4a27-9e79-cae539acc27d-kube-api-access-bhxhz\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691363 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/de5bcbec-966a-4934-b21a-a459ab3eb7bc-available-featuregates\") pod \"openshift-config-operator-7777fb866f-274sn\" (UID: \"de5bcbec-966a-4934-b21a-a459ab3eb7bc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.693433 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f0ab617f-fa16-4ff5-ad90-328e952d31fb-trusted-ca\") pod \"console-operator-58897d9998-7gjxt\" (UID: \"f0ab617f-fa16-4ff5-ad90-328e952d31fb\") " pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691403 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a3ca072d-707e-4c94-9b3a-81eabc72f840-images\") pod \"machine-api-operator-5694c8668f-vbjpm\" (UID: \"a3ca072d-707e-4c94-9b3a-81eabc72f840\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.693860 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-oauth-serving-cert\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.693993 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.694606 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-audit-policies\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.694854 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16ee2b48-5dea-48c6-888a-ae52ff44afa4-trusted-ca\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.695432 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/16ee2b48-5dea-48c6-888a-ae52ff44afa4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.696504 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbj4h\" (UniqueName: \"kubernetes.io/projected/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-kube-api-access-gbj4h\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.696620 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2261aa95-8cc5-4fe7-9515-a065c381aa5b-config\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.696737 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.696914 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sxkv\" (UniqueName: \"kubernetes.io/projected/edd610c6-14f6-4da1-83ab-b816dac3ed91-kube-api-access-6sxkv\") pod \"route-controller-manager-6576b87f9c-m4btr\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.696976 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-config\") pod \"apiserver-76f77b778f-glmlt\" 
(UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.697035 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/11250cf1-2849-42f6-8a9c-85d673b4b097-etcd-client\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.697058 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d10c92de-8478-436b-bdc0-0fe231faf35c-images\") pod \"machine-config-operator-74547568cd-zmz56\" (UID: \"d10c92de-8478-436b-bdc0-0fe231faf35c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.697095 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/16ee2b48-5dea-48c6-888a-ae52ff44afa4-registry-certificates\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.697122 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af7427ab-0805-477b-b064-f4258cef3ace-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k5mxn\" (UID: \"af7427ab-0805-477b-b064-f4258cef3ace\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.697144 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af7427ab-0805-477b-b064-f4258cef3ace-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k5mxn\" (UID: \"af7427ab-0805-477b-b064-f4258cef3ace\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.697172 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2261aa95-8cc5-4fe7-9515-a065c381aa5b-service-ca-bundle\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.697196 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-trusted-ca-bundle\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.697220 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3289a6fb-5129-4ab6-b4b1-9d0b2d8af713-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7wgb\" (UID: \"3289a6fb-5129-4ab6-b4b1-9d0b2d8af713\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.697278 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-etcd-client\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.697340 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f83cf2a-8b13-4536-bda7-b21bea494966-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2mt79\" (UID: \"5f83cf2a-8b13-4536-bda7-b21bea494966\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.697886 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2261aa95-8cc5-4fe7-9515-a065c381aa5b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.697970 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-config\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.697992 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/246c7ce4-1953-4a0c-9fed-cabc26f79f3f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ckk2q\" (UID: \"246c7ce4-1953-4a0c-9fed-cabc26f79f3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.698170 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2261aa95-8cc5-4fe7-9515-a065c381aa5b-config\") pod 
\"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.698273 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-service-ca\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.698491 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.698684 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-etcd-service-ca\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.698971 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ca072d-707e-4c94-9b3a-81eabc72f840-config\") pod \"machine-api-operator-5694c8668f-vbjpm\" (UID: \"a3ca072d-707e-4c94-9b3a-81eabc72f840\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.699209 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwk4x\" (UniqueName: 
\"kubernetes.io/projected/62c9b093-fe6a-4484-844b-31bbb4f6b21a-kube-api-access-zwk4x\") pod \"dns-operator-744455d44c-87cfr\" (UID: \"62c9b093-fe6a-4484-844b-31bbb4f6b21a\") " pod="openshift-dns-operator/dns-operator-744455d44c-87cfr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.699250 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d10c92de-8478-436b-bdc0-0fe231faf35c-proxy-tls\") pod \"machine-config-operator-74547568cd-zmz56\" (UID: \"d10c92de-8478-436b-bdc0-0fe231faf35c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.699281 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.699307 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.699334 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc 
kubenswrapper[5136]: I0320 06:53:19.699363 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-config\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.699388 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.699410 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d10c92de-8478-436b-bdc0-0fe231faf35c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zmz56\" (UID: \"d10c92de-8478-436b-bdc0-0fe231faf35c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.699435 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-socket-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.699463 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/246c7ce4-1953-4a0c-9fed-cabc26f79f3f-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-dc59b4c8b-ckk2q\" (UID: \"246c7ce4-1953-4a0c-9fed-cabc26f79f3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.699490 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stkb4\" (UniqueName: \"kubernetes.io/projected/d10c92de-8478-436b-bdc0-0fe231faf35c-kube-api-access-stkb4\") pod \"machine-config-operator-74547568cd-zmz56\" (UID: \"d10c92de-8478-436b-bdc0-0fe231faf35c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.699494 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.699510 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.699583 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc 
kubenswrapper[5136]: I0320 06:53:19.700367 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-config\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.700732 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.700777 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2261aa95-8cc5-4fe7-9515-a065c381aa5b-service-ca-bundle\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.701047 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/16ee2b48-5dea-48c6-888a-ae52ff44afa4-registry-certificates\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.701062 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-etcd-ca\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: E0320 06:53:19.701164 5136 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:20.201142135 +0000 UTC m=+232.460453386 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.701555 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e358e5eb-5d33-4510-a9fd-4dff0323f61a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bt884\" (UID: \"e358e5eb-5d33-4510-a9fd-4dff0323f61a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.701691 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e358e5eb-5d33-4510-a9fd-4dff0323f61a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bt884\" (UID: \"e358e5eb-5d33-4510-a9fd-4dff0323f61a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.702530 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: 
\"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.702602 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/62c9b093-fe6a-4484-844b-31bbb4f6b21a-metrics-tls\") pod \"dns-operator-744455d44c-87cfr\" (UID: \"62c9b093-fe6a-4484-844b-31bbb4f6b21a\") " pod="openshift-dns-operator/dns-operator-744455d44c-87cfr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.703376 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0ab617f-fa16-4ff5-ad90-328e952d31fb-serving-cert\") pod \"console-operator-58897d9998-7gjxt\" (UID: \"f0ab617f-fa16-4ff5-ad90-328e952d31fb\") " pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.703732 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/16ee2b48-5dea-48c6-888a-ae52ff44afa4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.703754 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-registry-tls\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.704447 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f83cf2a-8b13-4536-bda7-b21bea494966-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-2mt79\" (UID: \"5f83cf2a-8b13-4536-bda7-b21bea494966\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.704604 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/246c7ce4-1953-4a0c-9fed-cabc26f79f3f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ckk2q\" (UID: \"246c7ce4-1953-4a0c-9fed-cabc26f79f3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.704603 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.704731 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.704835 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3ca072d-707e-4c94-9b3a-81eabc72f840-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vbjpm\" (UID: \"a3ca072d-707e-4c94-9b3a-81eabc72f840\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.705031 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.705087 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.705289 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2261aa95-8cc5-4fe7-9515-a065c381aa5b-serving-cert\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.705313 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de5bcbec-966a-4934-b21a-a459ab3eb7bc-serving-cert\") pod \"openshift-config-operator-7777fb866f-274sn\" (UID: \"de5bcbec-966a-4934-b21a-a459ab3eb7bc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.705771 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-oauth-config\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " 
pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.706120 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.706793 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-serving-cert\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.707279 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-etcd-client\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.708609 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-serving-cert\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.708768 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e49af127-1dfc-4213-b763-a4283104f38f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gfft2\" (UID: 
\"e49af127-1dfc-4213-b763-a4283104f38f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.719890 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.739392 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.759025 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.779396 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.799534 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800048 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/11250cf1-2849-42f6-8a9c-85d673b4b097-node-pullsecrets\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800076 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-registration-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800107 5136 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800125 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3289a6fb-5129-4ab6-b4b1-9d0b2d8af713-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7wgb\" (UID: \"3289a6fb-5129-4ab6-b4b1-9d0b2d8af713\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800142 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a437188c-af0a-415d-9b0e-9e5b66f41ea3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jcpg\" (UID: \"a437188c-af0a-415d-9b0e-9e5b66f41ea3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800158 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800174 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-csi-data-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " 
pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800189 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-audit-policies\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800194 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/11250cf1-2849-42f6-8a9c-85d673b4b097-node-pullsecrets\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800208 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/11250cf1-2849-42f6-8a9c-85d673b4b097-encryption-config\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800228 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3289a6fb-5129-4ab6-b4b1-9d0b2d8af713-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7wgb\" (UID: \"3289a6fb-5129-4ab6-b4b1-9d0b2d8af713\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800246 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-config-volume\") pod \"collect-profiles-29566485-n6252\" 
(UID: \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800271 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ef0aa20b-fefd-4024-82fd-3b5e014ce1d7-auth-proxy-config\") pod \"machine-approver-56656f9798-4hr5m\" (UID: \"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800284 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-csi-data-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800285 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntblj\" (UniqueName: \"kubernetes.io/projected/c87c53d2-e35b-43e3-910e-852b635c46b8-kube-api-access-ntblj\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800325 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-secret-volume\") pod \"collect-profiles-29566485-n6252\" (UID: \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800352 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-audit\") 
pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800381 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a437188c-af0a-415d-9b0e-9e5b66f41ea3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jcpg\" (UID: \"a437188c-af0a-415d-9b0e-9e5b66f41ea3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800411 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-serving-cert\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800436 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhbx4\" (UniqueName: \"kubernetes.io/projected/ebaac2a5-0001-4d47-9d55-8ff138364356-kube-api-access-qhbx4\") pod \"olm-operator-6b444d44fb-2h2rd\" (UID: \"ebaac2a5-0001-4d47-9d55-8ff138364356\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800460 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef0aa20b-fefd-4024-82fd-3b5e014ce1d7-config\") pod \"machine-approver-56656f9798-4hr5m\" (UID: \"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800494 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-j4jmx\" (UniqueName: \"kubernetes.io/projected/11250cf1-2849-42f6-8a9c-85d673b4b097-kube-api-access-j4jmx\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800518 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/11250cf1-2849-42f6-8a9c-85d673b4b097-audit-dir\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800560 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/edd610c6-14f6-4da1-83ab-b816dac3ed91-client-ca\") pod \"route-controller-manager-6576b87f9c-m4btr\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800586 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhxhz\" (UniqueName: \"kubernetes.io/projected/882e7562-0811-4a27-9e79-cae539acc27d-kube-api-access-bhxhz\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800594 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/11250cf1-2849-42f6-8a9c-85d673b4b097-audit-dir\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800608 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-etcd-client\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800655 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-config\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800655 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-registration-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800677 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/11250cf1-2849-42f6-8a9c-85d673b4b097-etcd-client\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800746 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d10c92de-8478-436b-bdc0-0fe231faf35c-images\") pod \"machine-config-operator-74547568cd-zmz56\" (UID: \"d10c92de-8478-436b-bdc0-0fe231faf35c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800788 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6sxkv\" (UniqueName: \"kubernetes.io/projected/edd610c6-14f6-4da1-83ab-b816dac3ed91-kube-api-access-6sxkv\") pod \"route-controller-manager-6576b87f9c-m4btr\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800843 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af7427ab-0805-477b-b064-f4258cef3ace-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k5mxn\" (UID: \"af7427ab-0805-477b-b064-f4258cef3ace\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800869 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af7427ab-0805-477b-b064-f4258cef3ace-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k5mxn\" (UID: \"af7427ab-0805-477b-b064-f4258cef3ace\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800895 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-trusted-ca-bundle\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800935 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3289a6fb-5129-4ab6-b4b1-9d0b2d8af713-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7wgb\" (UID: \"3289a6fb-5129-4ab6-b4b1-9d0b2d8af713\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800965 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d10c92de-8478-436b-bdc0-0fe231faf35c-proxy-tls\") pod \"machine-config-operator-74547568cd-zmz56\" (UID: \"d10c92de-8478-436b-bdc0-0fe231faf35c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800991 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801015 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-socket-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801069 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d10c92de-8478-436b-bdc0-0fe231faf35c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zmz56\" (UID: \"d10c92de-8478-436b-bdc0-0fe231faf35c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801094 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stkb4\" (UniqueName: 
\"kubernetes.io/projected/d10c92de-8478-436b-bdc0-0fe231faf35c-kube-api-access-stkb4\") pod \"machine-config-operator-74547568cd-zmz56\" (UID: \"d10c92de-8478-436b-bdc0-0fe231faf35c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801117 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-etcd-serving-ca\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801137 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-socket-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801138 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11250cf1-2849-42f6-8a9c-85d673b4b097-serving-cert\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801174 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22cf75b6-1525-436a-9999-96f3b2393a03-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-x6mlj\" (UID: \"22cf75b6-1525-436a-9999-96f3b2393a03\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801202 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ebaac2a5-0001-4d47-9d55-8ff138364356-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2h2rd\" (UID: \"ebaac2a5-0001-4d47-9d55-8ff138364356\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801221 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ef0aa20b-fefd-4024-82fd-3b5e014ce1d7-machine-approver-tls\") pod \"machine-approver-56656f9798-4hr5m\" (UID: \"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801237 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/882e7562-0811-4a27-9e79-cae539acc27d-audit-dir\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801253 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af7427ab-0805-477b-b064-f4258cef3ace-config\") pod \"kube-apiserver-operator-766d6c64bb-k5mxn\" (UID: \"af7427ab-0805-477b-b064-f4258cef3ace\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801269 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/22cf75b6-1525-436a-9999-96f3b2393a03-proxy-tls\") pod \"machine-config-controller-84d6567774-x6mlj\" (UID: \"22cf75b6-1525-436a-9999-96f3b2393a03\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801300 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ebaac2a5-0001-4d47-9d55-8ff138364356-srv-cert\") pod \"olm-operator-6b444d44fb-2h2rd\" (UID: \"ebaac2a5-0001-4d47-9d55-8ff138364356\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801314 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mdvw\" (UniqueName: \"kubernetes.io/projected/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-kube-api-access-4mdvw\") pod \"collect-profiles-29566485-n6252\" (UID: \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801330 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6490da1-20d4-4a12-bf24-50e24f3217dc-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tzhmj\" (UID: \"a6490da1-20d4-4a12-bf24-50e24f3217dc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801348 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-image-import-ca\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801367 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-mountpoint-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801387 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gflmz\" (UniqueName: \"kubernetes.io/projected/a437188c-af0a-415d-9b0e-9e5b66f41ea3-kube-api-access-gflmz\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jcpg\" (UID: \"a437188c-af0a-415d-9b0e-9e5b66f41ea3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801406 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-encryption-config\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801421 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxjlb\" (UniqueName: \"kubernetes.io/projected/ef0aa20b-fefd-4024-82fd-3b5e014ce1d7-kube-api-access-xxjlb\") pod \"machine-approver-56656f9798-4hr5m\" (UID: \"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801454 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edd610c6-14f6-4da1-83ab-b816dac3ed91-serving-cert\") pod \"route-controller-manager-6576b87f9c-m4btr\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 
06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801469 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4krs5\" (UniqueName: \"kubernetes.io/projected/22cf75b6-1525-436a-9999-96f3b2393a03-kube-api-access-4krs5\") pod \"machine-config-controller-84d6567774-x6mlj\" (UID: \"22cf75b6-1525-436a-9999-96f3b2393a03\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801504 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6490da1-20d4-4a12-bf24-50e24f3217dc-config\") pod \"kube-controller-manager-operator-78b949d7b-tzhmj\" (UID: \"a6490da1-20d4-4a12-bf24-50e24f3217dc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801518 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6490da1-20d4-4a12-bf24-50e24f3217dc-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tzhmj\" (UID: \"a6490da1-20d4-4a12-bf24-50e24f3217dc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801551 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd610c6-14f6-4da1-83ab-b816dac3ed91-config\") pod \"route-controller-manager-6576b87f9c-m4btr\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801571 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-plugins-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801640 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d10c92de-8478-436b-bdc0-0fe231faf35c-images\") pod \"machine-config-operator-74547568cd-zmz56\" (UID: \"d10c92de-8478-436b-bdc0-0fe231faf35c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801651 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-plugins-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801658 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d10c92de-8478-436b-bdc0-0fe231faf35c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zmz56\" (UID: \"d10c92de-8478-436b-bdc0-0fe231faf35c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801674 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/882e7562-0811-4a27-9e79-cae539acc27d-audit-dir\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801687 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3289a6fb-5129-4ab6-b4b1-9d0b2d8af713-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7wgb\" (UID: \"3289a6fb-5129-4ab6-b4b1-9d0b2d8af713\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801911 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-mountpoint-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.802148 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22cf75b6-1525-436a-9999-96f3b2393a03-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-x6mlj\" (UID: \"22cf75b6-1525-436a-9999-96f3b2393a03\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.802402 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6490da1-20d4-4a12-bf24-50e24f3217dc-config\") pod \"kube-controller-manager-operator-78b949d7b-tzhmj\" (UID: \"a6490da1-20d4-4a12-bf24-50e24f3217dc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.803115 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3289a6fb-5129-4ab6-b4b1-9d0b2d8af713-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7wgb\" (UID: \"3289a6fb-5129-4ab6-b4b1-9d0b2d8af713\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb" Mar 20 06:53:19 
crc kubenswrapper[5136]: E0320 06:53:19.803303 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:20.303288635 +0000 UTC m=+232.562599856 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.806752 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6490da1-20d4-4a12-bf24-50e24f3217dc-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tzhmj\" (UID: \"a6490da1-20d4-4a12-bf24-50e24f3217dc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.806799 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d10c92de-8478-436b-bdc0-0fe231faf35c-proxy-tls\") pod \"machine-config-operator-74547568cd-zmz56\" (UID: \"d10c92de-8478-436b-bdc0-0fe231faf35c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.806999 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/22cf75b6-1525-436a-9999-96f3b2393a03-proxy-tls\") pod \"machine-config-controller-84d6567774-x6mlj\" (UID: \"22cf75b6-1525-436a-9999-96f3b2393a03\") 
" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.819496 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.839466 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.860236 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.879577 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.899316 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.902176 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:19 crc kubenswrapper[5136]: E0320 06:53:19.902348 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:20.402328537 +0000 UTC m=+232.661639688 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.902427 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: E0320 06:53:19.903034 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:20.403026528 +0000 UTC m=+232.662337679 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.906041 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af7427ab-0805-477b-b064-f4258cef3ace-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k5mxn\" (UID: \"af7427ab-0805-477b-b064-f4258cef3ace\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.918946 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.925083 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ebaac2a5-0001-4d47-9d55-8ff138364356-srv-cert\") pod \"olm-operator-6b444d44fb-2h2rd\" (UID: \"ebaac2a5-0001-4d47-9d55-8ff138364356\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.939724 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.944458 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-secret-volume\") pod \"collect-profiles-29566485-n6252\" (UID: \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.946698 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ebaac2a5-0001-4d47-9d55-8ff138364356-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2h2rd\" (UID: \"ebaac2a5-0001-4d47-9d55-8ff138364356\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.958797 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.963839 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af7427ab-0805-477b-b064-f4258cef3ace-config\") pod \"kube-apiserver-operator-766d6c64bb-k5mxn\" (UID: \"af7427ab-0805-477b-b064-f4258cef3ace\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.979251 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.000452 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.004633 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.004750 5136 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:20.504732024 +0000 UTC m=+232.764043185 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.005554 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.005985 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:20.505967093 +0000 UTC m=+232.765278244 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.019790 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.039981 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.064618 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.079796 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.099410 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.106297 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.106521 5136 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:20.606494102 +0000 UTC m=+232.865805293 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.106660 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.107170 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:20.607125371 +0000 UTC m=+232.866436522 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.119750 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.126124 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edd610c6-14f6-4da1-83ab-b816dac3ed91-serving-cert\") pod \"route-controller-manager-6576b87f9c-m4btr\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.139233 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.144071 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd610c6-14f6-4da1-83ab-b816dac3ed91-config\") pod \"route-controller-manager-6576b87f9c-m4btr\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.160033 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.162193 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/edd610c6-14f6-4da1-83ab-b816dac3ed91-client-ca\") pod \"route-controller-manager-6576b87f9c-m4btr\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.179657 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.199537 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.208184 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.209067 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:20.709038283 +0000 UTC m=+232.968349474 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.219850 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.239612 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.260468 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.279945 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.285519 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a437188c-af0a-415d-9b0e-9e5b66f41ea3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jcpg\" (UID: \"a437188c-af0a-415d-9b0e-9e5b66f41ea3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.299649 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 20 06:53:20 crc kubenswrapper[5136]: 
I0320 06:53:20.301954 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a437188c-af0a-415d-9b0e-9e5b66f41ea3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jcpg\" (UID: \"a437188c-af0a-415d-9b0e-9e5b66f41ea3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.311339 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.311931 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:20.811906675 +0000 UTC m=+233.071217866 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.320314 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.339574 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.359243 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.380359 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.385810 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ef0aa20b-fefd-4024-82fd-3b5e014ce1d7-machine-approver-tls\") pod \"machine-approver-56656f9798-4hr5m\" (UID: \"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.396654 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.396663 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.396961 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.399517 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.401895 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ef0aa20b-fefd-4024-82fd-3b5e014ce1d7-auth-proxy-config\") pod \"machine-approver-56656f9798-4hr5m\" (UID: \"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.412567 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.412781 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:20.912748823 +0000 UTC m=+233.172060014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.413905 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.414589 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:20.914562911 +0000 UTC m=+233.173874102 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.420399 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.421353 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef0aa20b-fefd-4024-82fd-3b5e014ce1d7-config\") pod \"machine-approver-56656f9798-4hr5m\" (UID: \"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.440146 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.460867 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.480282 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.499780 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.516180 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.516530 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.016491063 +0000 UTC m=+233.275802214 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.516958 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.517653 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.017619309 +0000 UTC m=+233.276930650 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.533236 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.540477 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.560194 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.580710 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.597662 5136 request.go:700] Waited for 1.01183552s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&limit=500&resourceVersion=0 Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.599706 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.602335 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-audit\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " 
pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.618921 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.619170 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.119138669 +0000 UTC m=+233.378449850 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.620018 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.620588 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-20 06:53:21.120565244 +0000 UTC m=+233.379876425 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.621451 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.637107 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/11250cf1-2849-42f6-8a9c-85d673b4b097-etcd-client\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.640466 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.650797 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-etcd-serving-ca\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.659615 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.665605 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/11250cf1-2849-42f6-8a9c-85d673b4b097-serving-cert\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.680425 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.685385 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/11250cf1-2849-42f6-8a9c-85d673b4b097-encryption-config\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.700752 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.701643 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-config\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.723600 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.723903 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-03-20 06:53:21.223868819 +0000 UTC m=+233.483180020 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.724804 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.725330 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.225306405 +0000 UTC m=+233.484617586 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.730988 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.733515 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-trusted-ca-bundle\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.740404 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.759367 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.779713 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.782098 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-audit-policies\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 
06:53:20.800538 5136 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.800736 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-config-volume podName:d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8 nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.300700161 +0000 UTC m=+233.560011352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-config-volume") pod "collect-profiles-29566485-n6252" (UID: "d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8") : failed to sync configmap cache: timed out waiting for the condition Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.800803 5136 secret.go:188] Couldn't get secret openshift-oauth-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.800918 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-etcd-client podName:882e7562-0811-4a27-9e79-cae539acc27d nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.300884857 +0000 UTC m=+233.560196038 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-etcd-client") pod "apiserver-7bbb656c7d-mq9hd" (UID: "882e7562-0811-4a27-9e79-cae539acc27d") : failed to sync secret cache: timed out waiting for the condition Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.800983 5136 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.801053 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-etcd-serving-ca podName:882e7562-0811-4a27-9e79-cae539acc27d nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.301022222 +0000 UTC m=+233.560333413 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-etcd-serving-ca") pod "apiserver-7bbb656c7d-mq9hd" (UID: "882e7562-0811-4a27-9e79-cae539acc27d") : failed to sync configmap cache: timed out waiting for the condition Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.802076 5136 secret.go:188] Couldn't get secret openshift-oauth-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.802183 5136 secret.go:188] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.802227 5136 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.802331 5136 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: failed to sync 
configmap cache: timed out waiting for the condition Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.802258 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-serving-cert podName:882e7562-0811-4a27-9e79-cae539acc27d nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.302224599 +0000 UTC m=+233.561535780 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-serving-cert") pod "apiserver-7bbb656c7d-mq9hd" (UID: "882e7562-0811-4a27-9e79-cae539acc27d") : failed to sync secret cache: timed out waiting for the condition Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.802436 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-encryption-config podName:882e7562-0811-4a27-9e79-cae539acc27d nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.302391574 +0000 UTC m=+233.561702765 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-encryption-config") pod "apiserver-7bbb656c7d-mq9hd" (UID: "882e7562-0811-4a27-9e79-cae539acc27d") : failed to sync secret cache: timed out waiting for the condition Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.802514 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-image-import-ca podName:11250cf1-2849-42f6-8a9c-85d673b4b097 nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.302458566 +0000 UTC m=+233.561769757 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-image-import-ca") pod "apiserver-76f77b778f-glmlt" (UID: "11250cf1-2849-42f6-8a9c-85d673b4b097") : failed to sync configmap cache: timed out waiting for the condition Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.802554 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-trusted-ca-bundle podName:882e7562-0811-4a27-9e79-cae539acc27d nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.302534789 +0000 UTC m=+233.561845970 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-trusted-ca-bundle") pod "apiserver-7bbb656c7d-mq9hd" (UID: "882e7562-0811-4a27-9e79-cae539acc27d") : failed to sync configmap cache: timed out waiting for the condition Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.803360 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.821496 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.826644 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.826970 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.326939668 +0000 UTC m=+233.586250859 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.827148 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.827699 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.327678731 +0000 UTC m=+233.586989912 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.839469 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.859803 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.880312 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.899749 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.919988 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.928960 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.929140 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.429109648 +0000 UTC m=+233.688420829 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.929468 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.930052 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.430035818 +0000 UTC m=+233.689346979 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.940513 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.960327 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.979568 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.000169 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.020182 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.030609 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.030890 5136 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.530857806 +0000 UTC m=+233.790168997 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.031593 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.032081 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.532064763 +0000 UTC m=+233.791375954 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.039458 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.059433 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.079888 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.100299 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.119614 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.132522 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.132756 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.632715895 +0000 UTC m=+233.892027086 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.133227 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.133585 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.633571192 +0000 UTC m=+233.892882343 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.140339 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.160943 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.179958 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.200802 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.220037 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.234423 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.234649 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.734618137 +0000 UTC m=+233.993929328 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.235369 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.235868 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.735842115 +0000 UTC m=+233.995153306 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.240438 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.259995 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.280094 5136 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.301013 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.319803 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.336447 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.336601 5136 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.836562839 +0000 UTC m=+234.095874030 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.336669 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.336787 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-image-import-ca\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.336899 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-encryption-config\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 
06:53:21.337090 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.337150 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.337237 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-config-volume\") pod \"collect-profiles-29566485-n6252\" (UID: \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.337335 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-serving-cert\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.337489 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-etcd-client\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.337620 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.837596312 +0000 UTC m=+234.096907503 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.337785 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.338252 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-image-import-ca\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.338486 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.339060 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-config-volume\") pod \"collect-profiles-29566485-n6252\" (UID: \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.339661 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.343426 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-encryption-config\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.346488 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-serving-cert\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.348571 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-etcd-client\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.361411 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" 
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.380616 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.420243 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.438768 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.438967 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.938940907 +0000 UTC m=+234.198252088 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.439427 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.439729 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.939719341 +0000 UTC m=+234.199030492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.440025 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.460189 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.480443 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.499498 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.521139 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.539566 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.539934 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.540182 5136 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:22.040147127 +0000 UTC m=+234.299458328 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.540572 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.540965 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:22.040951661 +0000 UTC m=+234.300262872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.579784 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.598149 5136 request.go:700] Waited for 1.927995265s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&limit=500&resourceVersion=0 Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.600739 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.620377 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.642221 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.642621 5136 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:22.142531813 +0000 UTC m=+234.401843014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.643480 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.644155 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:22.144098962 +0000 UTC m=+234.403410143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.669185 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82z9x\" (UniqueName: \"kubernetes.io/projected/a3ca072d-707e-4c94-9b3a-81eabc72f840-kube-api-access-82z9x\") pod \"machine-api-operator-5694c8668f-vbjpm\" (UID: \"a3ca072d-707e-4c94-9b3a-81eabc72f840\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.687628 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmrhp\" (UniqueName: \"kubernetes.io/projected/e49af127-1dfc-4213-b763-a4283104f38f-kube-api-access-cmrhp\") pod \"cluster-samples-operator-665b6dd947-gfft2\" (UID: \"e49af127-1dfc-4213-b763-a4283104f38f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.708117 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcvzw\" (UniqueName: \"kubernetes.io/projected/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-kube-api-access-pcvzw\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.729972 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr7qj\" (UniqueName: \"kubernetes.io/projected/5491b0c6-578a-430a-82db-943e9c7778e5-kube-api-access-dr7qj\") pod 
\"downloads-7954f5f757-djxmj\" (UID: \"5491b0c6-578a-430a-82db-943e9c7778e5\") " pod="openshift-console/downloads-7954f5f757-djxmj" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.741059 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwfcd\" (UniqueName: \"kubernetes.io/projected/1a566282-9a27-4172-b5ba-574e0179cfc4-kube-api-access-zwfcd\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.744321 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-djxmj" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.745453 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.746592 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:22.246561992 +0000 UTC m=+234.505873193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.759082 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.766437 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.766634 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltwh5\" (UniqueName: \"kubernetes.io/projected/2261aa95-8cc5-4fe7-9515-a065c381aa5b-kube-api-access-ltwh5\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.775353 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.784980 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kgmj\" (UniqueName: \"kubernetes.io/projected/5f83cf2a-8b13-4536-bda7-b21bea494966-kube-api-access-7kgmj\") pod \"openshift-apiserver-operator-796bbdcf4f-2mt79\" (UID: \"5f83cf2a-8b13-4536-bda7-b21bea494966\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.804390 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4vhj\" (UniqueName: \"kubernetes.io/projected/de5bcbec-966a-4934-b21a-a459ab3eb7bc-kube-api-access-h4vhj\") pod \"openshift-config-operator-7777fb866f-274sn\" (UID: \"de5bcbec-966a-4934-b21a-a459ab3eb7bc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.823981 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl98m\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-kube-api-access-kl98m\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.848301 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.848940 5136 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:22.348907327 +0000 UTC m=+234.608218558 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.850649 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-bound-sa-token\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.875941 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrbcl\" (UniqueName: \"kubernetes.io/projected/246c7ce4-1953-4a0c-9fed-cabc26f79f3f-kube-api-access-jrbcl\") pod \"cluster-image-registry-operator-dc59b4c8b-ckk2q\" (UID: \"246c7ce4-1953-4a0c-9fed-cabc26f79f3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.916041 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.916260 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.935429 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbj4h\" (UniqueName: \"kubernetes.io/projected/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-kube-api-access-gbj4h\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.938836 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.941638 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/246c7ce4-1953-4a0c-9fed-cabc26f79f3f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ckk2q\" (UID: \"246c7ce4-1953-4a0c-9fed-cabc26f79f3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.944490 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbz86\" (UniqueName: \"kubernetes.io/projected/e358e5eb-5d33-4510-a9fd-4dff0323f61a-kube-api-access-cbz86\") pod \"openshift-controller-manager-operator-756b6f6bc6-bt884\" (UID: \"e358e5eb-5d33-4510-a9fd-4dff0323f61a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.944583 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5rgk\" (UniqueName: \"kubernetes.io/projected/f0ab617f-fa16-4ff5-ad90-328e952d31fb-kube-api-access-k5rgk\") pod \"console-operator-58897d9998-7gjxt\" (UID: \"f0ab617f-fa16-4ff5-ad90-328e952d31fb\") " 
pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.949700 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.950720 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:22.450701556 +0000 UTC m=+234.710012717 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.959775 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwk4x\" (UniqueName: \"kubernetes.io/projected/62c9b093-fe6a-4484-844b-31bbb4f6b21a-kube-api-access-zwk4x\") pod \"dns-operator-744455d44c-87cfr\" (UID: \"62c9b093-fe6a-4484-844b-31bbb4f6b21a\") " pod="openshift-dns-operator/dns-operator-744455d44c-87cfr" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.974043 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.977049 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntblj\" (UniqueName: \"kubernetes.io/projected/c87c53d2-e35b-43e3-910e-852b635c46b8-kube-api-access-ntblj\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.997106 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.001225 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3289a6fb-5129-4ab6-b4b1-9d0b2d8af713-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7wgb\" (UID: \"3289a6fb-5129-4ab6-b4b1-9d0b2d8af713\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.011454 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-87cfr" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.018608 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhbx4\" (UniqueName: \"kubernetes.io/projected/ebaac2a5-0001-4d47-9d55-8ff138364356-kube-api-access-qhbx4\") pod \"olm-operator-6b444d44fb-2h2rd\" (UID: \"ebaac2a5-0001-4d47-9d55-8ff138364356\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.037039 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.042525 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4jmx\" (UniqueName: \"kubernetes.io/projected/11250cf1-2849-42f6-8a9c-85d673b4b097-kube-api-access-j4jmx\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.051696 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.053432 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:22 crc kubenswrapper[5136]: E0320 06:53:22.053880 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:22.553867947 +0000 UTC m=+234.813179098 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.060575 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhxhz\" (UniqueName: \"kubernetes.io/projected/882e7562-0811-4a27-9e79-cae539acc27d-kube-api-access-bhxhz\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.075237 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.075442 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sxkv\" (UniqueName: \"kubernetes.io/projected/edd610c6-14f6-4da1-83ab-b816dac3ed91-kube-api-access-6sxkv\") pod \"route-controller-manager-6576b87f9c-m4btr\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.115017 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af7427ab-0805-477b-b064-f4258cef3ace-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k5mxn\" (UID: \"af7427ab-0805-477b-b064-f4258cef3ace\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.132393 5136 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-bjqjp"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.147008 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stkb4\" (UniqueName: \"kubernetes.io/projected/d10c92de-8478-436b-bdc0-0fe231faf35c-kube-api-access-stkb4\") pod \"machine-config-operator-74547568cd-zmz56\" (UID: \"d10c92de-8478-436b-bdc0-0fe231faf35c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.148624 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.148798 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.150225 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.155262 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:22 crc kubenswrapper[5136]: E0320 06:53:22.155729 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-20 06:53:22.655708647 +0000 UTC m=+234.915019798 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.167928 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.173835 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mdvw\" (UniqueName: \"kubernetes.io/projected/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-kube-api-access-4mdvw\") pod \"collect-profiles-29566485-n6252\" (UID: \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.174249 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.176327 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gflmz\" (UniqueName: \"kubernetes.io/projected/a437188c-af0a-415d-9b0e-9e5b66f41ea3-kube-api-access-gflmz\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jcpg\" (UID: \"a437188c-af0a-415d-9b0e-9e5b66f41ea3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.181562 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.187381 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4krs5\" (UniqueName: \"kubernetes.io/projected/22cf75b6-1525-436a-9999-96f3b2393a03-kube-api-access-4krs5\") pod \"machine-config-controller-84d6567774-x6mlj\" (UID: \"22cf75b6-1525-436a-9999-96f3b2393a03\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.191249 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6490da1-20d4-4a12-bf24-50e24f3217dc-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tzhmj\" (UID: \"a6490da1-20d4-4a12-bf24-50e24f3217dc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.194148 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.220365 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxjlb\" (UniqueName: \"kubernetes.io/projected/ef0aa20b-fefd-4024-82fd-3b5e014ce1d7-kube-api-access-xxjlb\") pod \"machine-approver-56656f9798-4hr5m\" (UID: \"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.222540 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.224411 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.236899 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.242136 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.254150 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.257411 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:22 crc kubenswrapper[5136]: E0320 06:53:22.257740 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:22.757727102 +0000 UTC m=+235.017038253 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.261104 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vbjpm"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.261276 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.263744 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-djxmj"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.264873 5136 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.279587 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.297223 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" Mar 20 06:53:22 crc kubenswrapper[5136]: W0320 06:53:22.305288 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5491b0c6_578a_430a_82db_943e9c7778e5.slice/crio-3b5682a24840224c0c9b10b308c40afb67c185614903a5bc1d87967d741e3811 WatchSource:0}: Error finding container 3b5682a24840224c0c9b10b308c40afb67c185614903a5bc1d87967d741e3811: Status 404 returned error can't find the container with id 3b5682a24840224c0c9b10b308c40afb67c185614903a5bc1d87967d741e3811 Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.359717 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.359871 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9598b\" (UniqueName: \"kubernetes.io/projected/8b148c18-da73-4c17-85f7-454eebfe96f8-kube-api-access-9598b\") pod \"packageserver-d55dfcdfc-2q7k6\" (UID: \"8b148c18-da73-4c17-85f7-454eebfe96f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.359896 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b4583d32-b996-4de0-a7a9-3f13086640a2-stats-auth\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.359915 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/06133c52-727b-4ded-b835-f0f71093b193-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jvq8j\" (UID: \"06133c52-727b-4ded-b835-f0f71093b193\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jvq8j" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.359944 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8b148c18-da73-4c17-85f7-454eebfe96f8-apiservice-cert\") pod \"packageserver-d55dfcdfc-2q7k6\" (UID: \"8b148c18-da73-4c17-85f7-454eebfe96f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.359960 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.359997 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0541594-5780-4b00-a3c7-3b132a0cde9b-trusted-ca\") pod \"ingress-operator-5b745b69d9-g7v2v\" (UID: \"d0541594-5780-4b00-a3c7-3b132a0cde9b\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.360031 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42cc8288-8479-40e0-bb0b-4aad0244d57d-config\") pod \"service-ca-operator-777779d784-mmm42\" (UID: \"42cc8288-8479-40e0-bb0b-4aad0244d57d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.360080 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d0541594-5780-4b00-a3c7-3b132a0cde9b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-g7v2v\" (UID: \"d0541594-5780-4b00-a3c7-3b132a0cde9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" Mar 20 06:53:22 crc kubenswrapper[5136]: E0320 06:53:22.360391 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:22.860363718 +0000 UTC m=+235.119674959 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.360743 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpfx7\" (UniqueName: \"kubernetes.io/projected/f3343084-9f31-46fb-8514-b5391882700a-kube-api-access-tpfx7\") pod \"package-server-manager-789f6589d5-rqppz\" (UID: \"f3343084-9f31-46fb-8514-b5391882700a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.360833 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.360886 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2p4z\" (UniqueName: \"kubernetes.io/projected/dd410106-c7b7-4706-9b99-38e3597ee713-kube-api-access-z2p4z\") pod \"control-plane-machine-set-operator-78cbb6b69f-j6ffq\" (UID: \"dd410106-c7b7-4706-9b99-38e3597ee713\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.360911 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/289bd2af-981a-4da9-af4b-77ef6fd7e526-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mbfm4\" (UID: \"289bd2af-981a-4da9-af4b-77ef6fd7e526\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.360933 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkpsr\" (UniqueName: \"kubernetes.io/projected/06133c52-727b-4ded-b835-f0f71093b193-kube-api-access-lkpsr\") pod \"multus-admission-controller-857f4d67dd-jvq8j\" (UID: \"06133c52-727b-4ded-b835-f0f71093b193\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jvq8j" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.360949 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4583d32-b996-4de0-a7a9-3f13086640a2-service-ca-bundle\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.360969 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8b148c18-da73-4c17-85f7-454eebfe96f8-webhook-cert\") pod \"packageserver-d55dfcdfc-2q7k6\" (UID: \"8b148c18-da73-4c17-85f7-454eebfe96f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.360986 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gglvk\" (UniqueName: \"kubernetes.io/projected/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-kube-api-access-gglvk\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: 
\"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361025 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-serving-cert\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361066 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv4wr\" (UniqueName: \"kubernetes.io/projected/42cc8288-8479-40e0-bb0b-4aad0244d57d-kube-api-access-pv4wr\") pod \"service-ca-operator-777779d784-mmm42\" (UID: \"42cc8288-8479-40e0-bb0b-4aad0244d57d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361082 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4p96\" (UniqueName: \"kubernetes.io/projected/289bd2af-981a-4da9-af4b-77ef6fd7e526-kube-api-access-s4p96\") pod \"marketplace-operator-79b997595-mbfm4\" (UID: \"289bd2af-981a-4da9-af4b-77ef6fd7e526\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361097 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2djxf\" (UniqueName: \"kubernetes.io/projected/b4583d32-b996-4de0-a7a9-3f13086640a2-kube-api-access-2djxf\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361137 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-client-ca\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: E0320 06:53:22.361161 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:22.861141421 +0000 UTC m=+235.120452672 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361205 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8eccc00e-2821-4e84-9040-6aa1e58daf78-profile-collector-cert\") pod \"catalog-operator-68c6474976-xssjv\" (UID: \"8eccc00e-2821-4e84-9040-6aa1e58daf78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361240 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldwsl\" (UniqueName: \"kubernetes.io/projected/ffd3e201-0817-43ed-b8db-d7b526017b69-kube-api-access-ldwsl\") pod \"migrator-59844c95c7-6ckd4\" (UID: \"ffd3e201-0817-43ed-b8db-d7b526017b69\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6ckd4" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361272 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3343084-9f31-46fb-8514-b5391882700a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rqppz\" (UID: \"f3343084-9f31-46fb-8514-b5391882700a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361304 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd410106-c7b7-4706-9b99-38e3597ee713-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-j6ffq\" (UID: \"dd410106-c7b7-4706-9b99-38e3597ee713\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361328 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d0541594-5780-4b00-a3c7-3b132a0cde9b-metrics-tls\") pod \"ingress-operator-5b745b69d9-g7v2v\" (UID: \"d0541594-5780-4b00-a3c7-3b132a0cde9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361390 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8eccc00e-2821-4e84-9040-6aa1e58daf78-srv-cert\") pod \"catalog-operator-68c6474976-xssjv\" (UID: \"8eccc00e-2821-4e84-9040-6aa1e58daf78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361602 
5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjzr9\" (UniqueName: \"kubernetes.io/projected/760c854a-7b9d-4582-9bcc-faf077008e0f-kube-api-access-pjzr9\") pod \"auto-csr-approver-29566492-9gbqz\" (UID: \"760c854a-7b9d-4582-9bcc-faf077008e0f\") " pod="openshift-infra/auto-csr-approver-29566492-9gbqz" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361701 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1939ab6e-c688-43a4-bca6-7cc00e950962-signing-key\") pod \"service-ca-9c57cc56f-mtj5k\" (UID: \"1939ab6e-c688-43a4-bca6-7cc00e950962\") " pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361862 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b4583d32-b996-4de0-a7a9-3f13086640a2-default-certificate\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.362008 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7hr7\" (UniqueName: \"kubernetes.io/projected/1939ab6e-c688-43a4-bca6-7cc00e950962-kube-api-access-x7hr7\") pod \"service-ca-9c57cc56f-mtj5k\" (UID: \"1939ab6e-c688-43a4-bca6-7cc00e950962\") " pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.362052 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvbht\" (UniqueName: \"kubernetes.io/projected/d0541594-5780-4b00-a3c7-3b132a0cde9b-kube-api-access-mvbht\") pod \"ingress-operator-5b745b69d9-g7v2v\" (UID: 
\"d0541594-5780-4b00-a3c7-3b132a0cde9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.362079 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/289bd2af-981a-4da9-af4b-77ef6fd7e526-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mbfm4\" (UID: \"289bd2af-981a-4da9-af4b-77ef6fd7e526\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.362105 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1939ab6e-c688-43a4-bca6-7cc00e950962-signing-cabundle\") pod \"service-ca-9c57cc56f-mtj5k\" (UID: \"1939ab6e-c688-43a4-bca6-7cc00e950962\") " pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.364595 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-config\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.364658 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42cc8288-8479-40e0-bb0b-4aad0244d57d-serving-cert\") pod \"service-ca-operator-777779d784-mmm42\" (UID: \"42cc8288-8479-40e0-bb0b-4aad0244d57d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.364697 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8b148c18-da73-4c17-85f7-454eebfe96f8-tmpfs\") pod \"packageserver-d55dfcdfc-2q7k6\" (UID: \"8b148c18-da73-4c17-85f7-454eebfe96f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.364748 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzn76\" (UniqueName: \"kubernetes.io/projected/8eccc00e-2821-4e84-9040-6aa1e58daf78-kube-api-access-bzn76\") pod \"catalog-operator-68c6474976-xssjv\" (UID: \"8eccc00e-2821-4e84-9040-6aa1e58daf78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.364769 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b4583d32-b996-4de0-a7a9-3f13086640a2-metrics-certs\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.437943 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.451571 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rmdpp"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.451713 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.465331 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.465482 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/06133c52-727b-4ded-b835-f0f71093b193-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jvq8j\" (UID: \"06133c52-727b-4ded-b835-f0f71093b193\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jvq8j" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.465505 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: E0320 06:53:22.465560 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:22.965539552 +0000 UTC m=+235.224850703 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.465606 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8b148c18-da73-4c17-85f7-454eebfe96f8-apiservice-cert\") pod \"packageserver-d55dfcdfc-2q7k6\" (UID: \"8b148c18-da73-4c17-85f7-454eebfe96f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.465694 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0541594-5780-4b00-a3c7-3b132a0cde9b-trusted-ca\") pod \"ingress-operator-5b745b69d9-g7v2v\" (UID: \"d0541594-5780-4b00-a3c7-3b132a0cde9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.465714 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdmn8\" (UniqueName: \"kubernetes.io/projected/f179a691-95b5-4d8a-9f4f-48267b8587a7-kube-api-access-zdmn8\") pod \"machine-config-server-8mwfm\" (UID: \"f179a691-95b5-4d8a-9f4f-48267b8587a7\") " pod="openshift-machine-config-operator/machine-config-server-8mwfm" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.465807 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdkkm\" (UniqueName: 
\"kubernetes.io/projected/e31fd981-67e5-461a-b43c-89a38265e7ed-kube-api-access-jdkkm\") pod \"dns-default-wnlnd\" (UID: \"e31fd981-67e5-461a-b43c-89a38265e7ed\") " pod="openshift-dns/dns-default-wnlnd" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.465872 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42cc8288-8479-40e0-bb0b-4aad0244d57d-config\") pod \"service-ca-operator-777779d784-mmm42\" (UID: \"42cc8288-8479-40e0-bb0b-4aad0244d57d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.465890 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e31fd981-67e5-461a-b43c-89a38265e7ed-config-volume\") pod \"dns-default-wnlnd\" (UID: \"e31fd981-67e5-461a-b43c-89a38265e7ed\") " pod="openshift-dns/dns-default-wnlnd" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.465952 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d0541594-5780-4b00-a3c7-3b132a0cde9b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-g7v2v\" (UID: \"d0541594-5780-4b00-a3c7-3b132a0cde9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.465975 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpfx7\" (UniqueName: \"kubernetes.io/projected/f3343084-9f31-46fb-8514-b5391882700a-kube-api-access-tpfx7\") pod \"package-server-manager-789f6589d5-rqppz\" (UID: \"f3343084-9f31-46fb-8514-b5391882700a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466009 5136 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466084 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2p4z\" (UniqueName: \"kubernetes.io/projected/dd410106-c7b7-4706-9b99-38e3597ee713-kube-api-access-z2p4z\") pod \"control-plane-machine-set-operator-78cbb6b69f-j6ffq\" (UID: \"dd410106-c7b7-4706-9b99-38e3597ee713\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466116 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/289bd2af-981a-4da9-af4b-77ef6fd7e526-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mbfm4\" (UID: \"289bd2af-981a-4da9-af4b-77ef6fd7e526\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466137 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkpsr\" (UniqueName: \"kubernetes.io/projected/06133c52-727b-4ded-b835-f0f71093b193-kube-api-access-lkpsr\") pod \"multus-admission-controller-857f4d67dd-jvq8j\" (UID: \"06133c52-727b-4ded-b835-f0f71093b193\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jvq8j" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466153 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4583d32-b996-4de0-a7a9-3f13086640a2-service-ca-bundle\") pod \"router-default-5444994796-x4wkf\" (UID: 
\"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466169 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8b148c18-da73-4c17-85f7-454eebfe96f8-webhook-cert\") pod \"packageserver-d55dfcdfc-2q7k6\" (UID: \"8b148c18-da73-4c17-85f7-454eebfe96f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466199 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gglvk\" (UniqueName: \"kubernetes.io/projected/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-kube-api-access-gglvk\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466226 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f179a691-95b5-4d8a-9f4f-48267b8587a7-node-bootstrap-token\") pod \"machine-config-server-8mwfm\" (UID: \"f179a691-95b5-4d8a-9f4f-48267b8587a7\") " pod="openshift-machine-config-operator/machine-config-server-8mwfm" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466247 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e31fd981-67e5-461a-b43c-89a38265e7ed-metrics-tls\") pod \"dns-default-wnlnd\" (UID: \"e31fd981-67e5-461a-b43c-89a38265e7ed\") " pod="openshift-dns/dns-default-wnlnd" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466279 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-serving-cert\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466318 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv4wr\" (UniqueName: \"kubernetes.io/projected/42cc8288-8479-40e0-bb0b-4aad0244d57d-kube-api-access-pv4wr\") pod \"service-ca-operator-777779d784-mmm42\" (UID: \"42cc8288-8479-40e0-bb0b-4aad0244d57d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466334 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4p96\" (UniqueName: \"kubernetes.io/projected/289bd2af-981a-4da9-af4b-77ef6fd7e526-kube-api-access-s4p96\") pod \"marketplace-operator-79b997595-mbfm4\" (UID: \"289bd2af-981a-4da9-af4b-77ef6fd7e526\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466349 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2djxf\" (UniqueName: \"kubernetes.io/projected/b4583d32-b996-4de0-a7a9-3f13086640a2-kube-api-access-2djxf\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466375 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f179a691-95b5-4d8a-9f4f-48267b8587a7-certs\") pod \"machine-config-server-8mwfm\" (UID: \"f179a691-95b5-4d8a-9f4f-48267b8587a7\") " pod="openshift-machine-config-operator/machine-config-server-8mwfm" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466420 
5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-client-ca\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466437 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8eccc00e-2821-4e84-9040-6aa1e58daf78-profile-collector-cert\") pod \"catalog-operator-68c6474976-xssjv\" (UID: \"8eccc00e-2821-4e84-9040-6aa1e58daf78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466474 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldwsl\" (UniqueName: \"kubernetes.io/projected/ffd3e201-0817-43ed-b8db-d7b526017b69-kube-api-access-ldwsl\") pod \"migrator-59844c95c7-6ckd4\" (UID: \"ffd3e201-0817-43ed-b8db-d7b526017b69\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6ckd4" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466522 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3343084-9f31-46fb-8514-b5391882700a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rqppz\" (UID: \"f3343084-9f31-46fb-8514-b5391882700a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466559 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd410106-c7b7-4706-9b99-38e3597ee713-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-j6ffq\" (UID: \"dd410106-c7b7-4706-9b99-38e3597ee713\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466575 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d0541594-5780-4b00-a3c7-3b132a0cde9b-metrics-tls\") pod \"ingress-operator-5b745b69d9-g7v2v\" (UID: \"d0541594-5780-4b00-a3c7-3b132a0cde9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466606 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8eccc00e-2821-4e84-9040-6aa1e58daf78-srv-cert\") pod \"catalog-operator-68c6474976-xssjv\" (UID: \"8eccc00e-2821-4e84-9040-6aa1e58daf78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466651 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjzr9\" (UniqueName: \"kubernetes.io/projected/760c854a-7b9d-4582-9bcc-faf077008e0f-kube-api-access-pjzr9\") pod \"auto-csr-approver-29566492-9gbqz\" (UID: \"760c854a-7b9d-4582-9bcc-faf077008e0f\") " pod="openshift-infra/auto-csr-approver-29566492-9gbqz" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466689 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1939ab6e-c688-43a4-bca6-7cc00e950962-signing-key\") pod \"service-ca-9c57cc56f-mtj5k\" (UID: \"1939ab6e-c688-43a4-bca6-7cc00e950962\") " pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466705 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/22df33b0-12a4-40ed-b739-85240eb615e7-cert\") pod \"ingress-canary-vwn87\" (UID: \"22df33b0-12a4-40ed-b739-85240eb615e7\") " pod="openshift-ingress-canary/ingress-canary-vwn87" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466735 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b4583d32-b996-4de0-a7a9-3f13086640a2-default-certificate\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466779 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsfpw\" (UniqueName: \"kubernetes.io/projected/22df33b0-12a4-40ed-b739-85240eb615e7-kube-api-access-nsfpw\") pod \"ingress-canary-vwn87\" (UID: \"22df33b0-12a4-40ed-b739-85240eb615e7\") " pod="openshift-ingress-canary/ingress-canary-vwn87" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466829 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7hr7\" (UniqueName: \"kubernetes.io/projected/1939ab6e-c688-43a4-bca6-7cc00e950962-kube-api-access-x7hr7\") pod \"service-ca-9c57cc56f-mtj5k\" (UID: \"1939ab6e-c688-43a4-bca6-7cc00e950962\") " pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466846 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvbht\" (UniqueName: \"kubernetes.io/projected/d0541594-5780-4b00-a3c7-3b132a0cde9b-kube-api-access-mvbht\") pod \"ingress-operator-5b745b69d9-g7v2v\" (UID: \"d0541594-5780-4b00-a3c7-3b132a0cde9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466864 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/289bd2af-981a-4da9-af4b-77ef6fd7e526-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mbfm4\" (UID: \"289bd2af-981a-4da9-af4b-77ef6fd7e526\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466918 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1939ab6e-c688-43a4-bca6-7cc00e950962-signing-cabundle\") pod \"service-ca-9c57cc56f-mtj5k\" (UID: \"1939ab6e-c688-43a4-bca6-7cc00e950962\") " pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466952 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-config\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466987 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42cc8288-8479-40e0-bb0b-4aad0244d57d-serving-cert\") pod \"service-ca-operator-777779d784-mmm42\" (UID: \"42cc8288-8479-40e0-bb0b-4aad0244d57d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.467044 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8b148c18-da73-4c17-85f7-454eebfe96f8-tmpfs\") pod \"packageserver-d55dfcdfc-2q7k6\" (UID: \"8b148c18-da73-4c17-85f7-454eebfe96f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 
06:53:22.467070 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzn76\" (UniqueName: \"kubernetes.io/projected/8eccc00e-2821-4e84-9040-6aa1e58daf78-kube-api-access-bzn76\") pod \"catalog-operator-68c6474976-xssjv\" (UID: \"8eccc00e-2821-4e84-9040-6aa1e58daf78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.467085 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b4583d32-b996-4de0-a7a9-3f13086640a2-metrics-certs\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.467170 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9598b\" (UniqueName: \"kubernetes.io/projected/8b148c18-da73-4c17-85f7-454eebfe96f8-kube-api-access-9598b\") pod \"packageserver-d55dfcdfc-2q7k6\" (UID: \"8b148c18-da73-4c17-85f7-454eebfe96f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.467186 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b4583d32-b996-4de0-a7a9-3f13086640a2-stats-auth\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.468315 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.469889 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4583d32-b996-4de0-a7a9-3f13086640a2-service-ca-bundle\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.470188 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8b148c18-da73-4c17-85f7-454eebfe96f8-tmpfs\") pod \"packageserver-d55dfcdfc-2q7k6\" (UID: \"8b148c18-da73-4c17-85f7-454eebfe96f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.470791 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0541594-5780-4b00-a3c7-3b132a0cde9b-trusted-ca\") pod \"ingress-operator-5b745b69d9-g7v2v\" (UID: \"d0541594-5780-4b00-a3c7-3b132a0cde9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.470861 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/289bd2af-981a-4da9-af4b-77ef6fd7e526-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mbfm4\" (UID: \"289bd2af-981a-4da9-af4b-77ef6fd7e526\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:53:22 crc kubenswrapper[5136]: E0320 06:53:22.471421 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-20 06:53:22.971406507 +0000 UTC m=+235.230717658 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.472393 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42cc8288-8479-40e0-bb0b-4aad0244d57d-config\") pod \"service-ca-operator-777779d784-mmm42\" (UID: \"42cc8288-8479-40e0-bb0b-4aad0244d57d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.472472 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-client-ca\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.473297 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-config\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.473420 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/b4583d32-b996-4de0-a7a9-3f13086640a2-stats-auth\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.475876 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1939ab6e-c688-43a4-bca6-7cc00e950962-signing-cabundle\") pod \"service-ca-9c57cc56f-mtj5k\" (UID: \"1939ab6e-c688-43a4-bca6-7cc00e950962\") " pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.475919 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1939ab6e-c688-43a4-bca6-7cc00e950962-signing-key\") pod \"service-ca-9c57cc56f-mtj5k\" (UID: \"1939ab6e-c688-43a4-bca6-7cc00e950962\") " pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.476360 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/06133c52-727b-4ded-b835-f0f71093b193-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jvq8j\" (UID: \"06133c52-727b-4ded-b835-f0f71093b193\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jvq8j" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.476900 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-serving-cert\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.477767 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f3343084-9f31-46fb-8514-b5391882700a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rqppz\" (UID: \"f3343084-9f31-46fb-8514-b5391882700a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.478515 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8b148c18-da73-4c17-85f7-454eebfe96f8-apiservice-cert\") pod \"packageserver-d55dfcdfc-2q7k6\" (UID: \"8b148c18-da73-4c17-85f7-454eebfe96f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.486219 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/289bd2af-981a-4da9-af4b-77ef6fd7e526-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mbfm4\" (UID: \"289bd2af-981a-4da9-af4b-77ef6fd7e526\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.486303 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b4583d32-b996-4de0-a7a9-3f13086640a2-metrics-certs\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.488838 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8b148c18-da73-4c17-85f7-454eebfe96f8-webhook-cert\") pod \"packageserver-d55dfcdfc-2q7k6\" (UID: \"8b148c18-da73-4c17-85f7-454eebfe96f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.489345 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42cc8288-8479-40e0-bb0b-4aad0244d57d-serving-cert\") pod \"service-ca-operator-777779d784-mmm42\" (UID: \"42cc8288-8479-40e0-bb0b-4aad0244d57d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.490801 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d0541594-5780-4b00-a3c7-3b132a0cde9b-metrics-tls\") pod \"ingress-operator-5b745b69d9-g7v2v\" (UID: \"d0541594-5780-4b00-a3c7-3b132a0cde9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.491118 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd410106-c7b7-4706-9b99-38e3597ee713-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-j6ffq\" (UID: \"dd410106-c7b7-4706-9b99-38e3597ee713\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.494673 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8eccc00e-2821-4e84-9040-6aa1e58daf78-srv-cert\") pod \"catalog-operator-68c6474976-xssjv\" (UID: \"8eccc00e-2821-4e84-9040-6aa1e58daf78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.494751 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b4583d32-b996-4de0-a7a9-3f13086640a2-default-certificate\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " 
pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.496367 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8eccc00e-2821-4e84-9040-6aa1e58daf78-profile-collector-cert\") pod \"catalog-operator-68c6474976-xssjv\" (UID: \"8eccc00e-2821-4e84-9040-6aa1e58daf78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.497403 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" event={"ID":"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7","Type":"ContainerStarted","Data":"6f972abf3e722d5aaab333938b3513d43785c466ec4ea4bb2bc0de16b861c464"} Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.505936 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-274sn"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.506039 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2" event={"ID":"e49af127-1dfc-4213-b763-a4283104f38f","Type":"ContainerStarted","Data":"24d7d61298f331ce5005b0f443fb1e3b95ee217bcec80d7c1f66854a007d49f9"} Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.506874 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.508451 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-djxmj" event={"ID":"5491b0c6-578a-430a-82db-943e9c7778e5","Type":"ContainerStarted","Data":"a976903997ec15076b987fc47679fc1f389241cb6da3d09436a4536cc6ee6b64"} Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.508485 5136 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-djxmj" event={"ID":"5491b0c6-578a-430a-82db-943e9c7778e5","Type":"ContainerStarted","Data":"3b5682a24840224c0c9b10b308c40afb67c185614903a5bc1d87967d741e3811"} Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.508797 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-djxmj" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.510902 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bjqjp" event={"ID":"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448","Type":"ContainerStarted","Data":"1705fa19bd8f5aa96bc704e7afa6e708e4641f98ce0af56ebad4536addf3960e"} Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.511955 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" event={"ID":"a3ca072d-707e-4c94-9b3a-81eabc72f840","Type":"ContainerStarted","Data":"6bde56da44b70546553466d69cdfb895655dbb9c29ecdea02349c9e84d6cfc3f"} Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.514008 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7gjxt"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.515373 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjzr9\" (UniqueName: \"kubernetes.io/projected/760c854a-7b9d-4582-9bcc-faf077008e0f-kube-api-access-pjzr9\") pod \"auto-csr-approver-29566492-9gbqz\" (UID: \"760c854a-7b9d-4582-9bcc-faf077008e0f\") " pod="openshift-infra/auto-csr-approver-29566492-9gbqz" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.531959 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9598b\" (UniqueName: \"kubernetes.io/projected/8b148c18-da73-4c17-85f7-454eebfe96f8-kube-api-access-9598b\") pod \"packageserver-d55dfcdfc-2q7k6\" (UID: 
\"8b148c18-da73-4c17-85f7-454eebfe96f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.552558 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv4wr\" (UniqueName: \"kubernetes.io/projected/42cc8288-8479-40e0-bb0b-4aad0244d57d-kube-api-access-pv4wr\") pod \"service-ca-operator-777779d784-mmm42\" (UID: \"42cc8288-8479-40e0-bb0b-4aad0244d57d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.555115 5136 patch_prober.go:28] interesting pod/downloads-7954f5f757-djxmj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.555159 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-djxmj" podUID="5491b0c6-578a-430a-82db-943e9c7778e5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 20 06:53:22 crc kubenswrapper[5136]: W0320 06:53:22.563218 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2261aa95_8cc5_4fe7_9515_a065c381aa5b.slice/crio-654898de6202000e3542c5ff7a29d4dc1b9ece9e166152c2aa19f5d10b9b55eb WatchSource:0}: Error finding container 654898de6202000e3542c5ff7a29d4dc1b9ece9e166152c2aa19f5d10b9b55eb: Status 404 returned error can't find the container with id 654898de6202000e3542c5ff7a29d4dc1b9ece9e166152c2aa19f5d10b9b55eb Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.567504 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.567679 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsfpw\" (UniqueName: \"kubernetes.io/projected/22df33b0-12a4-40ed-b739-85240eb615e7-kube-api-access-nsfpw\") pod \"ingress-canary-vwn87\" (UID: \"22df33b0-12a4-40ed-b739-85240eb615e7\") " pod="openshift-ingress-canary/ingress-canary-vwn87" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.567744 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdmn8\" (UniqueName: \"kubernetes.io/projected/f179a691-95b5-4d8a-9f4f-48267b8587a7-kube-api-access-zdmn8\") pod \"machine-config-server-8mwfm\" (UID: \"f179a691-95b5-4d8a-9f4f-48267b8587a7\") " pod="openshift-machine-config-operator/machine-config-server-8mwfm" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.567771 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdkkm\" (UniqueName: \"kubernetes.io/projected/e31fd981-67e5-461a-b43c-89a38265e7ed-kube-api-access-jdkkm\") pod \"dns-default-wnlnd\" (UID: \"e31fd981-67e5-461a-b43c-89a38265e7ed\") " pod="openshift-dns/dns-default-wnlnd" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.567789 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e31fd981-67e5-461a-b43c-89a38265e7ed-config-volume\") pod \"dns-default-wnlnd\" (UID: \"e31fd981-67e5-461a-b43c-89a38265e7ed\") " pod="openshift-dns/dns-default-wnlnd" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.567882 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/f179a691-95b5-4d8a-9f4f-48267b8587a7-node-bootstrap-token\") pod \"machine-config-server-8mwfm\" (UID: \"f179a691-95b5-4d8a-9f4f-48267b8587a7\") " pod="openshift-machine-config-operator/machine-config-server-8mwfm" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.567897 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e31fd981-67e5-461a-b43c-89a38265e7ed-metrics-tls\") pod \"dns-default-wnlnd\" (UID: \"e31fd981-67e5-461a-b43c-89a38265e7ed\") " pod="openshift-dns/dns-default-wnlnd" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.568191 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f179a691-95b5-4d8a-9f4f-48267b8587a7-certs\") pod \"machine-config-server-8mwfm\" (UID: \"f179a691-95b5-4d8a-9f4f-48267b8587a7\") " pod="openshift-machine-config-operator/machine-config-server-8mwfm" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.568238 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/22df33b0-12a4-40ed-b739-85240eb615e7-cert\") pod \"ingress-canary-vwn87\" (UID: \"22df33b0-12a4-40ed-b739-85240eb615e7\") " pod="openshift-ingress-canary/ingress-canary-vwn87" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.568585 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e31fd981-67e5-461a-b43c-89a38265e7ed-config-volume\") pod \"dns-default-wnlnd\" (UID: \"e31fd981-67e5-461a-b43c-89a38265e7ed\") " pod="openshift-dns/dns-default-wnlnd" Mar 20 06:53:22 crc kubenswrapper[5136]: E0320 06:53:22.568760 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-20 06:53:23.068732124 +0000 UTC m=+235.328043295 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.572676 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f179a691-95b5-4d8a-9f4f-48267b8587a7-certs\") pod \"machine-config-server-8mwfm\" (UID: \"f179a691-95b5-4d8a-9f4f-48267b8587a7\") " pod="openshift-machine-config-operator/machine-config-server-8mwfm" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.575185 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e31fd981-67e5-461a-b43c-89a38265e7ed-metrics-tls\") pod \"dns-default-wnlnd\" (UID: \"e31fd981-67e5-461a-b43c-89a38265e7ed\") " pod="openshift-dns/dns-default-wnlnd" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.577239 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/22df33b0-12a4-40ed-b739-85240eb615e7-cert\") pod \"ingress-canary-vwn87\" (UID: \"22df33b0-12a4-40ed-b739-85240eb615e7\") " pod="openshift-ingress-canary/ingress-canary-vwn87" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.577763 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f179a691-95b5-4d8a-9f4f-48267b8587a7-node-bootstrap-token\") pod \"machine-config-server-8mwfm\" (UID: \"f179a691-95b5-4d8a-9f4f-48267b8587a7\") " 
pod="openshift-machine-config-operator/machine-config-server-8mwfm" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.599339 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2p4z\" (UniqueName: \"kubernetes.io/projected/dd410106-c7b7-4706-9b99-38e3597ee713-kube-api-access-z2p4z\") pod \"control-plane-machine-set-operator-78cbb6b69f-j6ffq\" (UID: \"dd410106-c7b7-4706-9b99-38e3597ee713\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.612342 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d0541594-5780-4b00-a3c7-3b132a0cde9b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-g7v2v\" (UID: \"d0541594-5780-4b00-a3c7-3b132a0cde9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.617760 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4p96\" (UniqueName: \"kubernetes.io/projected/289bd2af-981a-4da9-af4b-77ef6fd7e526-kube-api-access-s4p96\") pod \"marketplace-operator-79b997595-mbfm4\" (UID: \"289bd2af-981a-4da9-af4b-77ef6fd7e526\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.646385 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566492-9gbqz" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.647264 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkpsr\" (UniqueName: \"kubernetes.io/projected/06133c52-727b-4ded-b835-f0f71093b193-kube-api-access-lkpsr\") pod \"multus-admission-controller-857f4d67dd-jvq8j\" (UID: \"06133c52-727b-4ded-b835-f0f71093b193\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jvq8j" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.651565 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.666212 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpfx7\" (UniqueName: \"kubernetes.io/projected/f3343084-9f31-46fb-8514-b5391882700a-kube-api-access-tpfx7\") pod \"package-server-manager-789f6589d5-rqppz\" (UID: \"f3343084-9f31-46fb-8514-b5391882700a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.670757 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:22 crc kubenswrapper[5136]: E0320 06:53:22.671249 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:23.171234015 +0000 UTC m=+235.430545166 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.673439 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzn76\" (UniqueName: \"kubernetes.io/projected/8eccc00e-2821-4e84-9040-6aa1e58daf78-kube-api-access-bzn76\") pod \"catalog-operator-68c6474976-xssjv\" (UID: \"8eccc00e-2821-4e84-9040-6aa1e58daf78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.684870 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.699661 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2djxf\" (UniqueName: \"kubernetes.io/projected/b4583d32-b996-4de0-a7a9-3f13086640a2-kube-api-access-2djxf\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.715079 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldwsl\" (UniqueName: \"kubernetes.io/projected/ffd3e201-0817-43ed-b8db-d7b526017b69-kube-api-access-ldwsl\") pod \"migrator-59844c95c7-6ckd4\" (UID: \"ffd3e201-0817-43ed-b8db-d7b526017b69\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6ckd4" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.749468 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gglvk\" (UniqueName: \"kubernetes.io/projected/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-kube-api-access-gglvk\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.766999 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6ckd4" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.767621 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvbht\" (UniqueName: \"kubernetes.io/projected/d0541594-5780-4b00-a3c7-3b132a0cde9b-kube-api-access-mvbht\") pod \"ingress-operator-5b745b69d9-g7v2v\" (UID: \"d0541594-5780-4b00-a3c7-3b132a0cde9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.772359 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:22 crc kubenswrapper[5136]: E0320 06:53:22.772765 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:23.272750584 +0000 UTC m=+235.532061735 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.779831 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7hr7\" (UniqueName: \"kubernetes.io/projected/1939ab6e-c688-43a4-bca6-7cc00e950962-kube-api-access-x7hr7\") pod \"service-ca-9c57cc56f-mtj5k\" (UID: \"1939ab6e-c688-43a4-bca6-7cc00e950962\") " pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.788301 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.805572 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.825258 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6s42p"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.827438 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-87cfr"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.829748 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsfpw\" (UniqueName: \"kubernetes.io/projected/22df33b0-12a4-40ed-b739-85240eb615e7-kube-api-access-nsfpw\") pod \"ingress-canary-vwn87\" (UID: \"22df33b0-12a4-40ed-b739-85240eb615e7\") " pod="openshift-ingress-canary/ingress-canary-vwn87" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.843666 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdkkm\" (UniqueName: \"kubernetes.io/projected/e31fd981-67e5-461a-b43c-89a38265e7ed-kube-api-access-jdkkm\") pod \"dns-default-wnlnd\" (UID: \"e31fd981-67e5-461a-b43c-89a38265e7ed\") " pod="openshift-dns/dns-default-wnlnd" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.846312 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.864779 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.888135 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdmn8\" (UniqueName: \"kubernetes.io/projected/f179a691-95b5-4d8a-9f4f-48267b8587a7-kube-api-access-zdmn8\") pod \"machine-config-server-8mwfm\" (UID: \"f179a691-95b5-4d8a-9f4f-48267b8587a7\") " pod="openshift-machine-config-operator/machine-config-server-8mwfm" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.890553 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.891587 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:22 crc kubenswrapper[5136]: E0320 06:53:22.892026 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:23.392013164 +0000 UTC m=+235.651324315 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.892865 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-jvq8j" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.899465 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.905223 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.927665 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.954157 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.959036 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.962047 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vbv27"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.968714 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pzwlk"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.971164 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.971233 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.976890 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-glmlt"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.984463 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.990526 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.992293 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.992442 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wnlnd" Mar 20 06:53:22 crc kubenswrapper[5136]: E0320 06:53:22.992628 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:23.492610284 +0000 UTC m=+235.751921435 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.995338 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:22 crc kubenswrapper[5136]: E0320 06:53:22.995604 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:23.495592298 +0000 UTC m=+235.754903449 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:22.999348 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vwn87" Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.007523 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8mwfm" Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.029621 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj"] Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.047042 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr"] Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.077179 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252"] Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.083453 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd"] Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.087609 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg"] Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.104716 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.105664 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-20 06:53:23.605640376 +0000 UTC m=+235.864951547 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.105866 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.106205 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:23.606198034 +0000 UTC m=+235.865509185 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.208483 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6ckd4"] Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.210531 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.211715 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:23.711688349 +0000 UTC m=+235.970999500 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.212393 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.214771 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:23.714758265 +0000 UTC m=+235.974069406 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.221481 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj"] Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.251964 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mbfm4"] Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.254913 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq"] Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.314707 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.315405 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:23.815389446 +0000 UTC m=+236.074700597 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.328173 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mmm42"] Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.333473 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566492-9gbqz"] Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.335192 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6"] Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.336368 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v"] Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.394657 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.408392 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mtj5k"] Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.417833 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:23 
crc kubenswrapper[5136]: E0320 06:53:23.420669 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:23.920655035 +0000 UTC m=+236.179966186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.518551 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.519121 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.019102268 +0000 UTC m=+236.278413419 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.520735 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6ckd4" event={"ID":"ffd3e201-0817-43ed-b8db-d7b526017b69","Type":"ContainerStarted","Data":"f9c435b1361f4aa73cc446911a88f27572607f10d212cff6310986474dcd161e"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.531440 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq" event={"ID":"dd410106-c7b7-4706-9b99-38e3597ee713","Type":"ContainerStarted","Data":"a0ce110363f63f001f95e50210a962eb678228594f1ac7e014264cd9f0806c96"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.544562 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" event={"ID":"42cc8288-8479-40e0-bb0b-4aad0244d57d","Type":"ContainerStarted","Data":"c80732a9eb4856d211d0a8ceacc7f8b714f2228a260994c0779351420bd9d0bc"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.560561 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" event={"ID":"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7","Type":"ContainerStarted","Data":"598f31a9e66b82a40e6fddae66878ad2c2398e7d79d5ccb145dad09f970accbe"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.560627 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" event={"ID":"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7","Type":"ContainerStarted","Data":"d34b3917abfbbe4a783befb983a39bd2afed047371e4e79761022e7cdf941b29"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.564461 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566492-9gbqz" event={"ID":"760c854a-7b9d-4582-9bcc-faf077008e0f","Type":"ContainerStarted","Data":"e58d8ba4116f562c0c29dade18892c723e48babb4ac158f18e7e7f62c2685db2"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.567935 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" event={"ID":"22cf75b6-1525-436a-9999-96f3b2393a03","Type":"ContainerStarted","Data":"6bb5e1527d984418016c71f8416f2f0c7d5ade22bc293ba6a117fc47606500fd"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.570551 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" event={"ID":"a6490da1-20d4-4a12-bf24-50e24f3217dc","Type":"ContainerStarted","Data":"5db55060d20d852f02c153307634b83636e43c71b86bffdfcc266d2ad3398a33"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.571531 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" event={"ID":"882e7562-0811-4a27-9e79-cae539acc27d","Type":"ContainerStarted","Data":"4827fb4e7d836e80237514799f011d13bd3cecb330a4419a8bb1261c16b45603"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.572702 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" event={"ID":"e358e5eb-5d33-4510-a9fd-4dff0323f61a","Type":"ContainerStarted","Data":"e60454c1f81731705b4ba3dc416d89873cc63aa52dfd85ef9dc61c25012e5c21"} Mar 20 06:53:23 crc 
kubenswrapper[5136]: I0320 06:53:23.573501 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" event={"ID":"2261aa95-8cc5-4fe7-9515-a065c381aa5b","Type":"ContainerStarted","Data":"3fe424078c970477ad8ebe6562f6b3b232c3d5300b93606ffc8a11a3e84a1f5f"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.573517 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" event={"ID":"2261aa95-8cc5-4fe7-9515-a065c381aa5b","Type":"ContainerStarted","Data":"654898de6202000e3542c5ff7a29d4dc1b9ece9e166152c2aa19f5d10b9b55eb"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.577240 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" event={"ID":"1a566282-9a27-4172-b5ba-574e0179cfc4","Type":"ContainerStarted","Data":"123a528b42dbbfbb9f33379a6ed4b98d0ff49520ca7c349008a22be8c0d90857"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.577261 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" event={"ID":"1a566282-9a27-4172-b5ba-574e0179cfc4","Type":"ContainerStarted","Data":"df39f87d48bdc4108cfbbd23c050e3dcecc77d5d9cf9eff9e81e1a0106f177c3"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.577693 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.578402 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-glmlt" event={"ID":"11250cf1-2849-42f6-8a9c-85d673b4b097","Type":"ContainerStarted","Data":"3094e3e51ca50cd738f21abc71147878b8b084370b1ac58f9d3d57419cc269e4"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.581036 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console-operator/console-operator-58897d9998-7gjxt" event={"ID":"f0ab617f-fa16-4ff5-ad90-328e952d31fb","Type":"ContainerStarted","Data":"a4448870c6cb456232422ba7a723d64a0b4ec045ffd860fb62efe247fbcbf8a4"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.581058 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-7gjxt" event={"ID":"f0ab617f-fa16-4ff5-ad90-328e952d31fb","Type":"ContainerStarted","Data":"742bb152e88fa346f9d3b1bc753b8452eea78b7a4cca847119ee6686fa16e0e7"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.581577 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.581927 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-87cfr" event={"ID":"62c9b093-fe6a-4484-844b-31bbb4f6b21a","Type":"ContainerStarted","Data":"42b5f734c5b9ca1316f645936059541daf3fe186114025cb4fc14e8310c86195"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.582762 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" event={"ID":"c87c53d2-e35b-43e3-910e-852b635c46b8","Type":"ContainerStarted","Data":"0358dbe9fe3db61891eed24111a66c38993609435b78293e8eaf8af29fdf7324"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.582957 5136 patch_prober.go:28] interesting pod/console-operator-58897d9998-7gjxt container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.582986 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-7gjxt" podUID="f0ab617f-fa16-4ff5-ad90-328e952d31fb" 
containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.584690 5136 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-6s42p container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.584745 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" podUID="1a566282-9a27-4172-b5ba-574e0179cfc4" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.588129 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" event={"ID":"edd610c6-14f6-4da1-83ab-b816dac3ed91","Type":"ContainerStarted","Data":"6e4d56e84d0e4688ac8677a97bc75a219910aeea20b1d12dc228013a436922f3"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.593058 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" event={"ID":"ebaac2a5-0001-4d47-9d55-8ff138364356","Type":"ContainerStarted","Data":"fbcf5c5f625e2ed06dffb55171a7e2b2e24cb675a14f792fee9f0be1f9faeed4"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.594524 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" event={"ID":"af7427ab-0805-477b-b064-f4258cef3ace","Type":"ContainerStarted","Data":"7885db517f939ea607039c0ddd0a81609a16502d05d6704ad32a0b3165db7ecc"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 
06:53:23.601231 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2" event={"ID":"e49af127-1dfc-4213-b763-a4283104f38f","Type":"ContainerStarted","Data":"22bf4e5ebeb37f84bd92b2e2813992ed7bb28491de0ca07513340ef547839e00"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.601269 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2" event={"ID":"e49af127-1dfc-4213-b763-a4283104f38f","Type":"ContainerStarted","Data":"cb5eadcc27168192635505790744ba41275f4c41315da7278d313b5e06c7924f"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.602111 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" event={"ID":"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8","Type":"ContainerStarted","Data":"07da4e107a7ee6b904be95db1fd6b4beceb4d8ed54972900d21d82ae0100b768"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.604409 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" event={"ID":"5f83cf2a-8b13-4536-bda7-b21bea494966","Type":"ContainerStarted","Data":"eca0407830eed44c4f91baab34017521547657d1ce75fb6d0333f68c340719b6"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.604553 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" event={"ID":"5f83cf2a-8b13-4536-bda7-b21bea494966","Type":"ContainerStarted","Data":"2b63c02749b4ff52bdafe3b161226eb31a8f0572774e8b5d1505df5835a2109d"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.606211 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb" 
event={"ID":"3289a6fb-5129-4ab6-b4b1-9d0b2d8af713","Type":"ContainerStarted","Data":"1f8bb7528bcd674ada09b3ecb220cf2f80827c5340131f4615206beea3f0b2b6"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.606945 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" event={"ID":"d0541594-5780-4b00-a3c7-3b132a0cde9b","Type":"ContainerStarted","Data":"84afc305742688cc02836b4a533c51fe1003b758b7eb485230faa6c42e624c3b"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.607725 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" event={"ID":"a437188c-af0a-415d-9b0e-9e5b66f41ea3","Type":"ContainerStarted","Data":"cf6e67e61d17f2b7c25746fa58ae35594ce1fcf236d35ffd8a5a9091f71bc489"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.609492 5136 generic.go:334] "Generic (PLEG): container finished" podID="de5bcbec-966a-4934-b21a-a459ab3eb7bc" containerID="45368ec78c24e1437817b0124ca474de0300b9efd63638fb548ad9b8c674e448" exitCode=0 Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.609539 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" event={"ID":"de5bcbec-966a-4934-b21a-a459ab3eb7bc","Type":"ContainerDied","Data":"45368ec78c24e1437817b0124ca474de0300b9efd63638fb548ad9b8c674e448"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.609555 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" event={"ID":"de5bcbec-966a-4934-b21a-a459ab3eb7bc","Type":"ContainerStarted","Data":"6eb13fc102482510057d7a3e68e222a10a051546686f4d258231548876af1570"} Mar 20 06:53:23 crc kubenswrapper[5136]: W0320 06:53:23.610921 5136 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4583d32_b996_4de0_a7a9_3f13086640a2.slice/crio-1a3c085caf9656e5555545e3c017d027a963769fbf4b120333ad54301bd9c6a0 WatchSource:0}: Error finding container 1a3c085caf9656e5555545e3c017d027a963769fbf4b120333ad54301bd9c6a0: Status 404 returned error can't find the container with id 1a3c085caf9656e5555545e3c017d027a963769fbf4b120333ad54301bd9c6a0 Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.612173 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" event={"ID":"a3ca072d-707e-4c94-9b3a-81eabc72f840","Type":"ContainerStarted","Data":"1f4189f045ae2aca8b168aff9fae0641bbbd4d4ba32863b2e4a93d6d8ce9d1f3"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.612198 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" event={"ID":"a3ca072d-707e-4c94-9b3a-81eabc72f840","Type":"ContainerStarted","Data":"5666e6a7a8c40f224d03d32021f4ab6bbb2a35ed335823806087a5d0d8c0e49c"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.613958 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" event={"ID":"a1dff0e1-4e1b-49cc-bc54-d157138a2d20","Type":"ContainerStarted","Data":"186be6371f7e6de9733e46c8382ac36e1e88865fdf462dab21c0c1e0c3c4b946"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.614593 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" event={"ID":"d10c92de-8478-436b-bdc0-0fe231faf35c","Type":"ContainerStarted","Data":"7c8708c89211de774fe69cbbde740b1d14bcd87bd62504d19fad3906617cc6cf"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.616017 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" 
event={"ID":"8b148c18-da73-4c17-85f7-454eebfe96f8","Type":"ContainerStarted","Data":"8a997786fc0c66a6a40628c921501baeb9d2ee3f045da955946a24f19ccdf924"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.617794 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" event={"ID":"289bd2af-981a-4da9-af4b-77ef6fd7e526","Type":"ContainerStarted","Data":"8ab9396d1b0bd00b43015624038265fccd12c5928575d3620513f24c6d495ec3"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.619039 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" event={"ID":"246c7ce4-1953-4a0c-9fed-cabc26f79f3f","Type":"ContainerStarted","Data":"b3048d5cbd327c31d2c12352fc9831d3d60de45f5c32a9d9232ae18a248453d5"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.619537 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.619883 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.119872344 +0000 UTC m=+236.379183495 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.622466 5136 patch_prober.go:28] interesting pod/downloads-7954f5f757-djxmj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.622493 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-djxmj" podUID="5491b0c6-578a-430a-82db-943e9c7778e5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.622522 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bjqjp" event={"ID":"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448","Type":"ContainerStarted","Data":"e11b1384937ac56c4ccd8423745ee02ffbb40cd7a1ae4ab6f48a3d94535c216a"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.720260 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.720392 5136 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.220372261 +0000 UTC m=+236.479683422 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.721225 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.726707 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.22668978 +0000 UTC m=+236.486000931 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.740172 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz"] Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.746637 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jvq8j"] Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.782059 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wnlnd"] Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.804254 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vwn87"] Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.806657 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv"] Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.823243 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.823484 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.32346018 +0000 UTC m=+236.582771341 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.823978 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.825779 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.325766122 +0000 UTC m=+236.585077273 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.834896 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jvzhk"] Mar 20 06:53:23 crc kubenswrapper[5136]: W0320 06:53:23.846216 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3343084_9f31_46fb_8514_b5391882700a.slice/crio-d14aebe384dee05a2f75d06ca638439bc26f535fd08e2a7e48ab58bf7aefe094 WatchSource:0}: Error finding container d14aebe384dee05a2f75d06ca638439bc26f535fd08e2a7e48ab58bf7aefe094: Status 404 returned error can't find the container with id d14aebe384dee05a2f75d06ca638439bc26f535fd08e2a7e48ab58bf7aefe094 Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.926721 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.926879 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.426861369 +0000 UTC m=+236.686172520 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.927376 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.927663 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.427654684 +0000 UTC m=+236.686965835 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.034507 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:24 crc kubenswrapper[5136]: E0320 06:53:24.034722 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.534697728 +0000 UTC m=+236.794008879 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.035858 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:24 crc kubenswrapper[5136]: E0320 06:53:24.036152 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.536142113 +0000 UTC m=+236.795453264 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.136843 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:24 crc kubenswrapper[5136]: E0320 06:53:24.136982 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.6369527 +0000 UTC m=+236.896263861 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.137095 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:24 crc kubenswrapper[5136]: E0320 06:53:24.137467 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.637456176 +0000 UTC m=+236.896767417 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.242907 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:24 crc kubenswrapper[5136]: E0320 06:53:24.243684 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.743668004 +0000 UTC m=+237.002979155 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.276624 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-7gjxt" podStartSLOduration=167.276601821 podStartE2EDuration="2m47.276601821s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.275213418 +0000 UTC m=+236.534524569" watchObservedRunningTime="2026-03-20 06:53:24.276601821 +0000 UTC m=+236.535912972" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.312825 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-djxmj" podStartSLOduration=167.312791152 podStartE2EDuration="2m47.312791152s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.310675716 +0000 UTC m=+236.569986887" watchObservedRunningTime="2026-03-20 06:53:24.312791152 +0000 UTC m=+236.572102303" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.345463 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: 
\"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:24 crc kubenswrapper[5136]: E0320 06:53:24.345937 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.845797992 +0000 UTC m=+237.105109143 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.412557 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" podStartSLOduration=167.412534206 podStartE2EDuration="2m47.412534206s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.358066359 +0000 UTC m=+236.617377510" watchObservedRunningTime="2026-03-20 06:53:24.412534206 +0000 UTC m=+236.671845357" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.415695 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" podStartSLOduration=168.415686975 podStartE2EDuration="2m48.415686975s" podCreationTimestamp="2026-03-20 06:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 
06:53:24.401629932 +0000 UTC m=+236.660941083" watchObservedRunningTime="2026-03-20 06:53:24.415686975 +0000 UTC m=+236.674998126" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.432516 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2" podStartSLOduration=167.432501705 podStartE2EDuration="2m47.432501705s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.431851605 +0000 UTC m=+236.691162756" watchObservedRunningTime="2026-03-20 06:53:24.432501705 +0000 UTC m=+236.691812856" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.446236 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:24 crc kubenswrapper[5136]: E0320 06:53:24.446645 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.9466316 +0000 UTC m=+237.205942751 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.480971 5136 ???:1] "http: TLS handshake error from 192.168.126.11:59394: no serving certificate available for the kubelet" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.483018 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" podStartSLOduration=168.483002556 podStartE2EDuration="2m48.483002556s" podCreationTimestamp="2026-03-20 06:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.482603535 +0000 UTC m=+236.741914686" watchObservedRunningTime="2026-03-20 06:53:24.483002556 +0000 UTC m=+236.742313707" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.549502 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:24 crc kubenswrapper[5136]: E0320 06:53:24.549828 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-20 06:53:25.049800333 +0000 UTC m=+237.309111484 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.563388 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" podStartSLOduration=168.56337225 podStartE2EDuration="2m48.56337225s" podCreationTimestamp="2026-03-20 06:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.515252683 +0000 UTC m=+236.774563834" watchObservedRunningTime="2026-03-20 06:53:24.56337225 +0000 UTC m=+236.822683401" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.591676 5136 ???:1] "http: TLS handshake error from 192.168.126.11:59396: no serving certificate available for the kubelet" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.635856 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" podStartSLOduration=168.635798443 podStartE2EDuration="2m48.635798443s" podCreationTimestamp="2026-03-20 06:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.634405508 +0000 UTC m=+236.893716659" watchObservedRunningTime="2026-03-20 06:53:24.635798443 +0000 UTC m=+236.895109614" Mar 20 06:53:24 crc kubenswrapper[5136]: 
I0320 06:53:24.674575 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:24 crc kubenswrapper[5136]: E0320 06:53:24.675014 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:25.174998818 +0000 UTC m=+237.434309969 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.683503 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-bjqjp" podStartSLOduration=167.683487956 podStartE2EDuration="2m47.683487956s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.682756702 +0000 UTC m=+236.942067853" watchObservedRunningTime="2026-03-20 06:53:24.683487956 +0000 UTC m=+236.942799117" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.684265 5136 ???:1] "http: TLS handshake error from 192.168.126.11:59404: no serving certificate available for the kubelet" Mar 20 06:53:24 crc kubenswrapper[5136]: 
I0320 06:53:24.722167 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vwn87" event={"ID":"22df33b0-12a4-40ed-b739-85240eb615e7","Type":"ContainerStarted","Data":"6cf3da78a059636f87f1671b6084b4b1433074b3a2936ed60fa3505b1d65ae6e"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.779658 5136 ???:1] "http: TLS handshake error from 192.168.126.11:59420: no serving certificate available for the kubelet" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.780835 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:24 crc kubenswrapper[5136]: E0320 06:53:24.781187 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:25.281173104 +0000 UTC m=+237.540484255 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.789641 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8mwfm" event={"ID":"f179a691-95b5-4d8a-9f4f-48267b8587a7","Type":"ContainerStarted","Data":"0f3c0b6af995be637e4fce9b2a07c264e92da35e1c58785c07511a1e1c88fd71"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.806140 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" event={"ID":"d10c92de-8478-436b-bdc0-0fe231faf35c","Type":"ContainerStarted","Data":"1b0693c3f9f454fae42bc57cb3a4a88e3443ac311557e5494f894e21066aa033"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.813726 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz" event={"ID":"f3343084-9f31-46fb-8514-b5391882700a","Type":"ContainerStarted","Data":"d14aebe384dee05a2f75d06ca638439bc26f535fd08e2a7e48ab58bf7aefe094"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.825006 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" event={"ID":"a437188c-af0a-415d-9b0e-9e5b66f41ea3","Type":"ContainerStarted","Data":"565846fe77d2ea3bc210e26fd8a30a612e89ef72fc426febe9ef6b899e40a7ce"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.826740 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-5444994796-x4wkf" event={"ID":"b4583d32-b996-4de0-a7a9-3f13086640a2","Type":"ContainerStarted","Data":"1a3c085caf9656e5555545e3c017d027a963769fbf4b120333ad54301bd9c6a0"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.834582 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" event={"ID":"ebaac2a5-0001-4d47-9d55-8ff138364356","Type":"ContainerStarted","Data":"0d984c5a7bc11a44a6c7f8d45353c41f4072a48ab7d51af7e13bc1c65e18ab3a"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.842221 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" event={"ID":"af7427ab-0805-477b-b064-f4258cef3ace","Type":"ContainerStarted","Data":"5616158dc7182557225c3e0b26c91144b02da9b106af89bcff6fea6f07227981"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.858765 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" event={"ID":"6ac92b4e-38e5-4858-8b93-41afb63e9cdd","Type":"ContainerStarted","Data":"702c928592059046850fb0c9bb71fb8788e55d41bf051a0e0c5227c4a0538c5b"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.859502 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.860542 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-8mwfm" podStartSLOduration=5.860500795 podStartE2EDuration="5.860500795s" podCreationTimestamp="2026-03-20 06:53:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.812200482 +0000 UTC m=+237.071511633" watchObservedRunningTime="2026-03-20 
06:53:24.860500795 +0000 UTC m=+237.119811946" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.861328 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" podStartSLOduration=167.861319 podStartE2EDuration="2m47.861319s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.858483561 +0000 UTC m=+237.117794712" watchObservedRunningTime="2026-03-20 06:53:24.861319 +0000 UTC m=+237.120630151" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.866880 5136 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-jvzhk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.866920 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" podUID="6ac92b4e-38e5-4858-8b93-41afb63e9cdd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.868173 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" event={"ID":"a6490da1-20d4-4a12-bf24-50e24f3217dc","Type":"ContainerStarted","Data":"40d12e27b1b19ae666e0ac71be2a2c5717c7ddd546f4dcae698bd602abad54b9"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.872150 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" 
event={"ID":"de5bcbec-966a-4934-b21a-a459ab3eb7bc","Type":"ContainerStarted","Data":"38d1a1bce50aeb2082776397275c0bdad339eeac616164dd2e3f1bc765c49965"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.872758 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.891555 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" event={"ID":"42cc8288-8479-40e0-bb0b-4aad0244d57d","Type":"ContainerStarted","Data":"c4e8486ad2e7ad22d118a8367ae962d3d1cd87e42f1f0e39bcfedffc2be4a27c"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.892535 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:24 crc kubenswrapper[5136]: E0320 06:53:24.893649 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:25.393632609 +0000 UTC m=+237.652943760 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.908756 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" podStartSLOduration=167.908733914 podStartE2EDuration="2m47.908733914s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.907023021 +0000 UTC m=+237.166334172" watchObservedRunningTime="2026-03-20 06:53:24.908733914 +0000 UTC m=+237.168045065" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.909614 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-x4wkf" podStartSLOduration=167.909606932 podStartE2EDuration="2m47.909606932s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.879293767 +0000 UTC m=+237.138604928" watchObservedRunningTime="2026-03-20 06:53:24.909606932 +0000 UTC m=+237.168918083" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.910771 5136 ???:1] "http: TLS handshake error from 192.168.126.11:59426: no serving certificate available for the kubelet" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.913849 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-87cfr" 
event={"ID":"62c9b093-fe6a-4484-844b-31bbb4f6b21a","Type":"ContainerStarted","Data":"40651d8394309da2c5382af6bf09911a60224a8b222be416700281c8f9b80d53"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.915575 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" event={"ID":"e358e5eb-5d33-4510-a9fd-4dff0323f61a","Type":"ContainerStarted","Data":"3fc945b86d83faded0b9a2ca420cea83c82c4b443395115773ea8f3c7b43628b"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.949645 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" podStartSLOduration=167.949627844 podStartE2EDuration="2m47.949627844s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.948274061 +0000 UTC m=+237.207585242" watchObservedRunningTime="2026-03-20 06:53:24.949627844 +0000 UTC m=+237.208938995" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.959405 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" event={"ID":"22cf75b6-1525-436a-9999-96f3b2393a03","Type":"ContainerStarted","Data":"8622ea42bb57461814d407a2a8e7f7209b99e81fadf2196cd346b13f9f089973"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.962992 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb" event={"ID":"3289a6fb-5129-4ab6-b4b1-9d0b2d8af713","Type":"ContainerStarted","Data":"25576b641af1731e1c251801ae920fcd50b8835923991de3496e7405af6a72bf"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.969648 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" podStartSLOduration=167.969628534 podStartE2EDuration="2m47.969628534s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.96856189 +0000 UTC m=+237.227873041" watchObservedRunningTime="2026-03-20 06:53:24.969628534 +0000 UTC m=+237.228939685" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.972435 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" event={"ID":"edd610c6-14f6-4da1-83ab-b816dac3ed91","Type":"ContainerStarted","Data":"3b7b5b56fd70fb895f43d3b98f863ec3e958380c72d3b4cb6ab94f82b28cfdc8"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.972919 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.982654 5136 generic.go:334] "Generic (PLEG): container finished" podID="11250cf1-2849-42f6-8a9c-85d673b4b097" containerID="d9321dfa634b6bc2734db2b183b6c78c25a56c8ef93219d4d5e5dc7ecacb1c58" exitCode=0 Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.982954 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-glmlt" event={"ID":"11250cf1-2849-42f6-8a9c-85d673b4b097","Type":"ContainerDied","Data":"d9321dfa634b6bc2734db2b183b6c78c25a56c8ef93219d4d5e5dc7ecacb1c58"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.989647 5136 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-m4btr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" 
start-of-body= Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.989702 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" podUID="edd610c6-14f6-4da1-83ab-b816dac3ed91" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.993980 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.994740 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.996994 5136 ???:1] "http: TLS handshake error from 192.168.126.11:59428: no serving certificate available for the kubelet" Mar 20 06:53:24 crc kubenswrapper[5136]: E0320 06:53:24.998639 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:25.498623598 +0000 UTC m=+237.757934739 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.014867 5136 patch_prober.go:28] interesting pod/router-default-5444994796-x4wkf container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.014922 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x4wkf" podUID="b4583d32-b996-4de0-a7a9-3f13086640a2" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.022825 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" event={"ID":"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8","Type":"ContainerStarted","Data":"f960ca2f1291c6810939c91cb385274efd1428e9971de1fcce80d392e52b2a36"} Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.029545 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6ckd4" event={"ID":"ffd3e201-0817-43ed-b8db-d7b526017b69","Type":"ContainerStarted","Data":"3106adddc41afc7b8db88ae34a9fb8c71e28aca084635561785f766119cb834b"} Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.040876 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" podStartSLOduration=168.040857989 podStartE2EDuration="2m48.040857989s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.015070826 +0000 UTC m=+237.274381977" watchObservedRunningTime="2026-03-20 06:53:25.040857989 +0000 UTC m=+237.300169140" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.041455 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" podStartSLOduration=168.041451338 podStartE2EDuration="2m48.041451338s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.040233409 +0000 UTC m=+237.299544580" watchObservedRunningTime="2026-03-20 06:53:25.041451338 +0000 UTC m=+237.300762489" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.062778 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" podStartSLOduration=168.062752639 podStartE2EDuration="2m48.062752639s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.061530281 +0000 UTC m=+237.320841432" watchObservedRunningTime="2026-03-20 06:53:25.062752639 +0000 UTC m=+237.322063790" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.078002 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" podStartSLOduration=168.077987939 podStartE2EDuration="2m48.077987939s" 
podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.07676551 +0000 UTC m=+237.336076661" watchObservedRunningTime="2026-03-20 06:53:25.077987939 +0000 UTC m=+237.337299080" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.089486 5136 ???:1] "http: TLS handshake error from 192.168.126.11:59440: no serving certificate available for the kubelet" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.094896 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:25 crc kubenswrapper[5136]: E0320 06:53:25.097078 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:25.59706067 +0000 UTC m=+237.856371821 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.098604 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wnlnd" event={"ID":"e31fd981-67e5-461a-b43c-89a38265e7ed","Type":"ContainerStarted","Data":"a7d39f953abeee5ac104f8980375277b414efea4f416a8bff30af37b4d8c51c6"} Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.130952 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" podStartSLOduration=168.130928348 podStartE2EDuration="2m48.130928348s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.125664472 +0000 UTC m=+237.384975633" watchObservedRunningTime="2026-03-20 06:53:25.130928348 +0000 UTC m=+237.390239499" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.142164 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k" event={"ID":"1939ab6e-c688-43a4-bca6-7cc00e950962","Type":"ContainerStarted","Data":"e83b244ef8edca84b91c240039546d07171084e496028cd867d04d340981a8dd"} Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.163929 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" 
event={"ID":"8eccc00e-2821-4e84-9040-6aa1e58daf78","Type":"ContainerStarted","Data":"bc41415d3716cc2fa98d513a3b60344edcee0f515d37ef98cc23fa90327db594"} Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.164784 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.172929 5136 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-xssjv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.172988 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" podUID="8eccc00e-2821-4e84-9040-6aa1e58daf78" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.178368 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb" podStartSLOduration=168.178343442 podStartE2EDuration="2m48.178343442s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.173121228 +0000 UTC m=+237.432432379" watchObservedRunningTime="2026-03-20 06:53:25.178343442 +0000 UTC m=+237.437654603" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.196049 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:25 crc kubenswrapper[5136]: E0320 06:53:25.196833 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:25.696802663 +0000 UTC m=+237.956113814 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.201226 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" event={"ID":"246c7ce4-1953-4a0c-9fed-cabc26f79f3f","Type":"ContainerStarted","Data":"849d112c18adfaccb4466a6f4badc0f57b72985be4ccd37077a0b537003ae902"} Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.220518 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jvq8j" event={"ID":"06133c52-727b-4ded-b835-f0f71093b193","Type":"ContainerStarted","Data":"1ceeb933aba0f0a739131b6cf23737870cd7bd2e3ace7afb1399cc116882334e"} Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.250428 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" 
event={"ID":"8b148c18-da73-4c17-85f7-454eebfe96f8","Type":"ContainerStarted","Data":"e550a3e89ff736fb6666fbb000a205d11130250397d0ca330e687eab246a4564"} Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.250895 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.254289 5136 generic.go:334] "Generic (PLEG): container finished" podID="882e7562-0811-4a27-9e79-cae539acc27d" containerID="7fc677eeb46308adb83e089c297439260e9b3e3a9580290e821aa973ad178f55" exitCode=0 Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.254351 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" event={"ID":"882e7562-0811-4a27-9e79-cae539acc27d","Type":"ContainerDied","Data":"7fc677eeb46308adb83e089c297439260e9b3e3a9580290e821aa973ad178f55"} Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.254846 5136 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-2q7k6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.44:5443/healthz\": dial tcp 10.217.0.44:5443: connect: connection refused" start-of-body= Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.254883 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" podUID="8b148c18-da73-4c17-85f7-454eebfe96f8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.44:5443/healthz\": dial tcp 10.217.0.44:5443: connect: connection refused" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.257900 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" 
event={"ID":"a1dff0e1-4e1b-49cc-bc54-d157138a2d20","Type":"ContainerStarted","Data":"36881e40ed9dbdf6faaa683ead03eebc8f7c1e93b9589f5f2639a702726803ec"} Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.262959 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" podStartSLOduration=169.262932688 podStartE2EDuration="2m49.262932688s" podCreationTimestamp="2026-03-20 06:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.259848551 +0000 UTC m=+237.519159702" watchObservedRunningTime="2026-03-20 06:53:25.262932688 +0000 UTC m=+237.522243839" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.269982 5136 ???:1] "http: TLS handshake error from 192.168.126.11:59442: no serving certificate available for the kubelet" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.274178 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.293963 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" podStartSLOduration=168.293942276 podStartE2EDuration="2m48.293942276s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.293680517 +0000 UTC m=+237.552991668" watchObservedRunningTime="2026-03-20 06:53:25.293942276 +0000 UTC m=+237.553253427" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.297623 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:25 crc kubenswrapper[5136]: E0320 06:53:25.299440 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:25.799418838 +0000 UTC m=+238.058729999 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.323506 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.357730 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6ckd4" podStartSLOduration=168.357699265 podStartE2EDuration="2m48.357699265s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.345149199 +0000 UTC m=+237.604460350" watchObservedRunningTime="2026-03-20 06:53:25.357699265 +0000 UTC m=+237.617010416" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.381254 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" podStartSLOduration=168.381230027 podStartE2EDuration="2m48.381230027s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.37913066 +0000 UTC m=+237.638441801" watchObservedRunningTime="2026-03-20 06:53:25.381230027 +0000 UTC m=+237.640541178" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.406888 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:25 crc kubenswrapper[5136]: E0320 06:53:25.407205 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:25.907189564 +0000 UTC m=+238.166500725 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.434066 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" podStartSLOduration=168.433405471 podStartE2EDuration="2m48.433405471s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.423060485 +0000 UTC m=+237.682371646" watchObservedRunningTime="2026-03-20 06:53:25.433405471 +0000 UTC m=+237.692716622" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.510033 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:25 crc kubenswrapper[5136]: E0320 06:53:25.510505 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:26.01048856 +0000 UTC m=+238.269799711 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.552404 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k" podStartSLOduration=168.552388431 podStartE2EDuration="2m48.552388431s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.546594429 +0000 UTC m=+237.805905580" watchObservedRunningTime="2026-03-20 06:53:25.552388431 +0000 UTC m=+237.811699582" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.611268 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:25 crc kubenswrapper[5136]: E0320 06:53:25.611574 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:26.111562316 +0000 UTC m=+238.370873467 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.616741 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" podStartSLOduration=168.616724078 podStartE2EDuration="2m48.616724078s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.612605869 +0000 UTC m=+237.871917020" watchObservedRunningTime="2026-03-20 06:53:25.616724078 +0000 UTC m=+237.876035229" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.712150 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:25 crc kubenswrapper[5136]: E0320 06:53:25.712608 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:26.21258861 +0000 UTC m=+238.471899761 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.745773 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" podStartSLOduration=168.745758115 podStartE2EDuration="2m48.745758115s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.744690182 +0000 UTC m=+238.004001333" watchObservedRunningTime="2026-03-20 06:53:25.745758115 +0000 UTC m=+238.005069266" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.813898 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:25 crc kubenswrapper[5136]: E0320 06:53:25.814529 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:26.314518393 +0000 UTC m=+238.573829544 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.915216 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:25 crc kubenswrapper[5136]: E0320 06:53:25.915675 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:26.415655321 +0000 UTC m=+238.674966472 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.956077 5136 ???:1] "http: TLS handshake error from 192.168.126.11:59450: no serving certificate available for the kubelet" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.992398 5136 patch_prober.go:28] interesting pod/router-default-5444994796-x4wkf container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.992493 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x4wkf" podUID="b4583d32-b996-4de0-a7a9-3f13086640a2" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.016696 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.017122 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:26.517107837 +0000 UTC m=+238.776418988 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.119143 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.119404 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:26.619386551 +0000 UTC m=+238.878697702 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.119556 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.119963 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:26.619952669 +0000 UTC m=+238.879263820 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.220192 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.220372 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:26.720346893 +0000 UTC m=+238.979658044 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.266727 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6ckd4" event={"ID":"ffd3e201-0817-43ed-b8db-d7b526017b69","Type":"ContainerStarted","Data":"de42b813a9b449b3fd59c8b332d93332799fe0e9b37f2f6e98f5edca7f1abfa3"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.269503 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-x4wkf" event={"ID":"b4583d32-b996-4de0-a7a9-3f13086640a2","Type":"ContainerStarted","Data":"9097fc1d229ba4b794548200c6312a0fadfa3054180af9444d9b6211395ba870"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.271736 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" event={"ID":"d0541594-5780-4b00-a3c7-3b132a0cde9b","Type":"ContainerStarted","Data":"197e6791e991fbc1f12d2a1bfac284641926e87d4218fbe44784fef266f7fa91"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.271766 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" event={"ID":"d0541594-5780-4b00-a3c7-3b132a0cde9b","Type":"ContainerStarted","Data":"0e2f28811520b1756f9957490750f23662ecf6ad5743a91eff89c3cace3618ba"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.273598 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" 
event={"ID":"22cf75b6-1525-436a-9999-96f3b2393a03","Type":"ContainerStarted","Data":"ba85f0916340422f3af0b1d49eaa0156561d698041c34648084eaafc75eb78e9"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.278241 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-glmlt" event={"ID":"11250cf1-2849-42f6-8a9c-85d673b4b097","Type":"ContainerStarted","Data":"69aa73b9c14529e02678faa87a73fe136d358701fc09e03d905281d5657dc3e2"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.280170 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" event={"ID":"d10c92de-8478-436b-bdc0-0fe231faf35c","Type":"ContainerStarted","Data":"097c9e2b09fac404f71076e2d5b34157d5201f023221a76f6839900a1a11ce82"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.284352 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8mwfm" event={"ID":"f179a691-95b5-4d8a-9f4f-48267b8587a7","Type":"ContainerStarted","Data":"49725466cdca9eb8282305588245949042014deb6aed48d138a2c60381277f42"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.286512 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wnlnd" event={"ID":"e31fd981-67e5-461a-b43c-89a38265e7ed","Type":"ContainerStarted","Data":"3ed8cc7f5a2c13811280e34603442cce191f2b4f7a438aa9482226ab0f77cf21"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.286540 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wnlnd" event={"ID":"e31fd981-67e5-461a-b43c-89a38265e7ed","Type":"ContainerStarted","Data":"1eb72cfbaf35a14e31f7b041ff4416a50cb7d67dfae99a9dae429642a39da5d7"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.286646 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-wnlnd" Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 
06:53:26.287863 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" event={"ID":"8eccc00e-2821-4e84-9040-6aa1e58daf78","Type":"ContainerStarted","Data":"3356c91ae440af57cec371047e496cdbb53a694c0305ed1fe1de971c76b6679d"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.288570 5136 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-xssjv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.288671 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" podUID="8eccc00e-2821-4e84-9040-6aa1e58daf78" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.291897 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" event={"ID":"6ac92b4e-38e5-4858-8b93-41afb63e9cdd","Type":"ContainerStarted","Data":"353893ea2de7505f1f5161d17b865c2e6035f8c773b381d1a90958aacc3e8eae"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.292353 5136 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-jvzhk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.292387 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" 
podUID="6ac92b4e-38e5-4858-8b93-41afb63e9cdd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.293471 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" podStartSLOduration=169.293458278 podStartE2EDuration="2m49.293458278s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:26.290131023 +0000 UTC m=+238.549442174" watchObservedRunningTime="2026-03-20 06:53:26.293458278 +0000 UTC m=+238.552769429" Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.293593 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" event={"ID":"289bd2af-981a-4da9-af4b-77ef6fd7e526","Type":"ContainerStarted","Data":"83709a4627913d4131e8b55a35967c82cd8a60e366ebf5644c739dab2726f14a"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.293785 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.295387 5136 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mbfm4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.295431 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" podUID="289bd2af-981a-4da9-af4b-77ef6fd7e526" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.297615 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" event={"ID":"c87c53d2-e35b-43e3-910e-852b635c46b8","Type":"ContainerStarted","Data":"d3b38a9cf0f247ed5baf85944c58830195e8acdcea08686f6dfc0e85d2046aa9"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.309014 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz" event={"ID":"f3343084-9f31-46fb-8514-b5391882700a","Type":"ContainerStarted","Data":"bf4db6f60271831db831751595237f8c2e2b9b91d20c2d33a55580bd7e1a983f"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.309060 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz" event={"ID":"f3343084-9f31-46fb-8514-b5391882700a","Type":"ContainerStarted","Data":"afc3c45a18f0d7a0ed66d1251fe5653fdfdf6a0a4f25ce210fb40e3d9de6086f"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.309681 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz" Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.319613 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vwn87" event={"ID":"22df33b0-12a4-40ed-b739-85240eb615e7","Type":"ContainerStarted","Data":"2e0466445eaab41d677f8725ec8a908d975a4e9a68bb54543470286a58641578"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.322097 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" 
(UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.325014 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:26.824999171 +0000 UTC m=+239.084310322 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.334350 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jvq8j" event={"ID":"06133c52-727b-4ded-b835-f0f71093b193","Type":"ContainerStarted","Data":"3d2eb4167ccecde174236c9b509a7a5356f0051d54cc3bd4b2afa69c2aa73612"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.334393 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jvq8j" event={"ID":"06133c52-727b-4ded-b835-f0f71093b193","Type":"ContainerStarted","Data":"ce09a0e40142a205a97a81d2852860aae8c37bea9efb89833d948a81ae6e385f"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.335626 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" podStartSLOduration=169.335612946 podStartE2EDuration="2m49.335612946s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:26.326331354 +0000 UTC m=+238.585642505" watchObservedRunningTime="2026-03-20 06:53:26.335612946 +0000 UTC m=+238.594924097" Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.336160 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-wnlnd" podStartSLOduration=7.336152253 podStartE2EDuration="7.336152253s" podCreationTimestamp="2026-03-20 06:53:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:26.305353822 +0000 UTC m=+238.564664973" watchObservedRunningTime="2026-03-20 06:53:26.336152253 +0000 UTC m=+238.595463404" Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.342641 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k" event={"ID":"1939ab6e-c688-43a4-bca6-7cc00e950962","Type":"ContainerStarted","Data":"503affd5af773d5a3fd2f8ae26ab7e2e5e115d2eb468df2924f388f2f072a25d"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.357588 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-87cfr" event={"ID":"62c9b093-fe6a-4484-844b-31bbb4f6b21a","Type":"ContainerStarted","Data":"c40a893b7fdd886014f148ea246672b8196fd290fa3b16d9dfb39bd1b0417dce"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.362424 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq" event={"ID":"dd410106-c7b7-4706-9b99-38e3597ee713","Type":"ContainerStarted","Data":"03ce6707867516f8e304a9f71c030446299ebddf31e3aeaaecd43d7f6b52ae1f"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.368381 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" Mar 20 06:53:26 
crc kubenswrapper[5136]: I0320 06:53:26.376148 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.407568 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz" podStartSLOduration=169.407551203 podStartE2EDuration="2m49.407551203s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:26.406242762 +0000 UTC m=+238.665553913" watchObservedRunningTime="2026-03-20 06:53:26.407551203 +0000 UTC m=+238.666862354" Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.408271 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" podStartSLOduration=169.408267896 podStartE2EDuration="2m49.408267896s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:26.375775491 +0000 UTC m=+238.635086672" watchObservedRunningTime="2026-03-20 06:53:26.408267896 +0000 UTC m=+238.667579047" Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.427213 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.427601 5136 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:26.927586065 +0000 UTC m=+239.186897216 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.447832 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-vwn87" podStartSLOduration=7.447799562 podStartE2EDuration="7.447799562s" podCreationTimestamp="2026-03-20 06:53:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:26.427778061 +0000 UTC m=+238.687089212" watchObservedRunningTime="2026-03-20 06:53:26.447799562 +0000 UTC m=+238.707110713" Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.475022 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq" podStartSLOduration=169.475000259 podStartE2EDuration="2m49.475000259s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:26.451181668 +0000 UTC m=+238.710492819" watchObservedRunningTime="2026-03-20 06:53:26.475000259 +0000 UTC m=+238.734311410" Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.512920 5136 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.518494 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-87cfr" podStartSLOduration=169.518478779 podStartE2EDuration="2m49.518478779s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:26.518051876 +0000 UTC m=+238.777363027" watchObservedRunningTime="2026-03-20 06:53:26.518478779 +0000 UTC m=+238.777789930" Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.531457 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.540744 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.0407287 +0000 UTC m=+239.300039941 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.602907 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-jvq8j" podStartSLOduration=169.60289137 podStartE2EDuration="2m49.60289137s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:26.559031408 +0000 UTC m=+238.818342559" watchObservedRunningTime="2026-03-20 06:53:26.60289137 +0000 UTC m=+238.862202521" Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.633985 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.634117 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.134089393 +0000 UTC m=+239.393400544 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.645462 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.645830 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.145803633 +0000 UTC m=+239.405114784 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.747730 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.747927 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.24789686 +0000 UTC m=+239.507208011 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.748190 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.748608 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.248598182 +0000 UTC m=+239.507909333 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.849846 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.850041 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.350013939 +0000 UTC m=+239.609325090 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.850112 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.850433 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.350424422 +0000 UTC m=+239.609735573 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.950986 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.951331 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.451313531 +0000 UTC m=+239.710624682 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.997368 5136 patch_prober.go:28] interesting pod/router-default-5444994796-x4wkf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 06:53:26 crc kubenswrapper[5136]: [-]has-synced failed: reason withheld Mar 20 06:53:26 crc kubenswrapper[5136]: [+]process-running ok Mar 20 06:53:26 crc kubenswrapper[5136]: healthz check failed Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.997428 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x4wkf" podUID="b4583d32-b996-4de0-a7a9-3f13086640a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.050171 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.052684 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:27 crc kubenswrapper[5136]: E0320 
06:53:27.053071 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.553055327 +0000 UTC m=+239.812366468 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.153982 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:27 crc kubenswrapper[5136]: E0320 06:53:27.154359 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.65434051 +0000 UTC m=+239.913651661 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.255388 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:27 crc kubenswrapper[5136]: E0320 06:53:27.255713 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.755696764 +0000 UTC m=+240.015007905 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.282335 5136 ???:1] "http: TLS handshake error from 192.168.126.11:59464: no serving certificate available for the kubelet" Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.356409 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:27 crc kubenswrapper[5136]: E0320 06:53:27.356578 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.856543263 +0000 UTC m=+240.115854424 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.356702 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:27 crc kubenswrapper[5136]: E0320 06:53:27.357118 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.857108921 +0000 UTC m=+240.116420162 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.369673 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" event={"ID":"882e7562-0811-4a27-9e79-cae539acc27d","Type":"ContainerStarted","Data":"b4ca4391ad090959d17b15512acbc469c9bf5ade364d5f25b05995c49fe2a254"} Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.374422 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-glmlt" event={"ID":"11250cf1-2849-42f6-8a9c-85d673b4b097","Type":"ContainerStarted","Data":"b051bc02d93acd983a645fe07f1756f87ee9c08737c671a69f6cc0f73d15e4a8"} Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.374920 5136 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mbfm4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.374956 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" podUID="289bd2af-981a-4da9-af4b-77ef6fd7e526" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.396624 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.406359 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" podStartSLOduration=170.406342032 podStartE2EDuration="2m50.406342032s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:27.401225311 +0000 UTC m=+239.660536462" watchObservedRunningTime="2026-03-20 06:53:27.406342032 +0000 UTC m=+239.665653183" Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.407447 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jvzhk"] Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.423560 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.428701 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-glmlt" podStartSLOduration=171.428690577 podStartE2EDuration="2m51.428690577s" podCreationTimestamp="2026-03-20 06:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:27.426589501 +0000 UTC m=+239.685900652" watchObservedRunningTime="2026-03-20 06:53:27.428690577 +0000 UTC m=+239.688001728" Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.457828 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:27 crc kubenswrapper[5136]: E0320 06:53:27.459082 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.959068204 +0000 UTC m=+240.218379355 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.529950 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr"] Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.562607 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:27 crc kubenswrapper[5136]: E0320 06:53:27.562989 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:28.062978169 +0000 UTC m=+240.322289320 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.663703 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:27 crc kubenswrapper[5136]: E0320 06:53:27.672085 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:28.172059687 +0000 UTC m=+240.431370838 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.766223 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:27 crc kubenswrapper[5136]: E0320 06:53:27.766658 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:28.266640819 +0000 UTC m=+240.525951970 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.866945 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:27 crc kubenswrapper[5136]: E0320 06:53:27.867307 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:28.367290961 +0000 UTC m=+240.626602112 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.968895 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:27 crc kubenswrapper[5136]: E0320 06:53:27.969169 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:28.469159231 +0000 UTC m=+240.728470382 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.995615 5136 patch_prober.go:28] interesting pod/router-default-5444994796-x4wkf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 06:53:27 crc kubenswrapper[5136]: [-]has-synced failed: reason withheld Mar 20 06:53:27 crc kubenswrapper[5136]: [+]process-running ok Mar 20 06:53:27 crc kubenswrapper[5136]: healthz check failed Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.995668 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x4wkf" podUID="b4583d32-b996-4de0-a7a9-3f13086640a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.059847 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.069353 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:28 crc kubenswrapper[5136]: E0320 06:53:28.069629 5136 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:28.569616527 +0000 UTC m=+240.828927678 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.171451 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:28 crc kubenswrapper[5136]: E0320 06:53:28.171926 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:28.671909591 +0000 UTC m=+240.931220742 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.272902 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:28 crc kubenswrapper[5136]: E0320 06:53:28.273352 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:28.773332288 +0000 UTC m=+241.032643439 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.375004 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:28 crc kubenswrapper[5136]: E0320 06:53:28.375926 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:28.87590465 +0000 UTC m=+241.135215881 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.394934 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" event={"ID":"c87c53d2-e35b-43e3-910e-852b635c46b8","Type":"ContainerStarted","Data":"e00eea1e1296704aa22ae062a966e757810f9611d6ea36d0a732f0f106a5ccfb"} Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.394974 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" event={"ID":"c87c53d2-e35b-43e3-910e-852b635c46b8","Type":"ContainerStarted","Data":"85d5e92076ab7de5105e64e5d492ed95d0daf0ce775744ec0b6f965c57817c55"} Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.431030 5136 generic.go:334] "Generic (PLEG): container finished" podID="d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8" containerID="f960ca2f1291c6810939c91cb385274efd1428e9971de1fcce80d392e52b2a36" exitCode=0 Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.434324 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" podUID="edd610c6-14f6-4da1-83ab-b816dac3ed91" containerName="route-controller-manager" containerID="cri-o://3b7b5b56fd70fb895f43d3b98f863ec3e958380c72d3b4cb6ab94f82b28cfdc8" gracePeriod=30 Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.434638 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" 
event={"ID":"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8","Type":"ContainerDied","Data":"f960ca2f1291c6810939c91cb385274efd1428e9971de1fcce80d392e52b2a36"} Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.475915 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:28 crc kubenswrapper[5136]: E0320 06:53:28.476680 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:28.976663916 +0000 UTC m=+241.235975067 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.529587 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gnspw"] Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.579931 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.579947 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:53:28 crc kubenswrapper[5136]: E0320 06:53:28.580214 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:29.080200579 +0000 UTC m=+241.339511730 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.581537 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.592740 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gnspw"] Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.606681 5136 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.659348 5136 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-20T06:53:28.606705824Z","Handler":null,"Name":""} Mar 20 06:53:28 crc 
kubenswrapper[5136]: I0320 06:53:28.672540 5136 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.672588 5136 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.683338 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.698495 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.733702 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hjck6"] Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.734688 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.740951 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.751059 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hjck6"] Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.785130 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.785216 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs57w\" (UniqueName: \"kubernetes.io/projected/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-kube-api-access-bs57w\") pod \"certified-operators-gnspw\" (UID: \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\") " pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.785286 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-catalog-content\") pod \"certified-operators-gnspw\" (UID: \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\") " pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.785329 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-utilities\") pod \"certified-operators-gnspw\" (UID: \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\") " pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.789551 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.789590 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.853694 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.888867 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-utilities\") pod \"certified-operators-gnspw\" (UID: \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\") " pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.888930 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-utilities\") pod \"community-operators-hjck6\" (UID: \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\") " pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.888961 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-catalog-content\") pod \"community-operators-hjck6\" (UID: \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\") " pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.888990 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs57w\" (UniqueName: \"kubernetes.io/projected/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-kube-api-access-bs57w\") pod \"certified-operators-gnspw\" (UID: \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\") " pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.889033 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqscr\" (UniqueName: \"kubernetes.io/projected/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-kube-api-access-mqscr\") pod \"community-operators-hjck6\" (UID: \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\") " pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.889055 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-catalog-content\") pod \"certified-operators-gnspw\" (UID: \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\") " pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:53:28 crc 
kubenswrapper[5136]: I0320 06:53:28.889970 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-catalog-content\") pod \"certified-operators-gnspw\" (UID: \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\") " pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.890210 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-utilities\") pod \"certified-operators-gnspw\" (UID: \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\") " pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.922201 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.941744 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.944522 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs57w\" (UniqueName: \"kubernetes.io/projected/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-kube-api-access-bs57w\") pod \"certified-operators-gnspw\" (UID: \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\") " pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.947825 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5cc6n"] Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.949907 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.954461 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.956867 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5cc6n"] Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.990334 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqscr\" (UniqueName: \"kubernetes.io/projected/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-kube-api-access-mqscr\") pod \"community-operators-hjck6\" (UID: \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\") " pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.990508 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-utilities\") pod \"certified-operators-5cc6n\" (UID: \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\") " pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.990613 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24rh4\" (UniqueName: \"kubernetes.io/projected/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-kube-api-access-24rh4\") pod \"certified-operators-5cc6n\" (UID: \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\") " pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.990718 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-catalog-content\") pod \"certified-operators-5cc6n\" 
(UID: \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\") " pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.990901 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-utilities\") pod \"community-operators-hjck6\" (UID: \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\") " pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.991019 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-catalog-content\") pod \"community-operators-hjck6\" (UID: \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\") " pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.992101 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-catalog-content\") pod \"community-operators-hjck6\" (UID: \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\") " pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.992475 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-utilities\") pod \"community-operators-hjck6\" (UID: \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\") " pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.002525 5136 patch_prober.go:28] interesting pod/router-default-5444994796-x4wkf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 06:53:29 crc 
kubenswrapper[5136]: [-]has-synced failed: reason withheld Mar 20 06:53:29 crc kubenswrapper[5136]: [+]process-running ok Mar 20 06:53:29 crc kubenswrapper[5136]: healthz check failed Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.003667 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x4wkf" podUID="b4583d32-b996-4de0-a7a9-3f13086640a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.045758 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqscr\" (UniqueName: \"kubernetes.io/projected/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-kube-api-access-mqscr\") pod \"community-operators-hjck6\" (UID: \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\") " pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.049473 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.058766 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.091838 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sxkv\" (UniqueName: \"kubernetes.io/projected/edd610c6-14f6-4da1-83ab-b816dac3ed91-kube-api-access-6sxkv\") pod \"edd610c6-14f6-4da1-83ab-b816dac3ed91\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.092199 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd610c6-14f6-4da1-83ab-b816dac3ed91-config\") pod \"edd610c6-14f6-4da1-83ab-b816dac3ed91\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.092298 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edd610c6-14f6-4da1-83ab-b816dac3ed91-serving-cert\") pod \"edd610c6-14f6-4da1-83ab-b816dac3ed91\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.092321 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/edd610c6-14f6-4da1-83ab-b816dac3ed91-client-ca\") pod \"edd610c6-14f6-4da1-83ab-b816dac3ed91\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.092447 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-utilities\") pod \"certified-operators-5cc6n\" (UID: \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\") " pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.092468 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-24rh4\" (UniqueName: \"kubernetes.io/projected/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-kube-api-access-24rh4\") pod \"certified-operators-5cc6n\" (UID: \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\") " pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.092490 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-catalog-content\") pod \"certified-operators-5cc6n\" (UID: \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\") " pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.092991 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-catalog-content\") pod \"certified-operators-5cc6n\" (UID: \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\") " pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.094498 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edd610c6-14f6-4da1-83ab-b816dac3ed91-client-ca" (OuterVolumeSpecName: "client-ca") pod "edd610c6-14f6-4da1-83ab-b816dac3ed91" (UID: "edd610c6-14f6-4da1-83ab-b816dac3ed91"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.094931 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edd610c6-14f6-4da1-83ab-b816dac3ed91-config" (OuterVolumeSpecName: "config") pod "edd610c6-14f6-4da1-83ab-b816dac3ed91" (UID: "edd610c6-14f6-4da1-83ab-b816dac3ed91"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.096145 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edd610c6-14f6-4da1-83ab-b816dac3ed91-kube-api-access-6sxkv" (OuterVolumeSpecName: "kube-api-access-6sxkv") pod "edd610c6-14f6-4da1-83ab-b816dac3ed91" (UID: "edd610c6-14f6-4da1-83ab-b816dac3ed91"). InnerVolumeSpecName "kube-api-access-6sxkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.096412 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-utilities\") pod \"certified-operators-5cc6n\" (UID: \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\") " pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.101350 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd610c6-14f6-4da1-83ab-b816dac3ed91-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "edd610c6-14f6-4da1-83ab-b816dac3ed91" (UID: "edd610c6-14f6-4da1-83ab-b816dac3ed91"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.127586 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24rh4\" (UniqueName: \"kubernetes.io/projected/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-kube-api-access-24rh4\") pod \"certified-operators-5cc6n\" (UID: \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\") " pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.135940 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tk985"] Mar 20 06:53:29 crc kubenswrapper[5136]: E0320 06:53:29.136201 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd610c6-14f6-4da1-83ab-b816dac3ed91" containerName="route-controller-manager" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.136212 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd610c6-14f6-4da1-83ab-b816dac3ed91" containerName="route-controller-manager" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.136309 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="edd610c6-14f6-4da1-83ab-b816dac3ed91" containerName="route-controller-manager" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.137474 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tk985" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.145696 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tk985"] Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.193650 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpmf4\" (UniqueName: \"kubernetes.io/projected/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-kube-api-access-qpmf4\") pod \"community-operators-tk985\" (UID: \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\") " pod="openshift-marketplace/community-operators-tk985" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.193768 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-utilities\") pod \"community-operators-tk985\" (UID: \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\") " pod="openshift-marketplace/community-operators-tk985" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.193805 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-catalog-content\") pod \"community-operators-tk985\" (UID: \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\") " pod="openshift-marketplace/community-operators-tk985" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.208048 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edd610c6-14f6-4da1-83ab-b816dac3ed91-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.208110 5136 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/edd610c6-14f6-4da1-83ab-b816dac3ed91-client-ca\") on node 
\"crc\" DevicePath \"\"" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.208126 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sxkv\" (UniqueName: \"kubernetes.io/projected/edd610c6-14f6-4da1-83ab-b816dac3ed91-kube-api-access-6sxkv\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.208147 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd610c6-14f6-4da1-83ab-b816dac3ed91-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.312432 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-catalog-content\") pod \"community-operators-tk985\" (UID: \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\") " pod="openshift-marketplace/community-operators-tk985" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.312532 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpmf4\" (UniqueName: \"kubernetes.io/projected/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-kube-api-access-qpmf4\") pod \"community-operators-tk985\" (UID: \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\") " pod="openshift-marketplace/community-operators-tk985" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.312590 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-utilities\") pod \"community-operators-tk985\" (UID: \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\") " pod="openshift-marketplace/community-operators-tk985" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.313035 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-catalog-content\") pod \"community-operators-tk985\" (UID: \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\") " pod="openshift-marketplace/community-operators-tk985" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.313066 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-utilities\") pod \"community-operators-tk985\" (UID: \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\") " pod="openshift-marketplace/community-operators-tk985" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.320188 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.332566 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpmf4\" (UniqueName: \"kubernetes.io/projected/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-kube-api-access-qpmf4\") pod \"community-operators-tk985\" (UID: \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\") " pod="openshift-marketplace/community-operators-tk985" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.465386 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tk985" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.495914 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" event={"ID":"c87c53d2-e35b-43e3-910e-852b635c46b8","Type":"ContainerStarted","Data":"7eef52a069f0d052b8d42c0b5b34ef56aa6a9c48d0b6b389711f7c437f53aa39"} Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.500379 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9"] Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.501022 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.540897 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9"] Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.573299 5136 generic.go:334] "Generic (PLEG): container finished" podID="edd610c6-14f6-4da1-83ab-b816dac3ed91" containerID="3b7b5b56fd70fb895f43d3b98f863ec3e958380c72d3b4cb6ab94f82b28cfdc8" exitCode=0 Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.573597 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.581663 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" podStartSLOduration=10.581648212 podStartE2EDuration="10.581648212s" podCreationTimestamp="2026-03-20 06:53:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:29.581202868 +0000 UTC m=+241.840514019" watchObservedRunningTime="2026-03-20 06:53:29.581648212 +0000 UTC m=+241.840959363" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.582029 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" event={"ID":"edd610c6-14f6-4da1-83ab-b816dac3ed91","Type":"ContainerDied","Data":"3b7b5b56fd70fb895f43d3b98f863ec3e958380c72d3b4cb6ab94f82b28cfdc8"} Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.582079 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" event={"ID":"edd610c6-14f6-4da1-83ab-b816dac3ed91","Type":"ContainerDied","Data":"6e4d56e84d0e4688ac8677a97bc75a219910aeea20b1d12dc228013a436922f3"} Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.582282 5136 scope.go:117] "RemoveContainer" containerID="3b7b5b56fd70fb895f43d3b98f863ec3e958380c72d3b4cb6ab94f82b28cfdc8" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.583123 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" podUID="6ac92b4e-38e5-4858-8b93-41afb63e9cdd" containerName="controller-manager" containerID="cri-o://353893ea2de7505f1f5161d17b865c2e6035f8c773b381d1a90958aacc3e8eae" gracePeriod=30 Mar 20 06:53:29 crc kubenswrapper[5136]: 
I0320 06:53:29.626383 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fk4pl"] Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.636000 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/444d6afe-1b85-4b31-92c1-06272dd19195-serving-cert\") pod \"route-controller-manager-859847c56f-k7qx9\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.636051 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/444d6afe-1b85-4b31-92c1-06272dd19195-client-ca\") pod \"route-controller-manager-859847c56f-k7qx9\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.636164 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6xs8\" (UniqueName: \"kubernetes.io/projected/444d6afe-1b85-4b31-92c1-06272dd19195-kube-api-access-z6xs8\") pod \"route-controller-manager-859847c56f-k7qx9\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.636196 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/444d6afe-1b85-4b31-92c1-06272dd19195-config\") pod \"route-controller-manager-859847c56f-k7qx9\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: 
I0320 06:53:29.654513 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gnspw"] Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.672385 5136 scope.go:117] "RemoveContainer" containerID="3b7b5b56fd70fb895f43d3b98f863ec3e958380c72d3b4cb6ab94f82b28cfdc8" Mar 20 06:53:29 crc kubenswrapper[5136]: E0320 06:53:29.675966 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b7b5b56fd70fb895f43d3b98f863ec3e958380c72d3b4cb6ab94f82b28cfdc8\": container with ID starting with 3b7b5b56fd70fb895f43d3b98f863ec3e958380c72d3b4cb6ab94f82b28cfdc8 not found: ID does not exist" containerID="3b7b5b56fd70fb895f43d3b98f863ec3e958380c72d3b4cb6ab94f82b28cfdc8" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.676004 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b7b5b56fd70fb895f43d3b98f863ec3e958380c72d3b4cb6ab94f82b28cfdc8"} err="failed to get container status \"3b7b5b56fd70fb895f43d3b98f863ec3e958380c72d3b4cb6ab94f82b28cfdc8\": rpc error: code = NotFound desc = could not find container \"3b7b5b56fd70fb895f43d3b98f863ec3e958380c72d3b4cb6ab94f82b28cfdc8\": container with ID starting with 3b7b5b56fd70fb895f43d3b98f863ec3e958380c72d3b4cb6ab94f82b28cfdc8 not found: ID does not exist" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.703186 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr"] Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.708877 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr"] Mar 20 06:53:29 crc kubenswrapper[5136]: W0320 06:53:29.710862 5136 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16ee2b48_5dea_48c6_888a_ae52ff44afa4.slice/crio-2150aeb737b2b0e0b3d8cc086d915c665b772b5d543fe941e2ca6a177c62b012 WatchSource:0}: Error finding container 2150aeb737b2b0e0b3d8cc086d915c665b772b5d543fe941e2ca6a177c62b012: Status 404 returned error can't find the container with id 2150aeb737b2b0e0b3d8cc086d915c665b772b5d543fe941e2ca6a177c62b012 Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.711021 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hjck6"] Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.739392 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6xs8\" (UniqueName: \"kubernetes.io/projected/444d6afe-1b85-4b31-92c1-06272dd19195-kube-api-access-z6xs8\") pod \"route-controller-manager-859847c56f-k7qx9\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.739447 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/444d6afe-1b85-4b31-92c1-06272dd19195-config\") pod \"route-controller-manager-859847c56f-k7qx9\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.739471 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/444d6afe-1b85-4b31-92c1-06272dd19195-serving-cert\") pod \"route-controller-manager-859847c56f-k7qx9\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.739492 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/444d6afe-1b85-4b31-92c1-06272dd19195-client-ca\") pod \"route-controller-manager-859847c56f-k7qx9\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.740286 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/444d6afe-1b85-4b31-92c1-06272dd19195-client-ca\") pod \"route-controller-manager-859847c56f-k7qx9\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.741342 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/444d6afe-1b85-4b31-92c1-06272dd19195-config\") pod \"route-controller-manager-859847c56f-k7qx9\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.747288 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/444d6afe-1b85-4b31-92c1-06272dd19195-serving-cert\") pod \"route-controller-manager-859847c56f-k7qx9\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.771265 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6xs8\" (UniqueName: \"kubernetes.io/projected/444d6afe-1b85-4b31-92c1-06272dd19195-kube-api-access-z6xs8\") pod \"route-controller-manager-859847c56f-k7qx9\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " 
pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.858471 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5cc6n"] Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.880595 5136 ???:1] "http: TLS handshake error from 192.168.126.11:48490: no serving certificate available for the kubelet" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.917586 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: W0320 06:53:29.927326 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5f9659e_73fb_4389_8d6e_b739dfa94d4b.slice/crio-e01d0fc1f5e7b135553c58dae30642386c72820cdb0d76abc481f734a7517ae7 WatchSource:0}: Error finding container e01d0fc1f5e7b135553c58dae30642386c72820cdb0d76abc481f734a7517ae7: Status 404 returned error can't find the container with id e01d0fc1f5e7b135553c58dae30642386c72820cdb0d76abc481f734a7517ae7 Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.985550 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tk985"] Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.994838 5136 patch_prober.go:28] interesting pod/router-default-5444994796-x4wkf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 06:53:29 crc kubenswrapper[5136]: [-]has-synced failed: reason withheld Mar 20 06:53:29 crc kubenswrapper[5136]: [+]process-running ok Mar 20 06:53:29 crc kubenswrapper[5136]: healthz check failed Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.994889 5136 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-x4wkf" podUID="b4583d32-b996-4de0-a7a9-3f13086640a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.058939 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.148366 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mdvw\" (UniqueName: \"kubernetes.io/projected/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-kube-api-access-4mdvw\") pod \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\" (UID: \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\") " Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.148636 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-config-volume\") pod \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\" (UID: \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\") " Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.148688 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-secret-volume\") pod \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\" (UID: \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\") " Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.149422 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-config-volume" (OuterVolumeSpecName: "config-volume") pod "d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8" (UID: "d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.153935 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8" (UID: "d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.154078 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-kube-api-access-4mdvw" (OuterVolumeSpecName: "kube-api-access-4mdvw") pod "d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8" (UID: "d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8"). InnerVolumeSpecName "kube-api-access-4mdvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.154339 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.201151 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9"] Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.250394 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-serving-cert\") pod \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.250514 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-config\") pod \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.250546 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gglvk\" (UniqueName: \"kubernetes.io/projected/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-kube-api-access-gglvk\") pod \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.250577 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-proxy-ca-bundles\") pod \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.250602 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-client-ca\") pod 
\"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.250865 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mdvw\" (UniqueName: \"kubernetes.io/projected/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-kube-api-access-4mdvw\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.250877 5136 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.250895 5136 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:30 crc kubenswrapper[5136]: W0320 06:53:30.250970 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod444d6afe_1b85_4b31_92c1_06272dd19195.slice/crio-a5bf747a5723dcdad0ee90a2f6a57718884edcf9a88fe8b389b16d15fd63d2ec WatchSource:0}: Error finding container a5bf747a5723dcdad0ee90a2f6a57718884edcf9a88fe8b389b16d15fd63d2ec: Status 404 returned error can't find the container with id a5bf747a5723dcdad0ee90a2f6a57718884edcf9a88fe8b389b16d15fd63d2ec Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.251243 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6ac92b4e-38e5-4858-8b93-41afb63e9cdd" (UID: "6ac92b4e-38e5-4858-8b93-41afb63e9cdd"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.251359 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-client-ca" (OuterVolumeSpecName: "client-ca") pod "6ac92b4e-38e5-4858-8b93-41afb63e9cdd" (UID: "6ac92b4e-38e5-4858-8b93-41afb63e9cdd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.251400 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-config" (OuterVolumeSpecName: "config") pod "6ac92b4e-38e5-4858-8b93-41afb63e9cdd" (UID: "6ac92b4e-38e5-4858-8b93-41afb63e9cdd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.254728 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-kube-api-access-gglvk" (OuterVolumeSpecName: "kube-api-access-gglvk") pod "6ac92b4e-38e5-4858-8b93-41afb63e9cdd" (UID: "6ac92b4e-38e5-4858-8b93-41afb63e9cdd"). InnerVolumeSpecName "kube-api-access-gglvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.255580 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6ac92b4e-38e5-4858-8b93-41afb63e9cdd" (UID: "6ac92b4e-38e5-4858-8b93-41afb63e9cdd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.352359 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.352686 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gglvk\" (UniqueName: \"kubernetes.io/projected/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-kube-api-access-gglvk\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.352697 5136 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.352707 5136 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.352715 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.406291 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.406981 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edd610c6-14f6-4da1-83ab-b816dac3ed91" path="/var/lib/kubelet/pods/edd610c6-14f6-4da1-83ab-b816dac3ed91/volumes" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.583011 5136 generic.go:334] "Generic (PLEG): container finished" 
podID="6ac92b4e-38e5-4858-8b93-41afb63e9cdd" containerID="353893ea2de7505f1f5161d17b865c2e6035f8c773b381d1a90958aacc3e8eae" exitCode=0 Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.583090 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" event={"ID":"6ac92b4e-38e5-4858-8b93-41afb63e9cdd","Type":"ContainerDied","Data":"353893ea2de7505f1f5161d17b865c2e6035f8c773b381d1a90958aacc3e8eae"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.583127 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" event={"ID":"6ac92b4e-38e5-4858-8b93-41afb63e9cdd","Type":"ContainerDied","Data":"702c928592059046850fb0c9bb71fb8788e55d41bf051a0e0c5227c4a0538c5b"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.583152 5136 scope.go:117] "RemoveContainer" containerID="353893ea2de7505f1f5161d17b865c2e6035f8c773b381d1a90958aacc3e8eae" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.583711 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.585108 5136 generic.go:334] "Generic (PLEG): container finished" podID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" containerID="ecdb1a90af5f961f167e677237df30a562882d3df07533b14a56db3666fc4958" exitCode=0 Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.585177 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gnspw" event={"ID":"8a3a1d9c-1870-4a43-95fb-6d07e5619acb","Type":"ContainerDied","Data":"ecdb1a90af5f961f167e677237df30a562882d3df07533b14a56db3666fc4958"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.585194 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gnspw" event={"ID":"8a3a1d9c-1870-4a43-95fb-6d07e5619acb","Type":"ContainerStarted","Data":"3e121a671baa07140a3d1cad1e8e105a436e8a55fb9911545361353494c2ebed"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.590922 5136 generic.go:334] "Generic (PLEG): container finished" podID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" containerID="eab31ab4f8f6aed5a537d839989db0fed9674585ea48c1c98222ce1535b9e5f5" exitCode=0 Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.591082 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjck6" event={"ID":"899bb83b-4a95-49e5-8e8f-50c309b5d5e1","Type":"ContainerDied","Data":"eab31ab4f8f6aed5a537d839989db0fed9674585ea48c1c98222ce1535b9e5f5"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.591166 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjck6" event={"ID":"899bb83b-4a95-49e5-8e8f-50c309b5d5e1","Type":"ContainerStarted","Data":"a72e48c3682399c05912ec6fcf4bd3347709282c92c6d1cf4cee81749234bee6"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.595357 5136 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" event={"ID":"16ee2b48-5dea-48c6-888a-ae52ff44afa4","Type":"ContainerStarted","Data":"6c63fda8ff6ae62b077f4f27f5dde80e5a28a6eac953ffd0077b516fdb381d9f"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.595407 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" event={"ID":"16ee2b48-5dea-48c6-888a-ae52ff44afa4","Type":"ContainerStarted","Data":"2150aeb737b2b0e0b3d8cc086d915c665b772b5d543fe941e2ca6a177c62b012"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.595434 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.601133 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" event={"ID":"444d6afe-1b85-4b31-92c1-06272dd19195","Type":"ContainerStarted","Data":"f95f9eff44362264182d5f665ca8828c4224a53df30d9a375438216e49570081"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.601170 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" event={"ID":"444d6afe-1b85-4b31-92c1-06272dd19195","Type":"ContainerStarted","Data":"a5bf747a5723dcdad0ee90a2f6a57718884edcf9a88fe8b389b16d15fd63d2ec"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.602790 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.608282 5136 scope.go:117] "RemoveContainer" containerID="353893ea2de7505f1f5161d17b865c2e6035f8c773b381d1a90958aacc3e8eae" Mar 20 06:53:30 crc kubenswrapper[5136]: E0320 06:53:30.608923 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"353893ea2de7505f1f5161d17b865c2e6035f8c773b381d1a90958aacc3e8eae\": container with ID starting with 353893ea2de7505f1f5161d17b865c2e6035f8c773b381d1a90958aacc3e8eae not found: ID does not exist" containerID="353893ea2de7505f1f5161d17b865c2e6035f8c773b381d1a90958aacc3e8eae" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.608975 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"353893ea2de7505f1f5161d17b865c2e6035f8c773b381d1a90958aacc3e8eae"} err="failed to get container status \"353893ea2de7505f1f5161d17b865c2e6035f8c773b381d1a90958aacc3e8eae\": rpc error: code = NotFound desc = could not find container \"353893ea2de7505f1f5161d17b865c2e6035f8c773b381d1a90958aacc3e8eae\": container with ID starting with 353893ea2de7505f1f5161d17b865c2e6035f8c773b381d1a90958aacc3e8eae not found: ID does not exist" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.610820 5136 generic.go:334] "Generic (PLEG): container finished" podID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" containerID="cc9956c9a552b139b5e5774154b7b9855f883623477aaf77275e4bbc83c939df" exitCode=0 Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.610865 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cc6n" event={"ID":"b5f9659e-73fb-4389-8d6e-b739dfa94d4b","Type":"ContainerDied","Data":"cc9956c9a552b139b5e5774154b7b9855f883623477aaf77275e4bbc83c939df"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.610903 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cc6n" event={"ID":"b5f9659e-73fb-4389-8d6e-b739dfa94d4b","Type":"ContainerStarted","Data":"e01d0fc1f5e7b135553c58dae30642386c72820cdb0d76abc481f734a7517ae7"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.613343 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" event={"ID":"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8","Type":"ContainerDied","Data":"07da4e107a7ee6b904be95db1fd6b4beceb4d8ed54972900d21d82ae0100b768"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.613373 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07da4e107a7ee6b904be95db1fd6b4beceb4d8ed54972900d21d82ae0100b768" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.613434 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.617767 5136 generic.go:334] "Generic (PLEG): container finished" podID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" containerID="9ca3fbec27544afa777bf540af670781f133c1d65deee4b039aa27a845d2bac3" exitCode=0 Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.617924 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tk985" event={"ID":"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e","Type":"ContainerDied","Data":"9ca3fbec27544afa777bf540af670781f133c1d65deee4b039aa27a845d2bac3"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.617955 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tk985" event={"ID":"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e","Type":"ContainerStarted","Data":"0fb7591931908d54cba79b40ec6231538415c3ba45eb54605b5ec4b5dd387ac9"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.621586 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jvzhk"] Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.623056 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jvzhk"] Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 
06:53:30.657971 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" podStartSLOduration=173.657925354 podStartE2EDuration="2m53.657925354s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:30.653568826 +0000 UTC m=+242.912879977" watchObservedRunningTime="2026-03-20 06:53:30.657925354 +0000 UTC m=+242.917236505" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.688784 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" podStartSLOduration=2.688766346 podStartE2EDuration="2.688766346s" podCreationTimestamp="2026-03-20 06:53:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:30.670981435 +0000 UTC m=+242.930292586" watchObservedRunningTime="2026-03-20 06:53:30.688766346 +0000 UTC m=+242.948077497" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.720382 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zvjw4"] Mar 20 06:53:30 crc kubenswrapper[5136]: E0320 06:53:30.720600 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8" containerName="collect-profiles" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.720615 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8" containerName="collect-profiles" Mar 20 06:53:30 crc kubenswrapper[5136]: E0320 06:53:30.720631 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac92b4e-38e5-4858-8b93-41afb63e9cdd" containerName="controller-manager" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.720639 5136 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac92b4e-38e5-4858-8b93-41afb63e9cdd" containerName="controller-manager" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.720733 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8" containerName="collect-profiles" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.720747 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ac92b4e-38e5-4858-8b93-41afb63e9cdd" containerName="controller-manager" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.721411 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.724071 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.732100 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvjw4"] Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.758599 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-utilities\") pod \"redhat-marketplace-zvjw4\" (UID: \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\") " pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.758846 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnvxm\" (UniqueName: \"kubernetes.io/projected/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-kube-api-access-jnvxm\") pod \"redhat-marketplace-zvjw4\" (UID: \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\") " pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.758951 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-catalog-content\") pod \"redhat-marketplace-zvjw4\" (UID: \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\") " pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.860099 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnvxm\" (UniqueName: \"kubernetes.io/projected/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-kube-api-access-jnvxm\") pod \"redhat-marketplace-zvjw4\" (UID: \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\") " pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.860178 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-catalog-content\") pod \"redhat-marketplace-zvjw4\" (UID: \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\") " pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.860311 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-utilities\") pod \"redhat-marketplace-zvjw4\" (UID: \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\") " pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.860646 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-catalog-content\") pod \"redhat-marketplace-zvjw4\" (UID: \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\") " pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.860946 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-utilities\") pod \"redhat-marketplace-zvjw4\" (UID: \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\") " pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.879583 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnvxm\" (UniqueName: \"kubernetes.io/projected/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-kube-api-access-jnvxm\") pod \"redhat-marketplace-zvjw4\" (UID: \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\") " pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.949094 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.950280 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.952992 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.954894 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.954975 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.003278 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.005612 5136 patch_prober.go:28] interesting pod/router-default-5444994796-x4wkf container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 06:53:31 crc kubenswrapper[5136]: [-]has-synced failed: reason withheld Mar 20 06:53:31 crc kubenswrapper[5136]: [+]process-running ok Mar 20 06:53:31 crc kubenswrapper[5136]: healthz check failed Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.005663 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x4wkf" podUID="b4583d32-b996-4de0-a7a9-3f13086640a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.038137 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.053976 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.054604 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.059517 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.059645 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.071165 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.103823 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/016ed817-0956-4149-b109-fbdbd9534b4f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"016ed817-0956-4149-b109-fbdbd9534b4f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.103858 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/016ed817-0956-4149-b109-fbdbd9534b4f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"016ed817-0956-4149-b109-fbdbd9534b4f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.124260 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h56wl"] Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.125952 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.129946 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h56wl"] Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.205076 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cf24582-9ee1-4a25-9293-b116d55e6465-utilities\") pod \"redhat-marketplace-h56wl\" (UID: \"1cf24582-9ee1-4a25-9293-b116d55e6465\") " pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.205135 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/016ed817-0956-4149-b109-fbdbd9534b4f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"016ed817-0956-4149-b109-fbdbd9534b4f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.205161 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/016ed817-0956-4149-b109-fbdbd9534b4f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"016ed817-0956-4149-b109-fbdbd9534b4f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.205185 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cf24582-9ee1-4a25-9293-b116d55e6465-catalog-content\") pod \"redhat-marketplace-h56wl\" (UID: \"1cf24582-9ee1-4a25-9293-b116d55e6465\") " pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.205201 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/258f752a-780a-4668-bfd4-6276c1a17472-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"258f752a-780a-4668-bfd4-6276c1a17472\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.205355 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/016ed817-0956-4149-b109-fbdbd9534b4f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"016ed817-0956-4149-b109-fbdbd9534b4f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.205401 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/258f752a-780a-4668-bfd4-6276c1a17472-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"258f752a-780a-4668-bfd4-6276c1a17472\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.205536 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xph6j\" (UniqueName: \"kubernetes.io/projected/1cf24582-9ee1-4a25-9293-b116d55e6465-kube-api-access-xph6j\") pod \"redhat-marketplace-h56wl\" (UID: \"1cf24582-9ee1-4a25-9293-b116d55e6465\") " pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.235583 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/016ed817-0956-4149-b109-fbdbd9534b4f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"016ed817-0956-4149-b109-fbdbd9534b4f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.307170 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/258f752a-780a-4668-bfd4-6276c1a17472-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"258f752a-780a-4668-bfd4-6276c1a17472\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.307269 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xph6j\" (UniqueName: \"kubernetes.io/projected/1cf24582-9ee1-4a25-9293-b116d55e6465-kube-api-access-xph6j\") pod \"redhat-marketplace-h56wl\" (UID: \"1cf24582-9ee1-4a25-9293-b116d55e6465\") " pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.307311 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cf24582-9ee1-4a25-9293-b116d55e6465-utilities\") pod \"redhat-marketplace-h56wl\" (UID: \"1cf24582-9ee1-4a25-9293-b116d55e6465\") " pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.307360 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cf24582-9ee1-4a25-9293-b116d55e6465-catalog-content\") pod \"redhat-marketplace-h56wl\" (UID: \"1cf24582-9ee1-4a25-9293-b116d55e6465\") " pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.307378 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/258f752a-780a-4668-bfd4-6276c1a17472-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"258f752a-780a-4668-bfd4-6276c1a17472\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.307704 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/258f752a-780a-4668-bfd4-6276c1a17472-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"258f752a-780a-4668-bfd4-6276c1a17472\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.308281 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cf24582-9ee1-4a25-9293-b116d55e6465-utilities\") pod \"redhat-marketplace-h56wl\" (UID: \"1cf24582-9ee1-4a25-9293-b116d55e6465\") " pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.312008 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cf24582-9ee1-4a25-9293-b116d55e6465-catalog-content\") pod \"redhat-marketplace-h56wl\" (UID: \"1cf24582-9ee1-4a25-9293-b116d55e6465\") " pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.313871 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.324125 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/258f752a-780a-4668-bfd4-6276c1a17472-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"258f752a-780a-4668-bfd4-6276c1a17472\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.334509 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xph6j\" (UniqueName: \"kubernetes.io/projected/1cf24582-9ee1-4a25-9293-b116d55e6465-kube-api-access-xph6j\") pod \"redhat-marketplace-h56wl\" (UID: \"1cf24582-9ee1-4a25-9293-b116d55e6465\") " pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.403791 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.441248 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.491794 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvjw4"] Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.498724 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7cd44f687c-8mp5h"] Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.499336 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.506416 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.506700 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.506786 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.506802 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.507100 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.507577 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.511844 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.532612 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cd44f687c-8mp5h"] Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.612079 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-config\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " 
pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.612336 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-proxy-ca-bundles\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.612424 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-serving-cert\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.612471 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qpmj\" (UniqueName: \"kubernetes.io/projected/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-kube-api-access-9qpmj\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.612497 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-client-ca\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.714052 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-serving-cert\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.714126 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qpmj\" (UniqueName: \"kubernetes.io/projected/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-kube-api-access-9qpmj\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.714148 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-client-ca\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.714221 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-config\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.715367 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-proxy-ca-bundles\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.720499 
5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-serving-cert\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.720998 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-proxy-ca-bundles\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.721080 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-client-ca\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.721730 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w76x4"] Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.722889 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.727000 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.737352 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-config\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.738410 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qpmj\" (UniqueName: \"kubernetes.io/projected/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-kube-api-access-9qpmj\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.746235 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w76x4"] Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.747802 5136 patch_prober.go:28] interesting pod/downloads-7954f5f757-djxmj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.747891 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-djxmj" podUID="5491b0c6-578a-430a-82db-943e9c7778e5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 20 06:53:31 crc kubenswrapper[5136]: 
I0320 06:53:31.755241 5136 patch_prober.go:28] interesting pod/downloads-7954f5f757-djxmj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.755303 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-djxmj" podUID="5491b0c6-578a-430a-82db-943e9c7778e5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.780948 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.781011 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.783185 5136 patch_prober.go:28] interesting pod/console-f9d7485db-bjqjp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.783241 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-bjqjp" podUID="83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" containerName="console" probeResult="failure" output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.816988 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9jhq\" (UniqueName: 
\"kubernetes.io/projected/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-kube-api-access-t9jhq\") pod \"redhat-operators-w76x4\" (UID: \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\") " pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.817336 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-utilities\") pod \"redhat-operators-w76x4\" (UID: \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\") " pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.817466 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-catalog-content\") pod \"redhat-operators-w76x4\" (UID: \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\") " pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.818144 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.919274 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9jhq\" (UniqueName: \"kubernetes.io/projected/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-kube-api-access-t9jhq\") pod \"redhat-operators-w76x4\" (UID: \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\") " pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.919381 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-utilities\") pod \"redhat-operators-w76x4\" (UID: \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\") " pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.919415 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-catalog-content\") pod \"redhat-operators-w76x4\" (UID: \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\") " pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.920331 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-catalog-content\") pod \"redhat-operators-w76x4\" (UID: \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\") " pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.920611 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-utilities\") pod \"redhat-operators-w76x4\" (UID: \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\") " 
pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.953469 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9jhq\" (UniqueName: \"kubernetes.io/projected/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-kube-api-access-t9jhq\") pod \"redhat-operators-w76x4\" (UID: \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\") " pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.993741 5136 patch_prober.go:28] interesting pod/router-default-5444994796-x4wkf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 06:53:31 crc kubenswrapper[5136]: [-]has-synced failed: reason withheld Mar 20 06:53:31 crc kubenswrapper[5136]: [+]process-running ok Mar 20 06:53:31 crc kubenswrapper[5136]: healthz check failed Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.993792 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x4wkf" podUID="b4583d32-b996-4de0-a7a9-3f13086640a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.056319 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.120784 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ccgmd"] Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.122242 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.131153 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ccgmd"] Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.222790 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c390cc35-103e-4376-a377-789d27e92301-utilities\") pod \"redhat-operators-ccgmd\" (UID: \"c390cc35-103e-4376-a377-789d27e92301\") " pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.222872 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8zcs\" (UniqueName: \"kubernetes.io/projected/c390cc35-103e-4376-a377-789d27e92301-kube-api-access-x8zcs\") pod \"redhat-operators-ccgmd\" (UID: \"c390cc35-103e-4376-a377-789d27e92301\") " pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.222982 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c390cc35-103e-4376-a377-789d27e92301-catalog-content\") pod \"redhat-operators-ccgmd\" (UID: \"c390cc35-103e-4376-a377-789d27e92301\") " pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.255557 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.255607 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.261878 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.265379 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.265406 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.272712 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.324165 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c390cc35-103e-4376-a377-789d27e92301-catalog-content\") pod \"redhat-operators-ccgmd\" (UID: \"c390cc35-103e-4376-a377-789d27e92301\") " pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.324270 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c390cc35-103e-4376-a377-789d27e92301-utilities\") pod \"redhat-operators-ccgmd\" (UID: \"c390cc35-103e-4376-a377-789d27e92301\") " pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.324316 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8zcs\" (UniqueName: \"kubernetes.io/projected/c390cc35-103e-4376-a377-789d27e92301-kube-api-access-x8zcs\") pod \"redhat-operators-ccgmd\" (UID: \"c390cc35-103e-4376-a377-789d27e92301\") " pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.325616 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c390cc35-103e-4376-a377-789d27e92301-catalog-content\") pod \"redhat-operators-ccgmd\" (UID: \"c390cc35-103e-4376-a377-789d27e92301\") " pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.325715 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c390cc35-103e-4376-a377-789d27e92301-utilities\") pod \"redhat-operators-ccgmd\" (UID: \"c390cc35-103e-4376-a377-789d27e92301\") " pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.346968 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8zcs\" (UniqueName: \"kubernetes.io/projected/c390cc35-103e-4376-a377-789d27e92301-kube-api-access-x8zcs\") pod \"redhat-operators-ccgmd\" (UID: \"c390cc35-103e-4376-a377-789d27e92301\") " pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.417853 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ac92b4e-38e5-4858-8b93-41afb63e9cdd" path="/var/lib/kubelet/pods/6ac92b4e-38e5-4858-8b93-41afb63e9cdd/volumes" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.478205 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.649637 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.650875 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.850432 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.991328 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.993676 5136 patch_prober.go:28] interesting pod/router-default-5444994796-x4wkf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 06:53:32 crc kubenswrapper[5136]: [-]has-synced failed: reason withheld Mar 20 06:53:32 crc kubenswrapper[5136]: [+]process-running ok Mar 20 06:53:32 crc kubenswrapper[5136]: healthz check failed Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.993986 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x4wkf" podUID="b4583d32-b996-4de0-a7a9-3f13086640a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 06:53:33 crc kubenswrapper[5136]: I0320 06:53:33.994138 5136 patch_prober.go:28] interesting pod/router-default-5444994796-x4wkf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 
20 06:53:33 crc kubenswrapper[5136]: [-]has-synced failed: reason withheld Mar 20 06:53:33 crc kubenswrapper[5136]: [+]process-running ok Mar 20 06:53:33 crc kubenswrapper[5136]: healthz check failed Mar 20 06:53:33 crc kubenswrapper[5136]: I0320 06:53:33.994481 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x4wkf" podUID="b4583d32-b996-4de0-a7a9-3f13086640a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 06:53:34 crc kubenswrapper[5136]: I0320 06:53:34.993731 5136 patch_prober.go:28] interesting pod/router-default-5444994796-x4wkf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 06:53:34 crc kubenswrapper[5136]: [-]has-synced failed: reason withheld Mar 20 06:53:34 crc kubenswrapper[5136]: [+]process-running ok Mar 20 06:53:34 crc kubenswrapper[5136]: healthz check failed Mar 20 06:53:34 crc kubenswrapper[5136]: I0320 06:53:34.993785 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x4wkf" podUID="b4583d32-b996-4de0-a7a9-3f13086640a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 06:53:35 crc kubenswrapper[5136]: I0320 06:53:35.033445 5136 ???:1] "http: TLS handshake error from 192.168.126.11:48498: no serving certificate available for the kubelet" Mar 20 06:53:35 crc kubenswrapper[5136]: I0320 06:53:35.993312 5136 patch_prober.go:28] interesting pod/router-default-5444994796-x4wkf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 06:53:35 crc kubenswrapper[5136]: [-]has-synced failed: reason withheld Mar 20 06:53:35 crc kubenswrapper[5136]: [+]process-running ok Mar 20 06:53:35 crc 
kubenswrapper[5136]: healthz check failed Mar 20 06:53:35 crc kubenswrapper[5136]: I0320 06:53:35.993384 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x4wkf" podUID="b4583d32-b996-4de0-a7a9-3f13086640a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 06:53:36 crc kubenswrapper[5136]: I0320 06:53:36.228047 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:53:36 crc kubenswrapper[5136]: I0320 06:53:36.229680 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 20 06:53:36 crc kubenswrapper[5136]: I0320 06:53:36.244185 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:53:36 crc kubenswrapper[5136]: I0320 06:53:36.318361 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 20 06:53:36 crc kubenswrapper[5136]: I0320 06:53:36.327315 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:53:36 crc kubenswrapper[5136]: I0320 06:53:36.994241 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:36 crc kubenswrapper[5136]: I0320 06:53:36.996111 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:37 crc kubenswrapper[5136]: I0320 06:53:37.136298 5136 ???:1] "http: TLS handshake error from 192.168.126.11:48504: no serving certificate available for the kubelet" Mar 20 06:53:38 crc kubenswrapper[5136]: I0320 06:53:38.000787 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-wnlnd" Mar 20 06:53:41 crc kubenswrapper[5136]: I0320 06:53:41.753095 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-djxmj" Mar 20 06:53:41 crc kubenswrapper[5136]: I0320 06:53:41.790426 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:41 crc kubenswrapper[5136]: I0320 06:53:41.796614 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:42 crc kubenswrapper[5136]: W0320 06:53:42.434094 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod301e1f09_ed9b_4d2f_ae95_c098e8ae4dd5.slice/crio-20ff1b017eb3949453a5c8e4e818cc934899998a077617b8ba9ac7f5655008d9 WatchSource:0}: Error finding container 20ff1b017eb3949453a5c8e4e818cc934899998a077617b8ba9ac7f5655008d9: Status 404 returned error can't find the container with id 20ff1b017eb3949453a5c8e4e818cc934899998a077617b8ba9ac7f5655008d9 Mar 20 06:53:42 crc kubenswrapper[5136]: I0320 06:53:42.716383 5136 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvjw4" event={"ID":"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5","Type":"ContainerStarted","Data":"20ff1b017eb3949453a5c8e4e818cc934899998a077617b8ba9ac7f5655008d9"} Mar 20 06:53:42 crc kubenswrapper[5136]: I0320 06:53:42.859080 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 20 06:53:45 crc kubenswrapper[5136]: I0320 06:53:45.309153 5136 ???:1] "http: TLS handshake error from 192.168.126.11:45064: no serving certificate available for the kubelet" Mar 20 06:53:45 crc kubenswrapper[5136]: E0320 06:53:45.630564 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Mar 20 06:53:45 crc kubenswrapper[5136]: E0320 06:53:45.631369 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:53:45 crc kubenswrapper[5136]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Mar 20 06:53:45 crc kubenswrapper[5136]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pjzr9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29566492-9gbqz_openshift-infra(760c854a-7b9d-4582-9bcc-faf077008e0f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Mar 20 06:53:45 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:53:45 crc kubenswrapper[5136]: E0320 06:53:45.633126 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29566492-9gbqz" podUID="760c854a-7b9d-4582-9bcc-faf077008e0f" Mar 20 06:53:45 crc kubenswrapper[5136]: E0320 06:53:45.742487 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29566492-9gbqz" podUID="760c854a-7b9d-4582-9bcc-faf077008e0f" Mar 20 06:53:45 crc kubenswrapper[5136]: I0320 06:53:45.822522 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 06:53:45 crc kubenswrapper[5136]: I0320 06:53:45.822583 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 06:53:46 crc kubenswrapper[5136]: I0320 06:53:46.186970 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:53:46 crc kubenswrapper[5136]: I0320 06:53:46.694149 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cd44f687c-8mp5h"] Mar 20 06:53:46 crc kubenswrapper[5136]: I0320 06:53:46.728282 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9"] Mar 20 06:53:46 crc kubenswrapper[5136]: I0320 06:53:46.728503 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" podUID="444d6afe-1b85-4b31-92c1-06272dd19195" containerName="route-controller-manager" containerID="cri-o://f95f9eff44362264182d5f665ca8828c4224a53df30d9a375438216e49570081" gracePeriod=30 Mar 20 06:53:47 crc kubenswrapper[5136]: I0320 06:53:47.749672 5136 generic.go:334] "Generic (PLEG): container finished" podID="444d6afe-1b85-4b31-92c1-06272dd19195" containerID="f95f9eff44362264182d5f665ca8828c4224a53df30d9a375438216e49570081" exitCode=0 Mar 20 06:53:47 crc kubenswrapper[5136]: I0320 06:53:47.749781 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" 
event={"ID":"444d6afe-1b85-4b31-92c1-06272dd19195","Type":"ContainerDied","Data":"f95f9eff44362264182d5f665ca8828c4224a53df30d9a375438216e49570081"} Mar 20 06:53:48 crc kubenswrapper[5136]: I0320 06:53:48.949518 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:49 crc kubenswrapper[5136]: I0320 06:53:49.918647 5136 patch_prober.go:28] interesting pod/route-controller-manager-859847c56f-k7qx9 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.49:8443/healthz\": dial tcp 10.217.0.49:8443: connect: connection refused" start-of-body= Mar 20 06:53:49 crc kubenswrapper[5136]: I0320 06:53:49.919128 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" podUID="444d6afe-1b85-4b31-92c1-06272dd19195" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.49:8443/healthz\": dial tcp 10.217.0.49:8443: connect: connection refused" Mar 20 06:53:51 crc kubenswrapper[5136]: I0320 06:53:51.774583 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"016ed817-0956-4149-b109-fbdbd9534b4f","Type":"ContainerStarted","Data":"63159ed80d5b5e8e1081e09634c709298d8870dc48598c6ff1bd3d48d579726f"} Mar 20 06:53:53 crc kubenswrapper[5136]: E0320 06:53:53.152877 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 20 06:53:53 crc kubenswrapper[5136]: E0320 06:53:53.153030 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qpmf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-tk985_openshift-marketplace(0ecf0c0d-35e3-402c-ac3a-60bb2686de5e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 20 06:53:53 crc kubenswrapper[5136]: E0320 06:53:53.154522 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-tk985" podUID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" Mar 20 06:53:54 crc kubenswrapper[5136]: E0320 06:53:54.606513 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-tk985" podUID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" Mar 20 06:53:54 crc kubenswrapper[5136]: E0320 06:53:54.689728 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 20 06:53:54 crc kubenswrapper[5136]: E0320 06:53:54.690490 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mqscr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-hjck6_openshift-marketplace(899bb83b-4a95-49e5-8e8f-50c309b5d5e1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 20 06:53:54 crc kubenswrapper[5136]: E0320 06:53:54.692776 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-hjck6" podUID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" Mar 20 06:53:54 crc 
kubenswrapper[5136]: E0320 06:53:54.700095 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 20 06:53:54 crc kubenswrapper[5136]: E0320 06:53:54.700243 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bs57w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-gnspw_openshift-marketplace(8a3a1d9c-1870-4a43-95fb-6d07e5619acb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 20 06:53:54 crc kubenswrapper[5136]: E0320 06:53:54.703547 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gnspw" podUID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.719069 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:54 crc kubenswrapper[5136]: E0320 06:53:54.719126 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 20 06:53:54 crc kubenswrapper[5136]: E0320 06:53:54.719307 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24rh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-5cc6n_openshift-marketplace(b5f9659e-73fb-4389-8d6e-b739dfa94d4b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 20 06:53:54 crc kubenswrapper[5136]: E0320 06:53:54.720890 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-5cc6n" podUID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" Mar 20 06:53:54 crc 
kubenswrapper[5136]: I0320 06:53:54.763347 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747"] Mar 20 06:53:54 crc kubenswrapper[5136]: E0320 06:53:54.763903 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444d6afe-1b85-4b31-92c1-06272dd19195" containerName="route-controller-manager" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.763914 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="444d6afe-1b85-4b31-92c1-06272dd19195" containerName="route-controller-manager" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.764038 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="444d6afe-1b85-4b31-92c1-06272dd19195" containerName="route-controller-manager" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.764428 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.770157 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747"] Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.789521 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.789651 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" event={"ID":"444d6afe-1b85-4b31-92c1-06272dd19195","Type":"ContainerDied","Data":"a5bf747a5723dcdad0ee90a2f6a57718884edcf9a88fe8b389b16d15fd63d2ec"} Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.789687 5136 scope.go:117] "RemoveContainer" containerID="f95f9eff44362264182d5f665ca8828c4224a53df30d9a375438216e49570081" Mar 20 06:53:54 crc kubenswrapper[5136]: E0320 06:53:54.795549 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-5cc6n" podUID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" Mar 20 06:53:54 crc kubenswrapper[5136]: E0320 06:53:54.795622 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-gnspw" podUID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.804225 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/444d6afe-1b85-4b31-92c1-06272dd19195-client-ca\") pod \"444d6afe-1b85-4b31-92c1-06272dd19195\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.804307 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/444d6afe-1b85-4b31-92c1-06272dd19195-serving-cert\") pod \"444d6afe-1b85-4b31-92c1-06272dd19195\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.804379 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/444d6afe-1b85-4b31-92c1-06272dd19195-config\") pod \"444d6afe-1b85-4b31-92c1-06272dd19195\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.804445 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6xs8\" (UniqueName: \"kubernetes.io/projected/444d6afe-1b85-4b31-92c1-06272dd19195-kube-api-access-z6xs8\") pod \"444d6afe-1b85-4b31-92c1-06272dd19195\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.805153 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/444d6afe-1b85-4b31-92c1-06272dd19195-client-ca" (OuterVolumeSpecName: "client-ca") pod "444d6afe-1b85-4b31-92c1-06272dd19195" (UID: "444d6afe-1b85-4b31-92c1-06272dd19195"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.806287 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/444d6afe-1b85-4b31-92c1-06272dd19195-config" (OuterVolumeSpecName: "config") pod "444d6afe-1b85-4b31-92c1-06272dd19195" (UID: "444d6afe-1b85-4b31-92c1-06272dd19195"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:53:54 crc kubenswrapper[5136]: E0320 06:53:54.816241 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-hjck6" podUID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.823202 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/444d6afe-1b85-4b31-92c1-06272dd19195-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "444d6afe-1b85-4b31-92c1-06272dd19195" (UID: "444d6afe-1b85-4b31-92c1-06272dd19195"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.827653 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/444d6afe-1b85-4b31-92c1-06272dd19195-kube-api-access-z6xs8" (OuterVolumeSpecName: "kube-api-access-z6xs8") pod "444d6afe-1b85-4b31-92c1-06272dd19195" (UID: "444d6afe-1b85-4b31-92c1-06272dd19195"). InnerVolumeSpecName "kube-api-access-z6xs8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.905585 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sf5v\" (UniqueName: \"kubernetes.io/projected/433e77aa-fe22-43d7-87ed-0a9219b61762-kube-api-access-2sf5v\") pod \"route-controller-manager-6697778f8b-jj747\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.905937 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/433e77aa-fe22-43d7-87ed-0a9219b61762-serving-cert\") pod \"route-controller-manager-6697778f8b-jj747\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.905961 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/433e77aa-fe22-43d7-87ed-0a9219b61762-client-ca\") pod \"route-controller-manager-6697778f8b-jj747\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.905998 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/433e77aa-fe22-43d7-87ed-0a9219b61762-config\") pod \"route-controller-manager-6697778f8b-jj747\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.906079 5136 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/444d6afe-1b85-4b31-92c1-06272dd19195-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.906091 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/444d6afe-1b85-4b31-92c1-06272dd19195-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.906102 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6xs8\" (UniqueName: \"kubernetes.io/projected/444d6afe-1b85-4b31-92c1-06272dd19195-kube-api-access-z6xs8\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.906111 5136 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/444d6afe-1b85-4b31-92c1-06272dd19195-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.006771 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sf5v\" (UniqueName: \"kubernetes.io/projected/433e77aa-fe22-43d7-87ed-0a9219b61762-kube-api-access-2sf5v\") pod \"route-controller-manager-6697778f8b-jj747\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.006989 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/433e77aa-fe22-43d7-87ed-0a9219b61762-serving-cert\") pod \"route-controller-manager-6697778f8b-jj747\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.007019 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/433e77aa-fe22-43d7-87ed-0a9219b61762-client-ca\") pod \"route-controller-manager-6697778f8b-jj747\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.007054 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/433e77aa-fe22-43d7-87ed-0a9219b61762-config\") pod \"route-controller-manager-6697778f8b-jj747\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.008387 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/433e77aa-fe22-43d7-87ed-0a9219b61762-client-ca\") pod \"route-controller-manager-6697778f8b-jj747\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.008603 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/433e77aa-fe22-43d7-87ed-0a9219b61762-config\") pod \"route-controller-manager-6697778f8b-jj747\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.013304 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/433e77aa-fe22-43d7-87ed-0a9219b61762-serving-cert\") pod \"route-controller-manager-6697778f8b-jj747\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 
06:53:55.021337 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sf5v\" (UniqueName: \"kubernetes.io/projected/433e77aa-fe22-43d7-87ed-0a9219b61762-kube-api-access-2sf5v\") pod \"route-controller-manager-6697778f8b-jj747\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.102928 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.109070 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cd44f687c-8mp5h"] Mar 20 06:53:55 crc kubenswrapper[5136]: W0320 06:53:55.123629 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b7a79e9_d592_4c23_bde5_3fa7250e3c2d.slice/crio-b6a47ad0b8d3083dba91f0461003306d2a38b418ea73efce3c166ddfb80853b4 WatchSource:0}: Error finding container b6a47ad0b8d3083dba91f0461003306d2a38b418ea73efce3c166ddfb80853b4: Status 404 returned error can't find the container with id b6a47ad0b8d3083dba91f0461003306d2a38b418ea73efce3c166ddfb80853b4 Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.129156 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9"] Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.133498 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9"] Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.223921 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jz6hg"] Mar 20 06:53:55 crc kubenswrapper[5136]: W0320 06:53:55.227095 5136 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cf24582_9ee1_4a25_9293_b116d55e6465.slice/crio-e97be04ec45ad4ff430feb5809767da522725df9a1e53411e7387d687914904f WatchSource:0}: Error finding container e97be04ec45ad4ff430feb5809767da522725df9a1e53411e7387d687914904f: Status 404 returned error can't find the container with id e97be04ec45ad4ff430feb5809767da522725df9a1e53411e7387d687914904f Mar 20 06:53:55 crc kubenswrapper[5136]: W0320 06:53:55.236876 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5572feb_df7d_4f3a_9b83_3be3de943668.slice/crio-04630d157ff678919ea02ec74f06fe0c149c30be389654ed54b03feabe7962f1 WatchSource:0}: Error finding container 04630d157ff678919ea02ec74f06fe0c149c30be389654ed54b03feabe7962f1: Status 404 returned error can't find the container with id 04630d157ff678919ea02ec74f06fe0c149c30be389654ed54b03feabe7962f1 Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.236956 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h56wl"] Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.245553 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ccgmd"] Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.247828 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w76x4"] Mar 20 06:53:55 crc kubenswrapper[5136]: W0320 06:53:55.249969 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc390cc35_103e_4376_a377_789d27e92301.slice/crio-6945d5a08a47f91726889cbb1087073d5ea5c7ff5da1680cfa09cb30c8ba3897 WatchSource:0}: Error finding container 6945d5a08a47f91726889cbb1087073d5ea5c7ff5da1680cfa09cb30c8ba3897: Status 404 returned error can't find the container with id 
6945d5a08a47f91726889cbb1087073d5ea5c7ff5da1680cfa09cb30c8ba3897 Mar 20 06:53:55 crc kubenswrapper[5136]: W0320 06:53:55.253223 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff9e0ea6_add4_4087_83a6_f8d85588d6f2.slice/crio-b157f4d8fc06233ee1c508206beda933efb32ecebe1735837a9ebedb70d95894 WatchSource:0}: Error finding container b157f4d8fc06233ee1c508206beda933efb32ecebe1735837a9ebedb70d95894: Status 404 returned error can't find the container with id b157f4d8fc06233ee1c508206beda933efb32ecebe1735837a9ebedb70d95894 Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.300170 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747"] Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.317624 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.796274 5136 generic.go:334] "Generic (PLEG): container finished" podID="ff9e0ea6-add4-4087-83a6-f8d85588d6f2" containerID="0de5278ba167eea205a249eb3cbbe0e4bb17b3e9cc3e26b00698b091881ee675" exitCode=0 Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.796490 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w76x4" event={"ID":"ff9e0ea6-add4-4087-83a6-f8d85588d6f2","Type":"ContainerDied","Data":"0de5278ba167eea205a249eb3cbbe0e4bb17b3e9cc3e26b00698b091881ee675"} Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.796668 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w76x4" event={"ID":"ff9e0ea6-add4-4087-83a6-f8d85588d6f2","Type":"ContainerStarted","Data":"b157f4d8fc06233ee1c508206beda933efb32ecebe1735837a9ebedb70d95894"} Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.801629 5136 generic.go:334] "Generic (PLEG): container 
finished" podID="016ed817-0956-4149-b109-fbdbd9534b4f" containerID="30b021412f2df604ec6c72463282d2652e007ea16824c3bea15b401cdb18360f" exitCode=0 Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.801725 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"016ed817-0956-4149-b109-fbdbd9534b4f","Type":"ContainerDied","Data":"30b021412f2df604ec6c72463282d2652e007ea16824c3bea15b401cdb18360f"} Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.805681 5136 generic.go:334] "Generic (PLEG): container finished" podID="c390cc35-103e-4376-a377-789d27e92301" containerID="881e29f20587338b4b26358412017a32346fc442b6e12fb3a66b43ae0eca1b2a" exitCode=0 Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.805896 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccgmd" event={"ID":"c390cc35-103e-4376-a377-789d27e92301","Type":"ContainerDied","Data":"881e29f20587338b4b26358412017a32346fc442b6e12fb3a66b43ae0eca1b2a"} Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.806176 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccgmd" event={"ID":"c390cc35-103e-4376-a377-789d27e92301","Type":"ContainerStarted","Data":"6945d5a08a47f91726889cbb1087073d5ea5c7ff5da1680cfa09cb30c8ba3897"} Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.807904 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" event={"ID":"b5572feb-df7d-4f3a-9b83-3be3de943668","Type":"ContainerStarted","Data":"f71c6d1d6f78cf523da112838eb4ebd8e2e0a0c9573fd81b271bc1fe978065fc"} Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.807950 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" 
event={"ID":"b5572feb-df7d-4f3a-9b83-3be3de943668","Type":"ContainerStarted","Data":"04630d157ff678919ea02ec74f06fe0c149c30be389654ed54b03feabe7962f1"} Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.823737 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" event={"ID":"433e77aa-fe22-43d7-87ed-0a9219b61762","Type":"ContainerStarted","Data":"0fb2ad703d57143cc353d8d80b57c7e9e8375a9fe5053d2f01f256b52277bbbb"} Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.823797 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" event={"ID":"433e77aa-fe22-43d7-87ed-0a9219b61762","Type":"ContainerStarted","Data":"dd09161821b366f7bf1ba043da9d6862a2982103d6804207a3cd3950019ad2ae"} Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.823934 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.841855 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" event={"ID":"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d","Type":"ContainerStarted","Data":"c1cecd4a549e6d8d2a2c0924e19af39751579b76effbcc593b74081f22961c41"} Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.841897 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" event={"ID":"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d","Type":"ContainerStarted","Data":"b6a47ad0b8d3083dba91f0461003306d2a38b418ea73efce3c166ddfb80853b4"} Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.842005 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" 
podUID="8b7a79e9-d592-4c23-bde5-3fa7250e3c2d" containerName="controller-manager" containerID="cri-o://c1cecd4a549e6d8d2a2c0924e19af39751579b76effbcc593b74081f22961c41" gracePeriod=30 Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.842423 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.864157 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"258f752a-780a-4668-bfd4-6276c1a17472","Type":"ContainerStarted","Data":"70d4619eaf0c736c634a306d07c2121b8e39d2ad6f89479fd1e51118b7a87642"} Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.867142 5136 patch_prober.go:28] interesting pod/controller-manager-7cd44f687c-8mp5h container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:56018->10.217.0.54:8443: read: connection reset by peer" start-of-body= Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.867189 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" podUID="8b7a79e9-d592-4c23-bde5-3fa7250e3c2d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:56018->10.217.0.54:8443: read: connection reset by peer" Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.870505 5136 generic.go:334] "Generic (PLEG): container finished" podID="1cf24582-9ee1-4a25-9293-b116d55e6465" containerID="c1dbd216168f7d6e0c9e9a66d4d5c116d77cc9934078fad5ae9908f479d91b50" exitCode=0 Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.870576 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h56wl" 
event={"ID":"1cf24582-9ee1-4a25-9293-b116d55e6465","Type":"ContainerDied","Data":"c1dbd216168f7d6e0c9e9a66d4d5c116d77cc9934078fad5ae9908f479d91b50"} Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.870603 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h56wl" event={"ID":"1cf24582-9ee1-4a25-9293-b116d55e6465","Type":"ContainerStarted","Data":"e97be04ec45ad4ff430feb5809767da522725df9a1e53411e7387d687914904f"} Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.881485 5136 generic.go:334] "Generic (PLEG): container finished" podID="301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" containerID="e61e26151ba5d22e43e23f8384fad74f020a5d4b865b181bec56056cb9b3e52d" exitCode=0 Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.881526 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvjw4" event={"ID":"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5","Type":"ContainerDied","Data":"e61e26151ba5d22e43e23f8384fad74f020a5d4b865b181bec56056cb9b3e52d"} Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.890488 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" podStartSLOduration=9.890470406 podStartE2EDuration="9.890470406s" podCreationTimestamp="2026-03-20 06:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:55.888513301 +0000 UTC m=+268.147824462" watchObservedRunningTime="2026-03-20 06:53:55.890470406 +0000 UTC m=+268.149781557" Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.954275 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" podStartSLOduration=27.954256399 podStartE2EDuration="27.954256399s" podCreationTimestamp="2026-03-20 06:53:28 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:55.94879509 +0000 UTC m=+268.208106241" watchObservedRunningTime="2026-03-20 06:53:55.954256399 +0000 UTC m=+268.213567550" Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.220306 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.292713 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.332964 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-config\") pod \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.333024 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qpmj\" (UniqueName: \"kubernetes.io/projected/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-kube-api-access-9qpmj\") pod \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.333058 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-serving-cert\") pod \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.333110 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-proxy-ca-bundles\") pod \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.333129 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-client-ca\") pod \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.333854 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-client-ca" (OuterVolumeSpecName: "client-ca") pod "8b7a79e9-d592-4c23-bde5-3fa7250e3c2d" (UID: "8b7a79e9-d592-4c23-bde5-3fa7250e3c2d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.334146 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8b7a79e9-d592-4c23-bde5-3fa7250e3c2d" (UID: "8b7a79e9-d592-4c23-bde5-3fa7250e3c2d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.334178 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-config" (OuterVolumeSpecName: "config") pod "8b7a79e9-d592-4c23-bde5-3fa7250e3c2d" (UID: "8b7a79e9-d592-4c23-bde5-3fa7250e3c2d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.338964 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8b7a79e9-d592-4c23-bde5-3fa7250e3c2d" (UID: "8b7a79e9-d592-4c23-bde5-3fa7250e3c2d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.338986 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-kube-api-access-9qpmj" (OuterVolumeSpecName: "kube-api-access-9qpmj") pod "8b7a79e9-d592-4c23-bde5-3fa7250e3c2d" (UID: "8b7a79e9-d592-4c23-bde5-3fa7250e3c2d"). InnerVolumeSpecName "kube-api-access-9qpmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.403685 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="444d6afe-1b85-4b31-92c1-06272dd19195" path="/var/lib/kubelet/pods/444d6afe-1b85-4b31-92c1-06272dd19195/volumes" Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.434929 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.434953 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qpmj\" (UniqueName: \"kubernetes.io/projected/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-kube-api-access-9qpmj\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.434963 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 
20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.434971 5136 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.434979 5136 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.889226 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" event={"ID":"b5572feb-df7d-4f3a-9b83-3be3de943668","Type":"ContainerStarted","Data":"f373c5d179e20f6e719f303f76a58be0be6442e3e03ad793afc072949610b412"} Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.890562 5136 generic.go:334] "Generic (PLEG): container finished" podID="8b7a79e9-d592-4c23-bde5-3fa7250e3c2d" containerID="c1cecd4a549e6d8d2a2c0924e19af39751579b76effbcc593b74081f22961c41" exitCode=0 Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.890619 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.890622 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" event={"ID":"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d","Type":"ContainerDied","Data":"c1cecd4a549e6d8d2a2c0924e19af39751579b76effbcc593b74081f22961c41"} Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.890746 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" event={"ID":"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d","Type":"ContainerDied","Data":"b6a47ad0b8d3083dba91f0461003306d2a38b418ea73efce3c166ddfb80853b4"} Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.890784 5136 scope.go:117] "RemoveContainer" containerID="c1cecd4a549e6d8d2a2c0924e19af39751579b76effbcc593b74081f22961c41" Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.892783 5136 generic.go:334] "Generic (PLEG): container finished" podID="258f752a-780a-4668-bfd4-6276c1a17472" containerID="ed45d1e3327dfd2e5629bd12f68d4aefb2d2d10f0e8fa7d15a26d034a1816553" exitCode=0 Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.893000 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"258f752a-780a-4668-bfd4-6276c1a17472","Type":"ContainerDied","Data":"ed45d1e3327dfd2e5629bd12f68d4aefb2d2d10f0e8fa7d15a26d034a1816553"} Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.911186 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-jz6hg" podStartSLOduration=199.911170464 podStartE2EDuration="3m19.911170464s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 
06:53:56.906103838 +0000 UTC m=+269.165414999" watchObservedRunningTime="2026-03-20 06:53:56.911170464 +0000 UTC m=+269.170481615" Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.918145 5136 scope.go:117] "RemoveContainer" containerID="c1cecd4a549e6d8d2a2c0924e19af39751579b76effbcc593b74081f22961c41" Mar 20 06:53:56 crc kubenswrapper[5136]: E0320 06:53:56.921081 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1cecd4a549e6d8d2a2c0924e19af39751579b76effbcc593b74081f22961c41\": container with ID starting with c1cecd4a549e6d8d2a2c0924e19af39751579b76effbcc593b74081f22961c41 not found: ID does not exist" containerID="c1cecd4a549e6d8d2a2c0924e19af39751579b76effbcc593b74081f22961c41" Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.921110 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1cecd4a549e6d8d2a2c0924e19af39751579b76effbcc593b74081f22961c41"} err="failed to get container status \"c1cecd4a549e6d8d2a2c0924e19af39751579b76effbcc593b74081f22961c41\": rpc error: code = NotFound desc = could not find container \"c1cecd4a549e6d8d2a2c0924e19af39751579b76effbcc593b74081f22961c41\": container with ID starting with c1cecd4a549e6d8d2a2c0924e19af39751579b76effbcc593b74081f22961c41 not found: ID does not exist" Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.925852 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cd44f687c-8mp5h"] Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.927156 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7cd44f687c-8mp5h"] Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.174400 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.255760 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/016ed817-0956-4149-b109-fbdbd9534b4f-kubelet-dir\") pod \"016ed817-0956-4149-b109-fbdbd9534b4f\" (UID: \"016ed817-0956-4149-b109-fbdbd9534b4f\") " Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.255919 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/016ed817-0956-4149-b109-fbdbd9534b4f-kube-api-access\") pod \"016ed817-0956-4149-b109-fbdbd9534b4f\" (UID: \"016ed817-0956-4149-b109-fbdbd9534b4f\") " Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.256026 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/016ed817-0956-4149-b109-fbdbd9534b4f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "016ed817-0956-4149-b109-fbdbd9534b4f" (UID: "016ed817-0956-4149-b109-fbdbd9534b4f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.256247 5136 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/016ed817-0956-4149-b109-fbdbd9534b4f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.261785 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/016ed817-0956-4149-b109-fbdbd9534b4f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "016ed817-0956-4149-b109-fbdbd9534b4f" (UID: "016ed817-0956-4149-b109-fbdbd9534b4f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.357224 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/016ed817-0956-4149-b109-fbdbd9534b4f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.518638 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"] Mar 20 06:53:57 crc kubenswrapper[5136]: E0320 06:53:57.519260 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="016ed817-0956-4149-b109-fbdbd9534b4f" containerName="pruner" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.519275 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="016ed817-0956-4149-b109-fbdbd9534b4f" containerName="pruner" Mar 20 06:53:57 crc kubenswrapper[5136]: E0320 06:53:57.519290 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b7a79e9-d592-4c23-bde5-3fa7250e3c2d" containerName="controller-manager" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.519299 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b7a79e9-d592-4c23-bde5-3fa7250e3c2d" containerName="controller-manager" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.519409 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="016ed817-0956-4149-b109-fbdbd9534b4f" containerName="pruner" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.519428 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b7a79e9-d592-4c23-bde5-3fa7250e3c2d" containerName="controller-manager" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.519870 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.522954 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.523199 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.523148 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.523321 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.523404 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.523630 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.530670 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"] Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.531000 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.661391 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-config\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " 
pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.661454 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba8ace97-a564-4c66-a39d-31d3d1192731-serving-cert\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.661476 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkfns\" (UniqueName: \"kubernetes.io/projected/ba8ace97-a564-4c66-a39d-31d3d1192731-kube-api-access-kkfns\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.661542 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-proxy-ca-bundles\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.661563 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-client-ca\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.763392 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-config\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.763464 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba8ace97-a564-4c66-a39d-31d3d1192731-serving-cert\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.763486 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkfns\" (UniqueName: \"kubernetes.io/projected/ba8ace97-a564-4c66-a39d-31d3d1192731-kube-api-access-kkfns\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.763518 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-proxy-ca-bundles\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.763542 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-client-ca\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 
06:53:57.764476 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-client-ca\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.765499 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-config\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.767078 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-proxy-ca-bundles\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.784671 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba8ace97-a564-4c66-a39d-31d3d1192731-serving-cert\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.787171 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkfns\" (UniqueName: \"kubernetes.io/projected/ba8ace97-a564-4c66-a39d-31d3d1192731-kube-api-access-kkfns\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " 
pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.844396 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.901870 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.903308 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"016ed817-0956-4149-b109-fbdbd9534b4f","Type":"ContainerDied","Data":"63159ed80d5b5e8e1081e09634c709298d8870dc48598c6ff1bd3d48d579726f"} Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.903359 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63159ed80d5b5e8e1081e09634c709298d8870dc48598c6ff1bd3d48d579726f" Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.035564 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"] Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.112524 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.270968 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/258f752a-780a-4668-bfd4-6276c1a17472-kube-api-access\") pod \"258f752a-780a-4668-bfd4-6276c1a17472\" (UID: \"258f752a-780a-4668-bfd4-6276c1a17472\") " Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.271848 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/258f752a-780a-4668-bfd4-6276c1a17472-kubelet-dir\") pod \"258f752a-780a-4668-bfd4-6276c1a17472\" (UID: \"258f752a-780a-4668-bfd4-6276c1a17472\") " Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.271987 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/258f752a-780a-4668-bfd4-6276c1a17472-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "258f752a-780a-4668-bfd4-6276c1a17472" (UID: "258f752a-780a-4668-bfd4-6276c1a17472"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.274003 5136 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/258f752a-780a-4668-bfd4-6276c1a17472-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.277850 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/258f752a-780a-4668-bfd4-6276c1a17472-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "258f752a-780a-4668-bfd4-6276c1a17472" (UID: "258f752a-780a-4668-bfd4-6276c1a17472"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.374726 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/258f752a-780a-4668-bfd4-6276c1a17472-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.405119 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b7a79e9-d592-4c23-bde5-3fa7250e3c2d" path="/var/lib/kubelet/pods/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d/volumes" Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.924540 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.924568 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"258f752a-780a-4668-bfd4-6276c1a17472","Type":"ContainerDied","Data":"70d4619eaf0c736c634a306d07c2121b8e39d2ad6f89479fd1e51118b7a87642"} Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.924604 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70d4619eaf0c736c634a306d07c2121b8e39d2ad6f89479fd1e51118b7a87642" Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.933790 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566492-9gbqz" event={"ID":"760c854a-7b9d-4582-9bcc-faf077008e0f","Type":"ContainerStarted","Data":"bcfaf55d80db554feaeb3774c52e47f9f050ba6262ba6f06d4b21a8da6ad81d5"} Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.944414 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" event={"ID":"ba8ace97-a564-4c66-a39d-31d3d1192731","Type":"ContainerStarted","Data":"34c101d578be5c8e4dd690b32bec0f18aff05b8af94343c38b5c583125107f43"} Mar 
20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.944466 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" event={"ID":"ba8ace97-a564-4c66-a39d-31d3d1192731","Type":"ContainerStarted","Data":"6e7cd78c8651d266a5f2ac19a028eab78f1472c39aa6a54381d2ce52eedcf623"} Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.944898 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.956402 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.969918 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566492-9gbqz" podStartSLOduration=83.94129221 podStartE2EDuration="1m58.96990361s" podCreationTimestamp="2026-03-20 06:52:00 +0000 UTC" firstStartedPulling="2026-03-20 06:53:23.394620894 +0000 UTC m=+235.653932045" lastFinishedPulling="2026-03-20 06:53:58.423232294 +0000 UTC m=+270.682543445" observedRunningTime="2026-03-20 06:53:58.965627299 +0000 UTC m=+271.224938450" watchObservedRunningTime="2026-03-20 06:53:58.96990361 +0000 UTC m=+271.229214761" Mar 20 06:53:59 crc kubenswrapper[5136]: I0320 06:53:59.000266 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" podStartSLOduration=13.000247155 podStartE2EDuration="13.000247155s" podCreationTimestamp="2026-03-20 06:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:58.993968179 +0000 UTC m=+271.253279330" watchObservedRunningTime="2026-03-20 06:53:59.000247155 +0000 UTC m=+271.259558306" Mar 20 06:53:59 crc 
kubenswrapper[5136]: I0320 06:53:59.281144 5136 csr.go:261] certificate signing request csr-f4bwt is approved, waiting to be issued Mar 20 06:53:59 crc kubenswrapper[5136]: I0320 06:53:59.287762 5136 csr.go:257] certificate signing request csr-f4bwt is issued Mar 20 06:53:59 crc kubenswrapper[5136]: I0320 06:53:59.955102 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566492-9gbqz" event={"ID":"760c854a-7b9d-4582-9bcc-faf077008e0f","Type":"ContainerDied","Data":"bcfaf55d80db554feaeb3774c52e47f9f050ba6262ba6f06d4b21a8da6ad81d5"} Mar 20 06:53:59 crc kubenswrapper[5136]: I0320 06:53:59.955059 5136 generic.go:334] "Generic (PLEG): container finished" podID="760c854a-7b9d-4582-9bcc-faf077008e0f" containerID="bcfaf55d80db554feaeb3774c52e47f9f050ba6262ba6f06d4b21a8da6ad81d5" exitCode=0 Mar 20 06:54:00 crc kubenswrapper[5136]: I0320 06:54:00.131632 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566494-v7mrb"] Mar 20 06:54:00 crc kubenswrapper[5136]: E0320 06:54:00.131924 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="258f752a-780a-4668-bfd4-6276c1a17472" containerName="pruner" Mar 20 06:54:00 crc kubenswrapper[5136]: I0320 06:54:00.132062 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="258f752a-780a-4668-bfd4-6276c1a17472" containerName="pruner" Mar 20 06:54:00 crc kubenswrapper[5136]: I0320 06:54:00.132257 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="258f752a-780a-4668-bfd4-6276c1a17472" containerName="pruner" Mar 20 06:54:00 crc kubenswrapper[5136]: I0320 06:54:00.135090 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566494-v7mrb" Mar 20 06:54:00 crc kubenswrapper[5136]: I0320 06:54:00.135395 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566494-v7mrb"] Mar 20 06:54:00 crc kubenswrapper[5136]: I0320 06:54:00.139854 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 06:54:00 crc kubenswrapper[5136]: I0320 06:54:00.205263 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9d9f\" (UniqueName: \"kubernetes.io/projected/793ba114-16f6-4ad2-bc47-daee6a819a00-kube-api-access-c9d9f\") pod \"auto-csr-approver-29566494-v7mrb\" (UID: \"793ba114-16f6-4ad2-bc47-daee6a819a00\") " pod="openshift-infra/auto-csr-approver-29566494-v7mrb" Mar 20 06:54:00 crc kubenswrapper[5136]: I0320 06:54:00.289649 5136 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-23 20:07:52.702478936 +0000 UTC Mar 20 06:54:00 crc kubenswrapper[5136]: I0320 06:54:00.289689 5136 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6685h13m52.412793306s for next certificate rotation Mar 20 06:54:00 crc kubenswrapper[5136]: I0320 06:54:00.307211 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9d9f\" (UniqueName: \"kubernetes.io/projected/793ba114-16f6-4ad2-bc47-daee6a819a00-kube-api-access-c9d9f\") pod \"auto-csr-approver-29566494-v7mrb\" (UID: \"793ba114-16f6-4ad2-bc47-daee6a819a00\") " pod="openshift-infra/auto-csr-approver-29566494-v7mrb" Mar 20 06:54:00 crc kubenswrapper[5136]: I0320 06:54:00.337871 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9d9f\" (UniqueName: \"kubernetes.io/projected/793ba114-16f6-4ad2-bc47-daee6a819a00-kube-api-access-c9d9f\") pod 
\"auto-csr-approver-29566494-v7mrb\" (UID: \"793ba114-16f6-4ad2-bc47-daee6a819a00\") " pod="openshift-infra/auto-csr-approver-29566494-v7mrb" Mar 20 06:54:00 crc kubenswrapper[5136]: I0320 06:54:00.433590 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6s42p"] Mar 20 06:54:00 crc kubenswrapper[5136]: I0320 06:54:00.463205 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566494-v7mrb" Mar 20 06:54:01 crc kubenswrapper[5136]: I0320 06:54:01.461075 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566492-9gbqz" Mar 20 06:54:01 crc kubenswrapper[5136]: I0320 06:54:01.626566 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjzr9\" (UniqueName: \"kubernetes.io/projected/760c854a-7b9d-4582-9bcc-faf077008e0f-kube-api-access-pjzr9\") pod \"760c854a-7b9d-4582-9bcc-faf077008e0f\" (UID: \"760c854a-7b9d-4582-9bcc-faf077008e0f\") " Mar 20 06:54:01 crc kubenswrapper[5136]: I0320 06:54:01.634079 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/760c854a-7b9d-4582-9bcc-faf077008e0f-kube-api-access-pjzr9" (OuterVolumeSpecName: "kube-api-access-pjzr9") pod "760c854a-7b9d-4582-9bcc-faf077008e0f" (UID: "760c854a-7b9d-4582-9bcc-faf077008e0f"). InnerVolumeSpecName "kube-api-access-pjzr9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:54:01 crc kubenswrapper[5136]: I0320 06:54:01.729821 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjzr9\" (UniqueName: \"kubernetes.io/projected/760c854a-7b9d-4582-9bcc-faf077008e0f-kube-api-access-pjzr9\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:01 crc kubenswrapper[5136]: I0320 06:54:01.822787 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566494-v7mrb"] Mar 20 06:54:01 crc kubenswrapper[5136]: I0320 06:54:01.965286 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566494-v7mrb" event={"ID":"793ba114-16f6-4ad2-bc47-daee6a819a00","Type":"ContainerStarted","Data":"3ffc4f2f1913877d9ab0804682ad7ba8df5735f227b587b972cc9770d1884537"} Mar 20 06:54:01 crc kubenswrapper[5136]: I0320 06:54:01.967080 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566492-9gbqz" event={"ID":"760c854a-7b9d-4582-9bcc-faf077008e0f","Type":"ContainerDied","Data":"e58d8ba4116f562c0c29dade18892c723e48babb4ac158f18e7e7f62c2685db2"} Mar 20 06:54:01 crc kubenswrapper[5136]: I0320 06:54:01.967147 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e58d8ba4116f562c0c29dade18892c723e48babb4ac158f18e7e7f62c2685db2" Mar 20 06:54:01 crc kubenswrapper[5136]: I0320 06:54:01.967213 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566492-9gbqz" Mar 20 06:54:01 crc kubenswrapper[5136]: I0320 06:54:01.977889 5136 generic.go:334] "Generic (PLEG): container finished" podID="1cf24582-9ee1-4a25-9293-b116d55e6465" containerID="4f78c42a7893dc826244650e8886950ba1351aaa09f97ac65c5ac0fadfd1ccde" exitCode=0 Mar 20 06:54:01 crc kubenswrapper[5136]: I0320 06:54:01.977979 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h56wl" event={"ID":"1cf24582-9ee1-4a25-9293-b116d55e6465","Type":"ContainerDied","Data":"4f78c42a7893dc826244650e8886950ba1351aaa09f97ac65c5ac0fadfd1ccde"} Mar 20 06:54:01 crc kubenswrapper[5136]: I0320 06:54:01.984086 5136 generic.go:334] "Generic (PLEG): container finished" podID="301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" containerID="dcf8682364592d40230750711520ec56e59f7e993372f1598195878cd5fec968" exitCode=0 Mar 20 06:54:01 crc kubenswrapper[5136]: I0320 06:54:01.984110 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvjw4" event={"ID":"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5","Type":"ContainerDied","Data":"dcf8682364592d40230750711520ec56e59f7e993372f1598195878cd5fec968"} Mar 20 06:54:02 crc kubenswrapper[5136]: I0320 06:54:02.912254 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz" Mar 20 06:54:06 crc kubenswrapper[5136]: I0320 06:54:06.010077 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccgmd" event={"ID":"c390cc35-103e-4376-a377-789d27e92301","Type":"ContainerStarted","Data":"f46b7933668706885051cf2336daf80ae5d49b441259e450e3a49c583c6aa84a"} Mar 20 06:54:06 crc kubenswrapper[5136]: I0320 06:54:06.013086 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h56wl" 
event={"ID":"1cf24582-9ee1-4a25-9293-b116d55e6465","Type":"ContainerStarted","Data":"0b9ca4c6a87080715f33e8b6d643b96e63f200c35c3176f72c095f9b86b6f8cc"} Mar 20 06:54:06 crc kubenswrapper[5136]: I0320 06:54:06.015374 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvjw4" event={"ID":"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5","Type":"ContainerStarted","Data":"52f516ab12d99512d73f419dcdb0b2dce29d3e61b9dafa84863f5c5a03f74ee5"} Mar 20 06:54:06 crc kubenswrapper[5136]: I0320 06:54:06.016792 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w76x4" event={"ID":"ff9e0ea6-add4-4087-83a6-f8d85588d6f2","Type":"ContainerStarted","Data":"aab3768f3e229745fc4445cf28136207dc998f4d0eb15cc907b6c70eec03b784"} Mar 20 06:54:06 crc kubenswrapper[5136]: I0320 06:54:06.017989 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566494-v7mrb" event={"ID":"793ba114-16f6-4ad2-bc47-daee6a819a00","Type":"ContainerStarted","Data":"27efcdb323bdc3f56a43d4c3d542dd5fa1e9865775e2f1ad9ed6c1b623aed3e5"} Mar 20 06:54:06 crc kubenswrapper[5136]: I0320 06:54:06.047041 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h56wl" podStartSLOduration=25.175396614 podStartE2EDuration="35.047023574s" podCreationTimestamp="2026-03-20 06:53:31 +0000 UTC" firstStartedPulling="2026-03-20 06:53:55.87594977 +0000 UTC m=+268.135260921" lastFinishedPulling="2026-03-20 06:54:05.74757673 +0000 UTC m=+278.006887881" observedRunningTime="2026-03-20 06:54:06.043093085 +0000 UTC m=+278.302404256" watchObservedRunningTime="2026-03-20 06:54:06.047023574 +0000 UTC m=+278.306334725" Mar 20 06:54:06 crc kubenswrapper[5136]: I0320 06:54:06.080299 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zvjw4" podStartSLOduration=26.257443486 
podStartE2EDuration="36.080282755s" podCreationTimestamp="2026-03-20 06:53:30 +0000 UTC" firstStartedPulling="2026-03-20 06:53:55.884577843 +0000 UTC m=+268.143888994" lastFinishedPulling="2026-03-20 06:54:05.707417112 +0000 UTC m=+277.966728263" observedRunningTime="2026-03-20 06:54:06.079865122 +0000 UTC m=+278.339176263" watchObservedRunningTime="2026-03-20 06:54:06.080282755 +0000 UTC m=+278.339593906" Mar 20 06:54:06 crc kubenswrapper[5136]: I0320 06:54:06.091301 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566494-v7mrb" podStartSLOduration=2.1970830120000002 podStartE2EDuration="6.091286817s" podCreationTimestamp="2026-03-20 06:54:00 +0000 UTC" firstStartedPulling="2026-03-20 06:54:01.836257622 +0000 UTC m=+274.095568774" lastFinishedPulling="2026-03-20 06:54:05.730461428 +0000 UTC m=+277.989772579" observedRunningTime="2026-03-20 06:54:06.08984211 +0000 UTC m=+278.349153261" watchObservedRunningTime="2026-03-20 06:54:06.091286817 +0000 UTC m=+278.350597968" Mar 20 06:54:06 crc kubenswrapper[5136]: I0320 06:54:06.718043 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"] Mar 20 06:54:06 crc kubenswrapper[5136]: I0320 06:54:06.718634 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" podUID="ba8ace97-a564-4c66-a39d-31d3d1192731" containerName="controller-manager" containerID="cri-o://34c101d578be5c8e4dd690b32bec0f18aff05b8af94343c38b5c583125107f43" gracePeriod=30 Mar 20 06:54:06 crc kubenswrapper[5136]: I0320 06:54:06.808045 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747"] Mar 20 06:54:06 crc kubenswrapper[5136]: I0320 06:54:06.808294 5136 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" podUID="433e77aa-fe22-43d7-87ed-0a9219b61762" containerName="route-controller-manager" containerID="cri-o://0fb2ad703d57143cc353d8d80b57c7e9e8375a9fe5053d2f01f256b52277bbbb" gracePeriod=30 Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.025179 5136 generic.go:334] "Generic (PLEG): container finished" podID="ff9e0ea6-add4-4087-83a6-f8d85588d6f2" containerID="aab3768f3e229745fc4445cf28136207dc998f4d0eb15cc907b6c70eec03b784" exitCode=0 Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.025243 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w76x4" event={"ID":"ff9e0ea6-add4-4087-83a6-f8d85588d6f2","Type":"ContainerDied","Data":"aab3768f3e229745fc4445cf28136207dc998f4d0eb15cc907b6c70eec03b784"} Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.027429 5136 generic.go:334] "Generic (PLEG): container finished" podID="793ba114-16f6-4ad2-bc47-daee6a819a00" containerID="27efcdb323bdc3f56a43d4c3d542dd5fa1e9865775e2f1ad9ed6c1b623aed3e5" exitCode=0 Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.027483 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566494-v7mrb" event={"ID":"793ba114-16f6-4ad2-bc47-daee6a819a00","Type":"ContainerDied","Data":"27efcdb323bdc3f56a43d4c3d542dd5fa1e9865775e2f1ad9ed6c1b623aed3e5"} Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.029066 5136 generic.go:334] "Generic (PLEG): container finished" podID="433e77aa-fe22-43d7-87ed-0a9219b61762" containerID="0fb2ad703d57143cc353d8d80b57c7e9e8375a9fe5053d2f01f256b52277bbbb" exitCode=0 Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.029132 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" 
event={"ID":"433e77aa-fe22-43d7-87ed-0a9219b61762","Type":"ContainerDied","Data":"0fb2ad703d57143cc353d8d80b57c7e9e8375a9fe5053d2f01f256b52277bbbb"} Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.030696 5136 generic.go:334] "Generic (PLEG): container finished" podID="c390cc35-103e-4376-a377-789d27e92301" containerID="f46b7933668706885051cf2336daf80ae5d49b441259e450e3a49c583c6aa84a" exitCode=0 Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.030736 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccgmd" event={"ID":"c390cc35-103e-4376-a377-789d27e92301","Type":"ContainerDied","Data":"f46b7933668706885051cf2336daf80ae5d49b441259e450e3a49c583c6aa84a"} Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.034013 5136 generic.go:334] "Generic (PLEG): container finished" podID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" containerID="0a3a9f250b57d97a9f680ce6f5e879f1dd2181314e08b745f63267fbb724073c" exitCode=0 Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.034082 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tk985" event={"ID":"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e","Type":"ContainerDied","Data":"0a3a9f250b57d97a9f680ce6f5e879f1dd2181314e08b745f63267fbb724073c"} Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.036437 5136 generic.go:334] "Generic (PLEG): container finished" podID="ba8ace97-a564-4c66-a39d-31d3d1192731" containerID="34c101d578be5c8e4dd690b32bec0f18aff05b8af94343c38b5c583125107f43" exitCode=0 Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.036500 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" event={"ID":"ba8ace97-a564-4c66-a39d-31d3d1192731","Type":"ContainerDied","Data":"34c101d578be5c8e4dd690b32bec0f18aff05b8af94343c38b5c583125107f43"} Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.540215 5136 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 20 06:54:09 crc kubenswrapper[5136]: E0320 06:54:07.540604 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="760c854a-7b9d-4582-9bcc-faf077008e0f" containerName="oc" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.540615 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="760c854a-7b9d-4582-9bcc-faf077008e0f" containerName="oc" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.540730 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="760c854a-7b9d-4582-9bcc-faf077008e0f" containerName="oc" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.541078 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.543642 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.543840 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.551545 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.710084 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49b77392-c7b9-4b0b-9320-6e4fcce120d1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"49b77392-c7b9-4b0b-9320-6e4fcce120d1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.710121 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/49b77392-c7b9-4b0b-9320-6e4fcce120d1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"49b77392-c7b9-4b0b-9320-6e4fcce120d1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.811151 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49b77392-c7b9-4b0b-9320-6e4fcce120d1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"49b77392-c7b9-4b0b-9320-6e4fcce120d1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.811191 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49b77392-c7b9-4b0b-9320-6e4fcce120d1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"49b77392-c7b9-4b0b-9320-6e4fcce120d1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.811356 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49b77392-c7b9-4b0b-9320-6e4fcce120d1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"49b77392-c7b9-4b0b-9320-6e4fcce120d1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.834362 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49b77392-c7b9-4b0b-9320-6e4fcce120d1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"49b77392-c7b9-4b0b-9320-6e4fcce120d1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.857772 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.915349 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.946444 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f"] Mar 20 06:54:09 crc kubenswrapper[5136]: E0320 06:54:07.947049 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8ace97-a564-4c66-a39d-31d3d1192731" containerName="controller-manager" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.947087 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8ace97-a564-4c66-a39d-31d3d1192731" containerName="controller-manager" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.947217 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba8ace97-a564-4c66-a39d-31d3d1192731" containerName="controller-manager" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.947653 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.956000 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f"] Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.013191 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkfns\" (UniqueName: \"kubernetes.io/projected/ba8ace97-a564-4c66-a39d-31d3d1192731-kube-api-access-kkfns\") pod \"ba8ace97-a564-4c66-a39d-31d3d1192731\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.013244 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba8ace97-a564-4c66-a39d-31d3d1192731-serving-cert\") pod \"ba8ace97-a564-4c66-a39d-31d3d1192731\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.013330 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-client-ca\") pod \"ba8ace97-a564-4c66-a39d-31d3d1192731\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.013348 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-config\") pod \"ba8ace97-a564-4c66-a39d-31d3d1192731\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.013378 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-proxy-ca-bundles\") pod \"ba8ace97-a564-4c66-a39d-31d3d1192731\" (UID: 
\"ba8ace97-a564-4c66-a39d-31d3d1192731\") " Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.014163 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ba8ace97-a564-4c66-a39d-31d3d1192731" (UID: "ba8ace97-a564-4c66-a39d-31d3d1192731"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.014219 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-client-ca" (OuterVolumeSpecName: "client-ca") pod "ba8ace97-a564-4c66-a39d-31d3d1192731" (UID: "ba8ace97-a564-4c66-a39d-31d3d1192731"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.014277 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-config" (OuterVolumeSpecName: "config") pod "ba8ace97-a564-4c66-a39d-31d3d1192731" (UID: "ba8ace97-a564-4c66-a39d-31d3d1192731"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.014552 5136 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.014602 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.014616 5136 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.016954 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba8ace97-a564-4c66-a39d-31d3d1192731-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ba8ace97-a564-4c66-a39d-31d3d1192731" (UID: "ba8ace97-a564-4c66-a39d-31d3d1192731"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.017619 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba8ace97-a564-4c66-a39d-31d3d1192731-kube-api-access-kkfns" (OuterVolumeSpecName: "kube-api-access-kkfns") pod "ba8ace97-a564-4c66-a39d-31d3d1192731" (UID: "ba8ace97-a564-4c66-a39d-31d3d1192731"). InnerVolumeSpecName "kube-api-access-kkfns". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.043642 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.044164 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" event={"ID":"ba8ace97-a564-4c66-a39d-31d3d1192731","Type":"ContainerDied","Data":"6e7cd78c8651d266a5f2ac19a028eab78f1472c39aa6a54381d2ce52eedcf623"} Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.046052 5136 scope.go:117] "RemoveContainer" containerID="34c101d578be5c8e4dd690b32bec0f18aff05b8af94343c38b5c583125107f43" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.097805 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"] Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.100956 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"] Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.116185 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-client-ca\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.116230 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cada42b5-7a5d-47d5-84e7-6c5612db1132-serving-cert\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.116314 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-proxy-ca-bundles\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.116602 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-config\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.116647 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqzq5\" (UniqueName: \"kubernetes.io/projected/cada42b5-7a5d-47d5-84e7-6c5612db1132-kube-api-access-kqzq5\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.116787 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkfns\" (UniqueName: \"kubernetes.io/projected/ba8ace97-a564-4c66-a39d-31d3d1192731-kube-api-access-kkfns\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.116803 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba8ace97-a564-4c66-a39d-31d3d1192731-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.218402 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-config\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.218442 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqzq5\" (UniqueName: \"kubernetes.io/projected/cada42b5-7a5d-47d5-84e7-6c5612db1132-kube-api-access-kqzq5\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.218495 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-client-ca\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.218514 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cada42b5-7a5d-47d5-84e7-6c5612db1132-serving-cert\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.218580 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-proxy-ca-bundles\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.219639 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-client-ca\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.219757 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-config\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.219799 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-proxy-ca-bundles\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.223603 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cada42b5-7a5d-47d5-84e7-6c5612db1132-serving-cert\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.235549 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqzq5\" (UniqueName: \"kubernetes.io/projected/cada42b5-7a5d-47d5-84e7-6c5612db1132-kube-api-access-kqzq5\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 
06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.260313 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.405877 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba8ace97-a564-4c66-a39d-31d3d1192731" path="/var/lib/kubelet/pods/ba8ace97-a564-4c66-a39d-31d3d1192731/volumes" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.844968 5136 patch_prober.go:28] interesting pod/controller-manager-5fcfc77697-j6jd6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.845261 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" podUID="ba8ace97-a564-4c66-a39d-31d3d1192731" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.055124 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566494-v7mrb" event={"ID":"793ba114-16f6-4ad2-bc47-daee6a819a00","Type":"ContainerDied","Data":"3ffc4f2f1913877d9ab0804682ad7ba8df5735f227b587b972cc9770d1884537"} Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.055650 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ffc4f2f1913877d9ab0804682ad7ba8df5735f227b587b972cc9770d1884537" Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.057480 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-w76x4" event={"ID":"ff9e0ea6-add4-4087-83a6-f8d85588d6f2","Type":"ContainerStarted","Data":"96a902822e546e09f7a3de1657b0dcec6395a8a029066a0684766a1f0066cca8"} Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.059237 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566494-v7mrb" Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.059564 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" event={"ID":"433e77aa-fe22-43d7-87ed-0a9219b61762","Type":"ContainerDied","Data":"dd09161821b366f7bf1ba043da9d6862a2982103d6804207a3cd3950019ad2ae"} Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.059593 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd09161821b366f7bf1ba043da9d6862a2982103d6804207a3cd3950019ad2ae" Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.079921 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w76x4" podStartSLOduration=25.844895019 podStartE2EDuration="39.07990667s" podCreationTimestamp="2026-03-20 06:53:31 +0000 UTC" firstStartedPulling="2026-03-20 06:53:55.799104838 +0000 UTC m=+268.058415989" lastFinishedPulling="2026-03-20 06:54:09.034116489 +0000 UTC m=+281.293427640" observedRunningTime="2026-03-20 06:54:10.077004924 +0000 UTC m=+282.336316095" watchObservedRunningTime="2026-03-20 06:54:10.07990667 +0000 UTC m=+282.339217821" Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.083108 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.194150 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f"] Mar 20 06:54:10 crc kubenswrapper[5136]: W0320 06:54:10.205799 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcada42b5_7a5d_47d5_84e7_6c5612db1132.slice/crio-3d3b4204912003a659f2d723f3606e1a424a5ae09bdc6ee3049fa1cfb5307c75 WatchSource:0}: Error finding container 3d3b4204912003a659f2d723f3606e1a424a5ae09bdc6ee3049fa1cfb5307c75: Status 404 returned error can't find the container with id 3d3b4204912003a659f2d723f3606e1a424a5ae09bdc6ee3049fa1cfb5307c75 Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.209020 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 20 06:54:10 crc kubenswrapper[5136]: W0320 06:54:10.216245 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod49b77392_c7b9_4b0b_9320_6e4fcce120d1.slice/crio-a4cf8bd8dbafa2ac753c600b98a02c2b1d9b112327de74ddfed7d07642b73e0a WatchSource:0}: Error finding container a4cf8bd8dbafa2ac753c600b98a02c2b1d9b112327de74ddfed7d07642b73e0a: Status 404 returned error can't find the container with id a4cf8bd8dbafa2ac753c600b98a02c2b1d9b112327de74ddfed7d07642b73e0a Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.252410 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/433e77aa-fe22-43d7-87ed-0a9219b61762-client-ca\") pod \"433e77aa-fe22-43d7-87ed-0a9219b61762\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.252457 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/433e77aa-fe22-43d7-87ed-0a9219b61762-config\") pod \"433e77aa-fe22-43d7-87ed-0a9219b61762\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.252512 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/433e77aa-fe22-43d7-87ed-0a9219b61762-serving-cert\") pod \"433e77aa-fe22-43d7-87ed-0a9219b61762\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.252537 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sf5v\" (UniqueName: \"kubernetes.io/projected/433e77aa-fe22-43d7-87ed-0a9219b61762-kube-api-access-2sf5v\") pod \"433e77aa-fe22-43d7-87ed-0a9219b61762\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.252584 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9d9f\" (UniqueName: \"kubernetes.io/projected/793ba114-16f6-4ad2-bc47-daee6a819a00-kube-api-access-c9d9f\") pod \"793ba114-16f6-4ad2-bc47-daee6a819a00\" (UID: \"793ba114-16f6-4ad2-bc47-daee6a819a00\") " Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.253857 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/433e77aa-fe22-43d7-87ed-0a9219b61762-config" (OuterVolumeSpecName: "config") pod "433e77aa-fe22-43d7-87ed-0a9219b61762" (UID: "433e77aa-fe22-43d7-87ed-0a9219b61762"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.254122 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/433e77aa-fe22-43d7-87ed-0a9219b61762-client-ca" (OuterVolumeSpecName: "client-ca") pod "433e77aa-fe22-43d7-87ed-0a9219b61762" (UID: "433e77aa-fe22-43d7-87ed-0a9219b61762"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.258250 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/433e77aa-fe22-43d7-87ed-0a9219b61762-kube-api-access-2sf5v" (OuterVolumeSpecName: "kube-api-access-2sf5v") pod "433e77aa-fe22-43d7-87ed-0a9219b61762" (UID: "433e77aa-fe22-43d7-87ed-0a9219b61762"). InnerVolumeSpecName "kube-api-access-2sf5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.258522 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/793ba114-16f6-4ad2-bc47-daee6a819a00-kube-api-access-c9d9f" (OuterVolumeSpecName: "kube-api-access-c9d9f") pod "793ba114-16f6-4ad2-bc47-daee6a819a00" (UID: "793ba114-16f6-4ad2-bc47-daee6a819a00"). InnerVolumeSpecName "kube-api-access-c9d9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.259374 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/433e77aa-fe22-43d7-87ed-0a9219b61762-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "433e77aa-fe22-43d7-87ed-0a9219b61762" (UID: "433e77aa-fe22-43d7-87ed-0a9219b61762"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.354013 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/433e77aa-fe22-43d7-87ed-0a9219b61762-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.354267 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sf5v\" (UniqueName: \"kubernetes.io/projected/433e77aa-fe22-43d7-87ed-0a9219b61762-kube-api-access-2sf5v\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.354421 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9d9f\" (UniqueName: \"kubernetes.io/projected/793ba114-16f6-4ad2-bc47-daee6a819a00-kube-api-access-c9d9f\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.354550 5136 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/433e77aa-fe22-43d7-87ed-0a9219b61762-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.354670 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/433e77aa-fe22-43d7-87ed-0a9219b61762-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.038392 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.038455 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.064962 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" 
event={"ID":"cada42b5-7a5d-47d5-84e7-6c5612db1132","Type":"ContainerStarted","Data":"3d3b4204912003a659f2d723f3606e1a424a5ae09bdc6ee3049fa1cfb5307c75"} Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.067295 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccgmd" event={"ID":"c390cc35-103e-4376-a377-789d27e92301","Type":"ContainerStarted","Data":"2d4dba2ff1549dc6f8aaa33906c467cbc9d1690e7ac80aacdc997a2f55cc4025"} Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.068290 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566494-v7mrb" Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.068333 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.068280 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"49b77392-c7b9-4b0b-9320-6e4fcce120d1","Type":"ContainerStarted","Data":"a4cf8bd8dbafa2ac753c600b98a02c2b1d9b112327de74ddfed7d07642b73e0a"} Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.089203 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ccgmd" podStartSLOduration=25.138706290000002 podStartE2EDuration="39.089187864s" podCreationTimestamp="2026-03-20 06:53:32 +0000 UTC" firstStartedPulling="2026-03-20 06:53:55.808213397 +0000 UTC m=+268.067524558" lastFinishedPulling="2026-03-20 06:54:09.758694991 +0000 UTC m=+282.018006132" observedRunningTime="2026-03-20 06:54:11.086604819 +0000 UTC m=+283.345915970" watchObservedRunningTime="2026-03-20 06:54:11.089187864 +0000 UTC m=+283.348499015" Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.101495 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747"] Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.107877 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747"] Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.332885 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.400272 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.441461 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.441503 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.496570 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.056430 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.056698 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.074532 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"49b77392-c7b9-4b0b-9320-6e4fcce120d1","Type":"ContainerStarted","Data":"f9289243a285961376cc9a2e6f5a73de90f457c5a2f13c754d6c53831bd02f8b"} Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.076454 5136 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tk985" event={"ID":"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e","Type":"ContainerStarted","Data":"633b7423fd9f273ea3d2aedb0b1fdd8feecc5709213ba31185f081ae3bd70b0c"} Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.077582 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" event={"ID":"cada42b5-7a5d-47d5-84e7-6c5612db1132","Type":"ContainerStarted","Data":"489bdb392a45bad16e501fa0a0fb2098e4fc47999ee7306501643766f25cd617"} Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.078243 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.079561 5136 patch_prober.go:28] interesting pod/controller-manager-5bcf5ffddd-j6v5f container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.079605 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" podUID="cada42b5-7a5d-47d5-84e7-6c5612db1132" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.094899 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=5.09487848 podStartE2EDuration="5.09487848s" podCreationTimestamp="2026-03-20 06:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-20 06:54:12.09092698 +0000 UTC m=+284.350238141" watchObservedRunningTime="2026-03-20 06:54:12.09487848 +0000 UTC m=+284.354189631" Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.124332 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.133166 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" podStartSLOduration=6.133153715 podStartE2EDuration="6.133153715s" podCreationTimestamp="2026-03-20 06:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:54:12.132429631 +0000 UTC m=+284.391740792" watchObservedRunningTime="2026-03-20 06:54:12.133153715 +0000 UTC m=+284.392464866" Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.135188 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tk985" podStartSLOduration=2.098464156 podStartE2EDuration="43.135182502s" podCreationTimestamp="2026-03-20 06:53:29 +0000 UTC" firstStartedPulling="2026-03-20 06:53:30.62037298 +0000 UTC m=+242.879684131" lastFinishedPulling="2026-03-20 06:54:11.657091326 +0000 UTC m=+283.916402477" observedRunningTime="2026-03-20 06:54:12.115376452 +0000 UTC m=+284.374687603" watchObservedRunningTime="2026-03-20 06:54:12.135182502 +0000 UTC m=+284.394493653" Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.403278 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="433e77aa-fe22-43d7-87ed-0a9219b61762" path="/var/lib/kubelet/pods/433e77aa-fe22-43d7-87ed-0a9219b61762/volumes" Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.478463 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.478622 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:54:13 crc kubenswrapper[5136]: I0320 06:54:13.083578 5136 generic.go:334] "Generic (PLEG): container finished" podID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" containerID="61c7e442e15bdf8dabbf86eee0b74ef3d0bd7331f7f4cb4df8d344122f5d037c" exitCode=0 Mar 20 06:54:13 crc kubenswrapper[5136]: I0320 06:54:13.083628 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjck6" event={"ID":"899bb83b-4a95-49e5-8e8f-50c309b5d5e1","Type":"ContainerDied","Data":"61c7e442e15bdf8dabbf86eee0b74ef3d0bd7331f7f4cb4df8d344122f5d037c"} Mar 20 06:54:13 crc kubenswrapper[5136]: I0320 06:54:13.089056 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cc6n" event={"ID":"b5f9659e-73fb-4389-8d6e-b739dfa94d4b","Type":"ContainerDied","Data":"6781c7e9d7eb5e0eb62174fa991e0172bb9a392f4a1873a270da9a5f802faa91"} Mar 20 06:54:13 crc kubenswrapper[5136]: I0320 06:54:13.089151 5136 generic.go:334] "Generic (PLEG): container finished" podID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" containerID="6781c7e9d7eb5e0eb62174fa991e0172bb9a392f4a1873a270da9a5f802faa91" exitCode=0 Mar 20 06:54:13 crc kubenswrapper[5136]: I0320 06:54:13.092953 5136 generic.go:334] "Generic (PLEG): container finished" podID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" containerID="53bde684225bb133cb5df58d7ee1ac04c8c2617d28ec0ad948d7f53954e6eb71" exitCode=0 Mar 20 06:54:13 crc kubenswrapper[5136]: I0320 06:54:13.092989 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gnspw" event={"ID":"8a3a1d9c-1870-4a43-95fb-6d07e5619acb","Type":"ContainerDied","Data":"53bde684225bb133cb5df58d7ee1ac04c8c2617d28ec0ad948d7f53954e6eb71"} Mar 20 06:54:13 crc 
kubenswrapper[5136]: I0320 06:54:13.094450 5136 generic.go:334] "Generic (PLEG): container finished" podID="49b77392-c7b9-4b0b-9320-6e4fcce120d1" containerID="f9289243a285961376cc9a2e6f5a73de90f457c5a2f13c754d6c53831bd02f8b" exitCode=0 Mar 20 06:54:13 crc kubenswrapper[5136]: I0320 06:54:13.094686 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"49b77392-c7b9-4b0b-9320-6e4fcce120d1","Type":"ContainerDied","Data":"f9289243a285961376cc9a2e6f5a73de90f457c5a2f13c754d6c53831bd02f8b"} Mar 20 06:54:13 crc kubenswrapper[5136]: I0320 06:54:13.098664 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w76x4" podUID="ff9e0ea6-add4-4087-83a6-f8d85588d6f2" containerName="registry-server" probeResult="failure" output=< Mar 20 06:54:13 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 06:54:13 crc kubenswrapper[5136]: > Mar 20 06:54:13 crc kubenswrapper[5136]: I0320 06:54:13.100210 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:13 crc kubenswrapper[5136]: I0320 06:54:13.521506 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ccgmd" podUID="c390cc35-103e-4376-a377-789d27e92301" containerName="registry-server" probeResult="failure" output=< Mar 20 06:54:13 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 06:54:13 crc kubenswrapper[5136]: > Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.102076 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gnspw" event={"ID":"8a3a1d9c-1870-4a43-95fb-6d07e5619acb","Type":"ContainerStarted","Data":"5664be994ef1d148ab5592ebabed18dba44cfe3e1131eec1b13e05caa0a92780"} Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.104515 5136 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjck6" event={"ID":"899bb83b-4a95-49e5-8e8f-50c309b5d5e1","Type":"ContainerStarted","Data":"061ea93c85bc00917c3ac8ea8d30b338d02a490b0ecea2ffda90681157877bc1"} Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.107394 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cc6n" event={"ID":"b5f9659e-73fb-4389-8d6e-b739dfa94d4b","Type":"ContainerStarted","Data":"96cab511116a50bfd5e8d065be228204ad81aec9753a6f27381de196294009ae"} Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.119593 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gnspw" podStartSLOduration=3.126380684 podStartE2EDuration="46.11958471s" podCreationTimestamp="2026-03-20 06:53:28 +0000 UTC" firstStartedPulling="2026-03-20 06:53:30.5870744 +0000 UTC m=+242.846385551" lastFinishedPulling="2026-03-20 06:54:13.580278426 +0000 UTC m=+285.839589577" observedRunningTime="2026-03-20 06:54:14.119050054 +0000 UTC m=+286.378361205" watchObservedRunningTime="2026-03-20 06:54:14.11958471 +0000 UTC m=+286.378895861" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.169302 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hjck6" podStartSLOduration=3.2653753070000002 podStartE2EDuration="46.169289312s" podCreationTimestamp="2026-03-20 06:53:28 +0000 UTC" firstStartedPulling="2026-03-20 06:53:30.593663138 +0000 UTC m=+242.852974289" lastFinishedPulling="2026-03-20 06:54:13.497577143 +0000 UTC m=+285.756888294" observedRunningTime="2026-03-20 06:54:14.167574505 +0000 UTC m=+286.426885656" watchObservedRunningTime="2026-03-20 06:54:14.169289312 +0000 UTC m=+286.428600463" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.171099 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-5cc6n" podStartSLOduration=3.2146032780000002 podStartE2EDuration="46.171092301s" podCreationTimestamp="2026-03-20 06:53:28 +0000 UTC" firstStartedPulling="2026-03-20 06:53:30.612281845 +0000 UTC m=+242.871592996" lastFinishedPulling="2026-03-20 06:54:13.568770868 +0000 UTC m=+285.828082019" observedRunningTime="2026-03-20 06:54:14.148920303 +0000 UTC m=+286.408231444" watchObservedRunningTime="2026-03-20 06:54:14.171092301 +0000 UTC m=+286.430403452" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.414026 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.532026 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl"] Mar 20 06:54:14 crc kubenswrapper[5136]: E0320 06:54:14.532223 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="793ba114-16f6-4ad2-bc47-daee6a819a00" containerName="oc" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.532235 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="793ba114-16f6-4ad2-bc47-daee6a819a00" containerName="oc" Mar 20 06:54:14 crc kubenswrapper[5136]: E0320 06:54:14.532246 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49b77392-c7b9-4b0b-9320-6e4fcce120d1" containerName="pruner" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.532253 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="49b77392-c7b9-4b0b-9320-6e4fcce120d1" containerName="pruner" Mar 20 06:54:14 crc kubenswrapper[5136]: E0320 06:54:14.532264 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="433e77aa-fe22-43d7-87ed-0a9219b61762" containerName="route-controller-manager" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.532272 5136 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="433e77aa-fe22-43d7-87ed-0a9219b61762" containerName="route-controller-manager" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.532367 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="433e77aa-fe22-43d7-87ed-0a9219b61762" containerName="route-controller-manager" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.532378 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="793ba114-16f6-4ad2-bc47-daee6a819a00" containerName="oc" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.532385 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="49b77392-c7b9-4b0b-9320-6e4fcce120d1" containerName="pruner" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.532726 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.535010 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.535218 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.535334 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.536148 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.536275 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.536296 5136 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.548658 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl"] Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.608484 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49b77392-c7b9-4b0b-9320-6e4fcce120d1-kube-api-access\") pod \"49b77392-c7b9-4b0b-9320-6e4fcce120d1\" (UID: \"49b77392-c7b9-4b0b-9320-6e4fcce120d1\") " Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.608543 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49b77392-c7b9-4b0b-9320-6e4fcce120d1-kubelet-dir\") pod \"49b77392-c7b9-4b0b-9320-6e4fcce120d1\" (UID: \"49b77392-c7b9-4b0b-9320-6e4fcce120d1\") " Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.608726 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49b77392-c7b9-4b0b-9320-6e4fcce120d1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "49b77392-c7b9-4b0b-9320-6e4fcce120d1" (UID: "49b77392-c7b9-4b0b-9320-6e4fcce120d1"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.608922 5136 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49b77392-c7b9-4b0b-9320-6e4fcce120d1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.615891 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49b77392-c7b9-4b0b-9320-6e4fcce120d1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "49b77392-c7b9-4b0b-9320-6e4fcce120d1" (UID: "49b77392-c7b9-4b0b-9320-6e4fcce120d1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.709529 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-config\") pod \"route-controller-manager-7d44b75469-h75kl\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.709606 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-client-ca\") pod \"route-controller-manager-7d44b75469-h75kl\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.709668 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-759tv\" (UniqueName: \"kubernetes.io/projected/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-kube-api-access-759tv\") pod \"route-controller-manager-7d44b75469-h75kl\" 
(UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.709744 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-serving-cert\") pod \"route-controller-manager-7d44b75469-h75kl\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.709784 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49b77392-c7b9-4b0b-9320-6e4fcce120d1-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.810655 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-759tv\" (UniqueName: \"kubernetes.io/projected/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-kube-api-access-759tv\") pod \"route-controller-manager-7d44b75469-h75kl\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.810745 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-serving-cert\") pod \"route-controller-manager-7d44b75469-h75kl\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.810785 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-config\") pod 
\"route-controller-manager-7d44b75469-h75kl\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.810808 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-client-ca\") pod \"route-controller-manager-7d44b75469-h75kl\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.811721 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-client-ca\") pod \"route-controller-manager-7d44b75469-h75kl\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.812025 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-config\") pod \"route-controller-manager-7d44b75469-h75kl\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.821727 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-serving-cert\") pod \"route-controller-manager-7d44b75469-h75kl\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.829098 5136 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-759tv\" (UniqueName: \"kubernetes.io/projected/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-kube-api-access-759tv\") pod \"route-controller-manager-7d44b75469-h75kl\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.843691 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" Mar 20 06:54:15 crc kubenswrapper[5136]: I0320 06:54:15.118682 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"49b77392-c7b9-4b0b-9320-6e4fcce120d1","Type":"ContainerDied","Data":"a4cf8bd8dbafa2ac753c600b98a02c2b1d9b112327de74ddfed7d07642b73e0a"} Mar 20 06:54:15 crc kubenswrapper[5136]: I0320 06:54:15.118925 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4cf8bd8dbafa2ac753c600b98a02c2b1d9b112327de74ddfed7d07642b73e0a" Mar 20 06:54:15 crc kubenswrapper[5136]: I0320 06:54:15.118792 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 06:54:15 crc kubenswrapper[5136]: I0320 06:54:15.179105 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl"] Mar 20 06:54:15 crc kubenswrapper[5136]: W0320 06:54:15.189901 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a3b5dcf_bcd1_4502_88f8_50c39af7e940.slice/crio-6a315e78b7f2d9cc8f4d33cb5e467ab336b28cd1fd4260458044c4145ecd3b51 WatchSource:0}: Error finding container 6a315e78b7f2d9cc8f4d33cb5e467ab336b28cd1fd4260458044c4145ecd3b51: Status 404 returned error can't find the container with id 6a315e78b7f2d9cc8f4d33cb5e467ab336b28cd1fd4260458044c4145ecd3b51 Mar 20 06:54:15 crc kubenswrapper[5136]: I0320 06:54:15.821787 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 06:54:15 crc kubenswrapper[5136]: I0320 06:54:15.821898 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 06:54:15 crc kubenswrapper[5136]: I0320 06:54:15.825848 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h56wl"] Mar 20 06:54:15 crc kubenswrapper[5136]: I0320 06:54:15.826093 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h56wl" podUID="1cf24582-9ee1-4a25-9293-b116d55e6465" 
containerName="registry-server" containerID="cri-o://0b9ca4c6a87080715f33e8b6d643b96e63f200c35c3176f72c095f9b86b6f8cc" gracePeriod=2 Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.125391 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" event={"ID":"6a3b5dcf-bcd1-4502-88f8-50c39af7e940","Type":"ContainerStarted","Data":"73e4750c723efe3dac2d23514f593d6e462dc0701ba1d4ac5a82de8cfb10a93f"} Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.125466 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" event={"ID":"6a3b5dcf-bcd1-4502-88f8-50c39af7e940","Type":"ContainerStarted","Data":"6a315e78b7f2d9cc8f4d33cb5e467ab336b28cd1fd4260458044c4145ecd3b51"} Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.537013 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.542590 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.544339 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.545612 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.561744 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.737620 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84671130-5991-4032-964a-01c61fefc56a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"84671130-5991-4032-964a-01c61fefc56a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.737670 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84671130-5991-4032-964a-01c61fefc56a-kube-api-access\") pod \"installer-9-crc\" (UID: \"84671130-5991-4032-964a-01c61fefc56a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.737724 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84671130-5991-4032-964a-01c61fefc56a-var-lock\") pod \"installer-9-crc\" (UID: \"84671130-5991-4032-964a-01c61fefc56a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.838847 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/84671130-5991-4032-964a-01c61fefc56a-var-lock\") pod \"installer-9-crc\" (UID: \"84671130-5991-4032-964a-01c61fefc56a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.838946 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84671130-5991-4032-964a-01c61fefc56a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"84671130-5991-4032-964a-01c61fefc56a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.838963 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84671130-5991-4032-964a-01c61fefc56a-var-lock\") pod \"installer-9-crc\" (UID: \"84671130-5991-4032-964a-01c61fefc56a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.838979 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84671130-5991-4032-964a-01c61fefc56a-kube-api-access\") pod \"installer-9-crc\" (UID: \"84671130-5991-4032-964a-01c61fefc56a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.839049 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84671130-5991-4032-964a-01c61fefc56a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"84671130-5991-4032-964a-01c61fefc56a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.857642 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84671130-5991-4032-964a-01c61fefc56a-kube-api-access\") pod \"installer-9-crc\" (UID: \"84671130-5991-4032-964a-01c61fefc56a\") " 
pod="openshift-kube-apiserver/installer-9-crc" Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.901657 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.920373 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.041404 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cf24582-9ee1-4a25-9293-b116d55e6465-catalog-content\") pod \"1cf24582-9ee1-4a25-9293-b116d55e6465\" (UID: \"1cf24582-9ee1-4a25-9293-b116d55e6465\") " Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.041793 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cf24582-9ee1-4a25-9293-b116d55e6465-utilities\") pod \"1cf24582-9ee1-4a25-9293-b116d55e6465\" (UID: \"1cf24582-9ee1-4a25-9293-b116d55e6465\") " Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.041873 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xph6j\" (UniqueName: \"kubernetes.io/projected/1cf24582-9ee1-4a25-9293-b116d55e6465-kube-api-access-xph6j\") pod \"1cf24582-9ee1-4a25-9293-b116d55e6465\" (UID: \"1cf24582-9ee1-4a25-9293-b116d55e6465\") " Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.043407 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cf24582-9ee1-4a25-9293-b116d55e6465-utilities" (OuterVolumeSpecName: "utilities") pod "1cf24582-9ee1-4a25-9293-b116d55e6465" (UID: "1cf24582-9ee1-4a25-9293-b116d55e6465"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.046227 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cf24582-9ee1-4a25-9293-b116d55e6465-kube-api-access-xph6j" (OuterVolumeSpecName: "kube-api-access-xph6j") pod "1cf24582-9ee1-4a25-9293-b116d55e6465" (UID: "1cf24582-9ee1-4a25-9293-b116d55e6465"). InnerVolumeSpecName "kube-api-access-xph6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.085708 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cf24582-9ee1-4a25-9293-b116d55e6465-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1cf24582-9ee1-4a25-9293-b116d55e6465" (UID: "1cf24582-9ee1-4a25-9293-b116d55e6465"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.134748 5136 generic.go:334] "Generic (PLEG): container finished" podID="1cf24582-9ee1-4a25-9293-b116d55e6465" containerID="0b9ca4c6a87080715f33e8b6d643b96e63f200c35c3176f72c095f9b86b6f8cc" exitCode=0 Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.134849 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h56wl" event={"ID":"1cf24582-9ee1-4a25-9293-b116d55e6465","Type":"ContainerDied","Data":"0b9ca4c6a87080715f33e8b6d643b96e63f200c35c3176f72c095f9b86b6f8cc"} Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.134917 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h56wl" event={"ID":"1cf24582-9ee1-4a25-9293-b116d55e6465","Type":"ContainerDied","Data":"e97be04ec45ad4ff430feb5809767da522725df9a1e53411e7387d687914904f"} Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.134936 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.134954 5136 scope.go:117] "RemoveContainer" containerID="0b9ca4c6a87080715f33e8b6d643b96e63f200c35c3176f72c095f9b86b6f8cc" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.134880 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.143762 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cf24582-9ee1-4a25-9293-b116d55e6465-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.143790 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xph6j\" (UniqueName: \"kubernetes.io/projected/1cf24582-9ee1-4a25-9293-b116d55e6465-kube-api-access-xph6j\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.143802 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cf24582-9ee1-4a25-9293-b116d55e6465-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.146691 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.150514 5136 scope.go:117] "RemoveContainer" containerID="4f78c42a7893dc826244650e8886950ba1351aaa09f97ac65c5ac0fadfd1ccde" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.157543 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" podStartSLOduration=11.157525504 podStartE2EDuration="11.157525504s" podCreationTimestamp="2026-03-20 06:54:06 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:54:17.152377235 +0000 UTC m=+289.411688396" watchObservedRunningTime="2026-03-20 06:54:17.157525504 +0000 UTC m=+289.416836655" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.183830 5136 scope.go:117] "RemoveContainer" containerID="c1dbd216168f7d6e0c9e9a66d4d5c116d77cc9934078fad5ae9908f479d91b50" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.197342 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h56wl"] Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.197392 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h56wl"] Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.203939 5136 scope.go:117] "RemoveContainer" containerID="0b9ca4c6a87080715f33e8b6d643b96e63f200c35c3176f72c095f9b86b6f8cc" Mar 20 06:54:17 crc kubenswrapper[5136]: E0320 06:54:17.204483 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b9ca4c6a87080715f33e8b6d643b96e63f200c35c3176f72c095f9b86b6f8cc\": container with ID starting with 0b9ca4c6a87080715f33e8b6d643b96e63f200c35c3176f72c095f9b86b6f8cc not found: ID does not exist" containerID="0b9ca4c6a87080715f33e8b6d643b96e63f200c35c3176f72c095f9b86b6f8cc" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.204549 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b9ca4c6a87080715f33e8b6d643b96e63f200c35c3176f72c095f9b86b6f8cc"} err="failed to get container status \"0b9ca4c6a87080715f33e8b6d643b96e63f200c35c3176f72c095f9b86b6f8cc\": rpc error: code = NotFound desc = could not find container \"0b9ca4c6a87080715f33e8b6d643b96e63f200c35c3176f72c095f9b86b6f8cc\": container with ID starting with 
0b9ca4c6a87080715f33e8b6d643b96e63f200c35c3176f72c095f9b86b6f8cc not found: ID does not exist" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.204598 5136 scope.go:117] "RemoveContainer" containerID="4f78c42a7893dc826244650e8886950ba1351aaa09f97ac65c5ac0fadfd1ccde" Mar 20 06:54:17 crc kubenswrapper[5136]: E0320 06:54:17.205011 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f78c42a7893dc826244650e8886950ba1351aaa09f97ac65c5ac0fadfd1ccde\": container with ID starting with 4f78c42a7893dc826244650e8886950ba1351aaa09f97ac65c5ac0fadfd1ccde not found: ID does not exist" containerID="4f78c42a7893dc826244650e8886950ba1351aaa09f97ac65c5ac0fadfd1ccde" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.205040 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f78c42a7893dc826244650e8886950ba1351aaa09f97ac65c5ac0fadfd1ccde"} err="failed to get container status \"4f78c42a7893dc826244650e8886950ba1351aaa09f97ac65c5ac0fadfd1ccde\": rpc error: code = NotFound desc = could not find container \"4f78c42a7893dc826244650e8886950ba1351aaa09f97ac65c5ac0fadfd1ccde\": container with ID starting with 4f78c42a7893dc826244650e8886950ba1351aaa09f97ac65c5ac0fadfd1ccde not found: ID does not exist" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.205061 5136 scope.go:117] "RemoveContainer" containerID="c1dbd216168f7d6e0c9e9a66d4d5c116d77cc9934078fad5ae9908f479d91b50" Mar 20 06:54:17 crc kubenswrapper[5136]: E0320 06:54:17.205459 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1dbd216168f7d6e0c9e9a66d4d5c116d77cc9934078fad5ae9908f479d91b50\": container with ID starting with c1dbd216168f7d6e0c9e9a66d4d5c116d77cc9934078fad5ae9908f479d91b50 not found: ID does not exist" containerID="c1dbd216168f7d6e0c9e9a66d4d5c116d77cc9934078fad5ae9908f479d91b50" Mar 20 06:54:17 crc 
kubenswrapper[5136]: I0320 06:54:17.205507 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1dbd216168f7d6e0c9e9a66d4d5c116d77cc9934078fad5ae9908f479d91b50"} err="failed to get container status \"c1dbd216168f7d6e0c9e9a66d4d5c116d77cc9934078fad5ae9908f479d91b50\": rpc error: code = NotFound desc = could not find container \"c1dbd216168f7d6e0c9e9a66d4d5c116d77cc9934078fad5ae9908f479d91b50\": container with ID starting with c1dbd216168f7d6e0c9e9a66d4d5c116d77cc9934078fad5ae9908f479d91b50 not found: ID does not exist" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.303722 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 20 06:54:17 crc kubenswrapper[5136]: W0320 06:54:17.310505 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod84671130_5991_4032_964a_01c61fefc56a.slice/crio-a6ea3be03aea5ddfc716d4a3eee4ac608fed37a3195883a5113f85c728e131f0 WatchSource:0}: Error finding container a6ea3be03aea5ddfc716d4a3eee4ac608fed37a3195883a5113f85c728e131f0: Status 404 returned error can't find the container with id a6ea3be03aea5ddfc716d4a3eee4ac608fed37a3195883a5113f85c728e131f0 Mar 20 06:54:18 crc kubenswrapper[5136]: I0320 06:54:18.142433 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"84671130-5991-4032-964a-01c61fefc56a","Type":"ContainerStarted","Data":"41fd895caa40c1a2b0a6165ebaa7bf7c883b138febb47e401574b2b95cc9077c"} Mar 20 06:54:18 crc kubenswrapper[5136]: I0320 06:54:18.142835 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"84671130-5991-4032-964a-01c61fefc56a","Type":"ContainerStarted","Data":"a6ea3be03aea5ddfc716d4a3eee4ac608fed37a3195883a5113f85c728e131f0"} Mar 20 06:54:18 crc kubenswrapper[5136]: I0320 06:54:18.410441 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="1cf24582-9ee1-4a25-9293-b116d55e6465" path="/var/lib/kubelet/pods/1cf24582-9ee1-4a25-9293-b116d55e6465/volumes" Mar 20 06:54:18 crc kubenswrapper[5136]: I0320 06:54:18.959434 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:54:18 crc kubenswrapper[5136]: I0320 06:54:18.959498 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:54:19 crc kubenswrapper[5136]: I0320 06:54:19.025530 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:54:19 crc kubenswrapper[5136]: I0320 06:54:19.052452 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:54:19 crc kubenswrapper[5136]: I0320 06:54:19.052539 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:54:19 crc kubenswrapper[5136]: I0320 06:54:19.101663 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:54:19 crc kubenswrapper[5136]: I0320 06:54:19.173236 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.173209477 podStartE2EDuration="3.173209477s" podCreationTimestamp="2026-03-20 06:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:54:19.16538419 +0000 UTC m=+291.424695351" watchObservedRunningTime="2026-03-20 06:54:19.173209477 +0000 UTC m=+291.432520648" Mar 20 06:54:19 crc kubenswrapper[5136]: I0320 06:54:19.202804 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:54:19 crc kubenswrapper[5136]: I0320 06:54:19.213707 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:54:19 crc kubenswrapper[5136]: I0320 06:54:19.320617 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:54:19 crc kubenswrapper[5136]: I0320 06:54:19.321024 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:54:19 crc kubenswrapper[5136]: I0320 06:54:19.356940 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:54:19 crc kubenswrapper[5136]: I0320 06:54:19.466293 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tk985" Mar 20 06:54:19 crc kubenswrapper[5136]: I0320 06:54:19.466361 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tk985" Mar 20 06:54:19 crc kubenswrapper[5136]: I0320 06:54:19.508602 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tk985" Mar 20 06:54:20 crc kubenswrapper[5136]: I0320 06:54:20.201570 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:54:20 crc kubenswrapper[5136]: I0320 06:54:20.204993 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tk985" Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.127891 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:54:22 crc kubenswrapper[5136]: 
I0320 06:54:22.183936 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.229765 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tk985"] Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.230063 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tk985" podUID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" containerName="registry-server" containerID="cri-o://633b7423fd9f273ea3d2aedb0b1fdd8feecc5709213ba31185f081ae3bd70b0c" gracePeriod=2 Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.428406 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5cc6n"] Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.429030 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5cc6n" podUID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" containerName="registry-server" containerID="cri-o://96cab511116a50bfd5e8d065be228204ad81aec9753a6f27381de196294009ae" gracePeriod=2 Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.532866 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.587470 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.734809 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tk985" Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.825647 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-utilities\") pod \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\" (UID: \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\") " Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.825722 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpmf4\" (UniqueName: \"kubernetes.io/projected/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-kube-api-access-qpmf4\") pod \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\" (UID: \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\") " Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.825734 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.825806 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-catalog-content\") pod \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\" (UID: \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\") " Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.826628 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-utilities" (OuterVolumeSpecName: "utilities") pod "0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" (UID: "0ecf0c0d-35e3-402c-ac3a-60bb2686de5e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.830980 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-kube-api-access-qpmf4" (OuterVolumeSpecName: "kube-api-access-qpmf4") pod "0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" (UID: "0ecf0c0d-35e3-402c-ac3a-60bb2686de5e"). InnerVolumeSpecName "kube-api-access-qpmf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.878076 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" (UID: "0ecf0c0d-35e3-402c-ac3a-60bb2686de5e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.927487 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-catalog-content\") pod \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\" (UID: \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\") " Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.927610 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24rh4\" (UniqueName: \"kubernetes.io/projected/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-kube-api-access-24rh4\") pod \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\" (UID: \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\") " Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.927650 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-utilities\") pod \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\" (UID: 
\"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\") " Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.927915 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.927934 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.927944 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpmf4\" (UniqueName: \"kubernetes.io/projected/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-kube-api-access-qpmf4\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.929407 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-utilities" (OuterVolumeSpecName: "utilities") pod "b5f9659e-73fb-4389-8d6e-b739dfa94d4b" (UID: "b5f9659e-73fb-4389-8d6e-b739dfa94d4b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.930417 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-kube-api-access-24rh4" (OuterVolumeSpecName: "kube-api-access-24rh4") pod "b5f9659e-73fb-4389-8d6e-b739dfa94d4b" (UID: "b5f9659e-73fb-4389-8d6e-b739dfa94d4b"). InnerVolumeSpecName "kube-api-access-24rh4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.978038 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5f9659e-73fb-4389-8d6e-b739dfa94d4b" (UID: "b5f9659e-73fb-4389-8d6e-b739dfa94d4b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.028982 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.029191 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.029261 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24rh4\" (UniqueName: \"kubernetes.io/projected/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-kube-api-access-24rh4\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.179259 5136 generic.go:334] "Generic (PLEG): container finished" podID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" containerID="96cab511116a50bfd5e8d065be228204ad81aec9753a6f27381de196294009ae" exitCode=0 Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.179294 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.179325 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cc6n" event={"ID":"b5f9659e-73fb-4389-8d6e-b739dfa94d4b","Type":"ContainerDied","Data":"96cab511116a50bfd5e8d065be228204ad81aec9753a6f27381de196294009ae"} Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.180868 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cc6n" event={"ID":"b5f9659e-73fb-4389-8d6e-b739dfa94d4b","Type":"ContainerDied","Data":"e01d0fc1f5e7b135553c58dae30642386c72820cdb0d76abc481f734a7517ae7"} Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.180896 5136 scope.go:117] "RemoveContainer" containerID="96cab511116a50bfd5e8d065be228204ad81aec9753a6f27381de196294009ae" Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.186600 5136 generic.go:334] "Generic (PLEG): container finished" podID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" containerID="633b7423fd9f273ea3d2aedb0b1fdd8feecc5709213ba31185f081ae3bd70b0c" exitCode=0 Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.186645 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tk985" Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.186727 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tk985" event={"ID":"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e","Type":"ContainerDied","Data":"633b7423fd9f273ea3d2aedb0b1fdd8feecc5709213ba31185f081ae3bd70b0c"} Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.186759 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tk985" event={"ID":"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e","Type":"ContainerDied","Data":"0fb7591931908d54cba79b40ec6231538415c3ba45eb54605b5ec4b5dd387ac9"} Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.207964 5136 scope.go:117] "RemoveContainer" containerID="6781c7e9d7eb5e0eb62174fa991e0172bb9a392f4a1873a270da9a5f802faa91" Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.234873 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5cc6n"] Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.245017 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5cc6n"] Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.246209 5136 scope.go:117] "RemoveContainer" containerID="cc9956c9a552b139b5e5774154b7b9855f883623477aaf77275e4bbc83c939df" Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.249133 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tk985"] Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.253015 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tk985"] Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.262738 5136 scope.go:117] "RemoveContainer" containerID="96cab511116a50bfd5e8d065be228204ad81aec9753a6f27381de196294009ae" Mar 20 06:54:23 crc kubenswrapper[5136]: E0320 
06:54:23.263015 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96cab511116a50bfd5e8d065be228204ad81aec9753a6f27381de196294009ae\": container with ID starting with 96cab511116a50bfd5e8d065be228204ad81aec9753a6f27381de196294009ae not found: ID does not exist" containerID="96cab511116a50bfd5e8d065be228204ad81aec9753a6f27381de196294009ae" Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.263043 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96cab511116a50bfd5e8d065be228204ad81aec9753a6f27381de196294009ae"} err="failed to get container status \"96cab511116a50bfd5e8d065be228204ad81aec9753a6f27381de196294009ae\": rpc error: code = NotFound desc = could not find container \"96cab511116a50bfd5e8d065be228204ad81aec9753a6f27381de196294009ae\": container with ID starting with 96cab511116a50bfd5e8d065be228204ad81aec9753a6f27381de196294009ae not found: ID does not exist" Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.263061 5136 scope.go:117] "RemoveContainer" containerID="6781c7e9d7eb5e0eb62174fa991e0172bb9a392f4a1873a270da9a5f802faa91" Mar 20 06:54:23 crc kubenswrapper[5136]: E0320 06:54:23.263402 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6781c7e9d7eb5e0eb62174fa991e0172bb9a392f4a1873a270da9a5f802faa91\": container with ID starting with 6781c7e9d7eb5e0eb62174fa991e0172bb9a392f4a1873a270da9a5f802faa91 not found: ID does not exist" containerID="6781c7e9d7eb5e0eb62174fa991e0172bb9a392f4a1873a270da9a5f802faa91" Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.263435 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6781c7e9d7eb5e0eb62174fa991e0172bb9a392f4a1873a270da9a5f802faa91"} err="failed to get container status \"6781c7e9d7eb5e0eb62174fa991e0172bb9a392f4a1873a270da9a5f802faa91\": rpc 
error: code = NotFound desc = could not find container \"6781c7e9d7eb5e0eb62174fa991e0172bb9a392f4a1873a270da9a5f802faa91\": container with ID starting with 6781c7e9d7eb5e0eb62174fa991e0172bb9a392f4a1873a270da9a5f802faa91 not found: ID does not exist" Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.263452 5136 scope.go:117] "RemoveContainer" containerID="cc9956c9a552b139b5e5774154b7b9855f883623477aaf77275e4bbc83c939df" Mar 20 06:54:23 crc kubenswrapper[5136]: E0320 06:54:23.263681 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc9956c9a552b139b5e5774154b7b9855f883623477aaf77275e4bbc83c939df\": container with ID starting with cc9956c9a552b139b5e5774154b7b9855f883623477aaf77275e4bbc83c939df not found: ID does not exist" containerID="cc9956c9a552b139b5e5774154b7b9855f883623477aaf77275e4bbc83c939df" Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.263704 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc9956c9a552b139b5e5774154b7b9855f883623477aaf77275e4bbc83c939df"} err="failed to get container status \"cc9956c9a552b139b5e5774154b7b9855f883623477aaf77275e4bbc83c939df\": rpc error: code = NotFound desc = could not find container \"cc9956c9a552b139b5e5774154b7b9855f883623477aaf77275e4bbc83c939df\": container with ID starting with cc9956c9a552b139b5e5774154b7b9855f883623477aaf77275e4bbc83c939df not found: ID does not exist" Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.263720 5136 scope.go:117] "RemoveContainer" containerID="633b7423fd9f273ea3d2aedb0b1fdd8feecc5709213ba31185f081ae3bd70b0c" Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.277675 5136 scope.go:117] "RemoveContainer" containerID="0a3a9f250b57d97a9f680ce6f5e879f1dd2181314e08b745f63267fbb724073c" Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.296460 5136 scope.go:117] "RemoveContainer" 
containerID="9ca3fbec27544afa777bf540af670781f133c1d65deee4b039aa27a845d2bac3" Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.312223 5136 scope.go:117] "RemoveContainer" containerID="633b7423fd9f273ea3d2aedb0b1fdd8feecc5709213ba31185f081ae3bd70b0c" Mar 20 06:54:23 crc kubenswrapper[5136]: E0320 06:54:23.312613 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"633b7423fd9f273ea3d2aedb0b1fdd8feecc5709213ba31185f081ae3bd70b0c\": container with ID starting with 633b7423fd9f273ea3d2aedb0b1fdd8feecc5709213ba31185f081ae3bd70b0c not found: ID does not exist" containerID="633b7423fd9f273ea3d2aedb0b1fdd8feecc5709213ba31185f081ae3bd70b0c" Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.312658 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"633b7423fd9f273ea3d2aedb0b1fdd8feecc5709213ba31185f081ae3bd70b0c"} err="failed to get container status \"633b7423fd9f273ea3d2aedb0b1fdd8feecc5709213ba31185f081ae3bd70b0c\": rpc error: code = NotFound desc = could not find container \"633b7423fd9f273ea3d2aedb0b1fdd8feecc5709213ba31185f081ae3bd70b0c\": container with ID starting with 633b7423fd9f273ea3d2aedb0b1fdd8feecc5709213ba31185f081ae3bd70b0c not found: ID does not exist" Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.312685 5136 scope.go:117] "RemoveContainer" containerID="0a3a9f250b57d97a9f680ce6f5e879f1dd2181314e08b745f63267fbb724073c" Mar 20 06:54:23 crc kubenswrapper[5136]: E0320 06:54:23.313103 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a3a9f250b57d97a9f680ce6f5e879f1dd2181314e08b745f63267fbb724073c\": container with ID starting with 0a3a9f250b57d97a9f680ce6f5e879f1dd2181314e08b745f63267fbb724073c not found: ID does not exist" containerID="0a3a9f250b57d97a9f680ce6f5e879f1dd2181314e08b745f63267fbb724073c" Mar 20 06:54:23 crc 
kubenswrapper[5136]: I0320 06:54:23.313149 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a3a9f250b57d97a9f680ce6f5e879f1dd2181314e08b745f63267fbb724073c"} err="failed to get container status \"0a3a9f250b57d97a9f680ce6f5e879f1dd2181314e08b745f63267fbb724073c\": rpc error: code = NotFound desc = could not find container \"0a3a9f250b57d97a9f680ce6f5e879f1dd2181314e08b745f63267fbb724073c\": container with ID starting with 0a3a9f250b57d97a9f680ce6f5e879f1dd2181314e08b745f63267fbb724073c not found: ID does not exist" Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.313177 5136 scope.go:117] "RemoveContainer" containerID="9ca3fbec27544afa777bf540af670781f133c1d65deee4b039aa27a845d2bac3" Mar 20 06:54:23 crc kubenswrapper[5136]: E0320 06:54:23.313471 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ca3fbec27544afa777bf540af670781f133c1d65deee4b039aa27a845d2bac3\": container with ID starting with 9ca3fbec27544afa777bf540af670781f133c1d65deee4b039aa27a845d2bac3 not found: ID does not exist" containerID="9ca3fbec27544afa777bf540af670781f133c1d65deee4b039aa27a845d2bac3" Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.313496 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ca3fbec27544afa777bf540af670781f133c1d65deee4b039aa27a845d2bac3"} err="failed to get container status \"9ca3fbec27544afa777bf540af670781f133c1d65deee4b039aa27a845d2bac3\": rpc error: code = NotFound desc = could not find container \"9ca3fbec27544afa777bf540af670781f133c1d65deee4b039aa27a845d2bac3\": container with ID starting with 9ca3fbec27544afa777bf540af670781f133c1d65deee4b039aa27a845d2bac3 not found: ID does not exist" Mar 20 06:54:24 crc kubenswrapper[5136]: I0320 06:54:24.417375 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" 
path="/var/lib/kubelet/pods/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e/volumes" Mar 20 06:54:24 crc kubenswrapper[5136]: I0320 06:54:24.419733 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" path="/var/lib/kubelet/pods/b5f9659e-73fb-4389-8d6e-b739dfa94d4b/volumes" Mar 20 06:54:24 crc kubenswrapper[5136]: I0320 06:54:24.832965 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ccgmd"] Mar 20 06:54:24 crc kubenswrapper[5136]: I0320 06:54:24.833308 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ccgmd" podUID="c390cc35-103e-4376-a377-789d27e92301" containerName="registry-server" containerID="cri-o://2d4dba2ff1549dc6f8aaa33906c467cbc9d1690e7ac80aacdc997a2f55cc4025" gracePeriod=2 Mar 20 06:54:25 crc kubenswrapper[5136]: I0320 06:54:25.207956 5136 generic.go:334] "Generic (PLEG): container finished" podID="c390cc35-103e-4376-a377-789d27e92301" containerID="2d4dba2ff1549dc6f8aaa33906c467cbc9d1690e7ac80aacdc997a2f55cc4025" exitCode=0 Mar 20 06:54:25 crc kubenswrapper[5136]: I0320 06:54:25.208025 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccgmd" event={"ID":"c390cc35-103e-4376-a377-789d27e92301","Type":"ContainerDied","Data":"2d4dba2ff1549dc6f8aaa33906c467cbc9d1690e7ac80aacdc997a2f55cc4025"} Mar 20 06:54:25 crc kubenswrapper[5136]: I0320 06:54:25.461169 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" podUID="1a566282-9a27-4172-b5ba-574e0179cfc4" containerName="oauth-openshift" containerID="cri-o://123a528b42dbbfbb9f33379a6ed4b98d0ff49520ca7c349008a22be8c0d90857" gracePeriod=15 Mar 20 06:54:25 crc kubenswrapper[5136]: I0320 06:54:25.968480 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:54:25 crc kubenswrapper[5136]: I0320 06:54:25.972424 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.074581 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-error\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.074639 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-session\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.074676 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c390cc35-103e-4376-a377-789d27e92301-utilities\") pod \"c390cc35-103e-4376-a377-789d27e92301\" (UID: \"c390cc35-103e-4376-a377-789d27e92301\") " Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.074693 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwfcd\" (UniqueName: \"kubernetes.io/projected/1a566282-9a27-4172-b5ba-574e0179cfc4-kube-api-access-zwfcd\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.075739 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c390cc35-103e-4376-a377-789d27e92301-utilities" (OuterVolumeSpecName: 
"utilities") pod "c390cc35-103e-4376-a377-789d27e92301" (UID: "c390cc35-103e-4376-a377-789d27e92301"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.075793 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-audit-policies\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.075882 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-login\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.075934 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-idp-0-file-data\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.075958 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-router-certs\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.076329 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod 
"1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.076369 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1a566282-9a27-4172-b5ba-574e0179cfc4-audit-dir\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.076381 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a566282-9a27-4172-b5ba-574e0179cfc4-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.076405 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c390cc35-103e-4376-a377-789d27e92301-catalog-content\") pod \"c390cc35-103e-4376-a377-789d27e92301\" (UID: \"c390cc35-103e-4376-a377-789d27e92301\") " Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.076460 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-service-ca\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.076498 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-provider-selection\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.077025 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.077099 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-ocp-branding-template\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.077380 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-serving-cert\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.077442 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-trusted-ca-bundle\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.077473 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-cliconfig\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.077519 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8zcs\" (UniqueName: \"kubernetes.io/projected/c390cc35-103e-4376-a377-789d27e92301-kube-api-access-x8zcs\") pod \"c390cc35-103e-4376-a377-789d27e92301\" (UID: \"c390cc35-103e-4376-a377-789d27e92301\") " Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.078336 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.078571 5136 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.078587 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c390cc35-103e-4376-a377-789d27e92301-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.078597 5136 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1a566282-9a27-4172-b5ba-574e0179cfc4-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.078606 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.078617 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.078874 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.081601 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.082064 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c390cc35-103e-4376-a377-789d27e92301-kube-api-access-x8zcs" (OuterVolumeSpecName: "kube-api-access-x8zcs") pod "c390cc35-103e-4376-a377-789d27e92301" (UID: "c390cc35-103e-4376-a377-789d27e92301"). InnerVolumeSpecName "kube-api-access-x8zcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.082518 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a566282-9a27-4172-b5ba-574e0179cfc4-kube-api-access-zwfcd" (OuterVolumeSpecName: "kube-api-access-zwfcd") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "kube-api-access-zwfcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.083173 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.082697 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.083004 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.083434 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.083567 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.083719 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.084542 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.179693 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.179742 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.179757 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwfcd\" (UniqueName: \"kubernetes.io/projected/1a566282-9a27-4172-b5ba-574e0179cfc4-kube-api-access-zwfcd\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.179769 5136 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.179782 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.179796 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.179831 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.179847 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.179861 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.179873 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.179885 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8zcs\" (UniqueName: \"kubernetes.io/projected/c390cc35-103e-4376-a377-789d27e92301-kube-api-access-x8zcs\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.217256 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.217175 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccgmd" event={"ID":"c390cc35-103e-4376-a377-789d27e92301","Type":"ContainerDied","Data":"6945d5a08a47f91726889cbb1087073d5ea5c7ff5da1680cfa09cb30c8ba3897"} Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.217424 5136 scope.go:117] "RemoveContainer" containerID="2d4dba2ff1549dc6f8aaa33906c467cbc9d1690e7ac80aacdc997a2f55cc4025" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.222350 5136 generic.go:334] "Generic (PLEG): container finished" podID="1a566282-9a27-4172-b5ba-574e0179cfc4" containerID="123a528b42dbbfbb9f33379a6ed4b98d0ff49520ca7c349008a22be8c0d90857" exitCode=0 Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.222384 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" event={"ID":"1a566282-9a27-4172-b5ba-574e0179cfc4","Type":"ContainerDied","Data":"123a528b42dbbfbb9f33379a6ed4b98d0ff49520ca7c349008a22be8c0d90857"} Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.222415 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.222420 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" event={"ID":"1a566282-9a27-4172-b5ba-574e0179cfc4","Type":"ContainerDied","Data":"df39f87d48bdc4108cfbbd23c050e3dcecc77d5d9cf9eff9e81e1a0106f177c3"} Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.224790 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c390cc35-103e-4376-a377-789d27e92301-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c390cc35-103e-4376-a377-789d27e92301" (UID: "c390cc35-103e-4376-a377-789d27e92301"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.245884 5136 scope.go:117] "RemoveContainer" containerID="f46b7933668706885051cf2336daf80ae5d49b441259e450e3a49c583c6aa84a" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.268758 5136 scope.go:117] "RemoveContainer" containerID="881e29f20587338b4b26358412017a32346fc442b6e12fb3a66b43ae0eca1b2a" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.271902 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6s42p"] Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.277582 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6s42p"] Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.280799 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c390cc35-103e-4376-a377-789d27e92301-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.291954 5136 scope.go:117] "RemoveContainer" 
containerID="123a528b42dbbfbb9f33379a6ed4b98d0ff49520ca7c349008a22be8c0d90857" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.313685 5136 scope.go:117] "RemoveContainer" containerID="123a528b42dbbfbb9f33379a6ed4b98d0ff49520ca7c349008a22be8c0d90857" Mar 20 06:54:26 crc kubenswrapper[5136]: E0320 06:54:26.314391 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"123a528b42dbbfbb9f33379a6ed4b98d0ff49520ca7c349008a22be8c0d90857\": container with ID starting with 123a528b42dbbfbb9f33379a6ed4b98d0ff49520ca7c349008a22be8c0d90857 not found: ID does not exist" containerID="123a528b42dbbfbb9f33379a6ed4b98d0ff49520ca7c349008a22be8c0d90857" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.314426 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"123a528b42dbbfbb9f33379a6ed4b98d0ff49520ca7c349008a22be8c0d90857"} err="failed to get container status \"123a528b42dbbfbb9f33379a6ed4b98d0ff49520ca7c349008a22be8c0d90857\": rpc error: code = NotFound desc = could not find container \"123a528b42dbbfbb9f33379a6ed4b98d0ff49520ca7c349008a22be8c0d90857\": container with ID starting with 123a528b42dbbfbb9f33379a6ed4b98d0ff49520ca7c349008a22be8c0d90857 not found: ID does not exist" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.404295 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a566282-9a27-4172-b5ba-574e0179cfc4" path="/var/lib/kubelet/pods/1a566282-9a27-4172-b5ba-574e0179cfc4/volumes" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.543794 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ccgmd"] Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.548211 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ccgmd"] Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.696489 5136 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f"] Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.696922 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" podUID="cada42b5-7a5d-47d5-84e7-6c5612db1132" containerName="controller-manager" containerID="cri-o://489bdb392a45bad16e501fa0a0fb2098e4fc47999ee7306501643766f25cd617" gracePeriod=30 Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.717793 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl"] Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.718097 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" podUID="6a3b5dcf-bcd1-4502-88f8-50c39af7e940" containerName="route-controller-manager" containerID="cri-o://73e4750c723efe3dac2d23514f593d6e462dc0701ba1d4ac5a82de8cfb10a93f" gracePeriod=30 Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.224042 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.228055 5136 generic.go:334] "Generic (PLEG): container finished" podID="cada42b5-7a5d-47d5-84e7-6c5612db1132" containerID="489bdb392a45bad16e501fa0a0fb2098e4fc47999ee7306501643766f25cd617" exitCode=0 Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.228092 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" event={"ID":"cada42b5-7a5d-47d5-84e7-6c5612db1132","Type":"ContainerDied","Data":"489bdb392a45bad16e501fa0a0fb2098e4fc47999ee7306501643766f25cd617"} Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.231418 5136 generic.go:334] "Generic (PLEG): container finished" podID="6a3b5dcf-bcd1-4502-88f8-50c39af7e940" containerID="73e4750c723efe3dac2d23514f593d6e462dc0701ba1d4ac5a82de8cfb10a93f" exitCode=0 Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.231455 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.231461 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" event={"ID":"6a3b5dcf-bcd1-4502-88f8-50c39af7e940","Type":"ContainerDied","Data":"73e4750c723efe3dac2d23514f593d6e462dc0701ba1d4ac5a82de8cfb10a93f"} Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.231512 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" event={"ID":"6a3b5dcf-bcd1-4502-88f8-50c39af7e940","Type":"ContainerDied","Data":"6a315e78b7f2d9cc8f4d33cb5e467ab336b28cd1fd4260458044c4145ecd3b51"} Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.231535 5136 scope.go:117] "RemoveContainer" containerID="73e4750c723efe3dac2d23514f593d6e462dc0701ba1d4ac5a82de8cfb10a93f" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.246122 5136 scope.go:117] "RemoveContainer" containerID="73e4750c723efe3dac2d23514f593d6e462dc0701ba1d4ac5a82de8cfb10a93f" Mar 20 06:54:27 crc kubenswrapper[5136]: E0320 06:54:27.246558 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73e4750c723efe3dac2d23514f593d6e462dc0701ba1d4ac5a82de8cfb10a93f\": container with ID starting with 73e4750c723efe3dac2d23514f593d6e462dc0701ba1d4ac5a82de8cfb10a93f not found: ID does not exist" containerID="73e4750c723efe3dac2d23514f593d6e462dc0701ba1d4ac5a82de8cfb10a93f" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.246638 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73e4750c723efe3dac2d23514f593d6e462dc0701ba1d4ac5a82de8cfb10a93f"} err="failed to get container status \"73e4750c723efe3dac2d23514f593d6e462dc0701ba1d4ac5a82de8cfb10a93f\": rpc error: code = NotFound desc 
= could not find container \"73e4750c723efe3dac2d23514f593d6e462dc0701ba1d4ac5a82de8cfb10a93f\": container with ID starting with 73e4750c723efe3dac2d23514f593d6e462dc0701ba1d4ac5a82de8cfb10a93f not found: ID does not exist" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.298227 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-serving-cert\") pod \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.298314 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-config\") pod \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.298390 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-759tv\" (UniqueName: \"kubernetes.io/projected/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-kube-api-access-759tv\") pod \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.298416 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-client-ca\") pod \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.301639 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-client-ca" (OuterVolumeSpecName: "client-ca") pod "6a3b5dcf-bcd1-4502-88f8-50c39af7e940" (UID: "6a3b5dcf-bcd1-4502-88f8-50c39af7e940"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.309568 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-config" (OuterVolumeSpecName: "config") pod "6a3b5dcf-bcd1-4502-88f8-50c39af7e940" (UID: "6a3b5dcf-bcd1-4502-88f8-50c39af7e940"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.310012 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.310032 5136 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.311931 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-kube-api-access-759tv" (OuterVolumeSpecName: "kube-api-access-759tv") pod "6a3b5dcf-bcd1-4502-88f8-50c39af7e940" (UID: "6a3b5dcf-bcd1-4502-88f8-50c39af7e940"). InnerVolumeSpecName "kube-api-access-759tv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.315744 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.316586 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6a3b5dcf-bcd1-4502-88f8-50c39af7e940" (UID: "6a3b5dcf-bcd1-4502-88f8-50c39af7e940"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.411120 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-proxy-ca-bundles\") pod \"cada42b5-7a5d-47d5-84e7-6c5612db1132\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.411434 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-config\") pod \"cada42b5-7a5d-47d5-84e7-6c5612db1132\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.411474 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqzq5\" (UniqueName: \"kubernetes.io/projected/cada42b5-7a5d-47d5-84e7-6c5612db1132-kube-api-access-kqzq5\") pod \"cada42b5-7a5d-47d5-84e7-6c5612db1132\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.411496 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-client-ca\") pod \"cada42b5-7a5d-47d5-84e7-6c5612db1132\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.411551 
5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cada42b5-7a5d-47d5-84e7-6c5612db1132-serving-cert\") pod \"cada42b5-7a5d-47d5-84e7-6c5612db1132\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.411730 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-759tv\" (UniqueName: \"kubernetes.io/projected/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-kube-api-access-759tv\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.411742 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.412597 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "cada42b5-7a5d-47d5-84e7-6c5612db1132" (UID: "cada42b5-7a5d-47d5-84e7-6c5612db1132"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.412631 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-config" (OuterVolumeSpecName: "config") pod "cada42b5-7a5d-47d5-84e7-6c5612db1132" (UID: "cada42b5-7a5d-47d5-84e7-6c5612db1132"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.412764 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-client-ca" (OuterVolumeSpecName: "client-ca") pod "cada42b5-7a5d-47d5-84e7-6c5612db1132" (UID: "cada42b5-7a5d-47d5-84e7-6c5612db1132"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.414582 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cada42b5-7a5d-47d5-84e7-6c5612db1132-kube-api-access-kqzq5" (OuterVolumeSpecName: "kube-api-access-kqzq5") pod "cada42b5-7a5d-47d5-84e7-6c5612db1132" (UID: "cada42b5-7a5d-47d5-84e7-6c5612db1132"). InnerVolumeSpecName "kube-api-access-kqzq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.415338 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cada42b5-7a5d-47d5-84e7-6c5612db1132-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cada42b5-7a5d-47d5-84e7-6c5612db1132" (UID: "cada42b5-7a5d-47d5-84e7-6c5612db1132"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.512877 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cada42b5-7a5d-47d5-84e7-6c5612db1132-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.512918 5136 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.512934 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.512947 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqzq5\" (UniqueName: \"kubernetes.io/projected/cada42b5-7a5d-47d5-84e7-6c5612db1132-kube-api-access-kqzq5\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.512959 5136 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.560354 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl"] Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.562937 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl"] Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.236773 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" 
event={"ID":"cada42b5-7a5d-47d5-84e7-6c5612db1132","Type":"ContainerDied","Data":"3d3b4204912003a659f2d723f3606e1a424a5ae09bdc6ee3049fa1cfb5307c75"} Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.236827 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.236859 5136 scope.go:117] "RemoveContainer" containerID="489bdb392a45bad16e501fa0a0fb2098e4fc47999ee7306501643766f25cd617" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.261504 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f"] Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.263724 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f"] Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.406037 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a3b5dcf-bcd1-4502-88f8-50c39af7e940" path="/var/lib/kubelet/pods/6a3b5dcf-bcd1-4502-88f8-50c39af7e940/volumes" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.406510 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c390cc35-103e-4376-a377-789d27e92301" path="/var/lib/kubelet/pods/c390cc35-103e-4376-a377-789d27e92301/volumes" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.407267 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cada42b5-7a5d-47d5-84e7-6c5612db1132" path="/var/lib/kubelet/pods/cada42b5-7a5d-47d5-84e7-6c5612db1132/volumes" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.542812 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-588cbf568d-gdtth"] Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543034 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" containerName="extract-utilities" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543045 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" containerName="extract-utilities" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543097 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" containerName="registry-server" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543104 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" containerName="registry-server" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543112 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a3b5dcf-bcd1-4502-88f8-50c39af7e940" containerName="route-controller-manager" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543120 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a3b5dcf-bcd1-4502-88f8-50c39af7e940" containerName="route-controller-manager" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543128 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c390cc35-103e-4376-a377-789d27e92301" containerName="extract-content" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543134 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c390cc35-103e-4376-a377-789d27e92301" containerName="extract-content" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543145 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cf24582-9ee1-4a25-9293-b116d55e6465" containerName="extract-utilities" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543151 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cf24582-9ee1-4a25-9293-b116d55e6465" containerName="extract-utilities" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543159 5136 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="c390cc35-103e-4376-a377-789d27e92301" containerName="extract-utilities" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543165 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c390cc35-103e-4376-a377-789d27e92301" containerName="extract-utilities" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543173 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cf24582-9ee1-4a25-9293-b116d55e6465" containerName="registry-server" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543178 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cf24582-9ee1-4a25-9293-b116d55e6465" containerName="registry-server" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543186 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cada42b5-7a5d-47d5-84e7-6c5612db1132" containerName="controller-manager" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543191 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="cada42b5-7a5d-47d5-84e7-6c5612db1132" containerName="controller-manager" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543199 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" containerName="registry-server" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543204 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" containerName="registry-server" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543218 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a566282-9a27-4172-b5ba-574e0179cfc4" containerName="oauth-openshift" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543225 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a566282-9a27-4172-b5ba-574e0179cfc4" containerName="oauth-openshift" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543235 5136 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" containerName="extract-content" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543241 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" containerName="extract-content" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543247 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" containerName="extract-content" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543252 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" containerName="extract-content" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543261 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" containerName="extract-utilities" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543268 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" containerName="extract-utilities" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543275 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c390cc35-103e-4376-a377-789d27e92301" containerName="registry-server" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543280 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c390cc35-103e-4376-a377-789d27e92301" containerName="registry-server" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543290 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cf24582-9ee1-4a25-9293-b116d55e6465" containerName="extract-content" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543295 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cf24582-9ee1-4a25-9293-b116d55e6465" containerName="extract-content" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543378 5136 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" containerName="registry-server" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543387 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="cada42b5-7a5d-47d5-84e7-6c5612db1132" containerName="controller-manager" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543395 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" containerName="registry-server" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543405 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a566282-9a27-4172-b5ba-574e0179cfc4" containerName="oauth-openshift" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543412 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c390cc35-103e-4376-a377-789d27e92301" containerName="registry-server" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543419 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a3b5dcf-bcd1-4502-88f8-50c39af7e940" containerName="route-controller-manager" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543427 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cf24582-9ee1-4a25-9293-b116d55e6465" containerName="registry-server" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543779 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.545653 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.547634 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd"] Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.548307 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.552964 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.553087 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.553394 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.554158 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.554270 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.554659 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.554692 5136 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.554913 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.555026 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.555108 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.555307 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.573587 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.580000 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-588cbf568d-gdtth"] Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.590425 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd"] Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.626949 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-proxy-ca-bundles\") pod \"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.626998 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-client-ca\") pod \"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.627022 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-config\") pod \"route-controller-manager-55d77fd856-4bsxd\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.627045 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdbfl\" (UniqueName: \"kubernetes.io/projected/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-kube-api-access-jdbfl\") pod \"route-controller-manager-55d77fd856-4bsxd\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.627101 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk6g4\" (UniqueName: \"kubernetes.io/projected/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-kube-api-access-kk6g4\") pod \"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.627130 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-serving-cert\") pod 
\"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.627154 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-client-ca\") pod \"route-controller-manager-55d77fd856-4bsxd\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.627178 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-config\") pod \"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.627219 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-serving-cert\") pod \"route-controller-manager-55d77fd856-4bsxd\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.727990 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-serving-cert\") pod \"route-controller-manager-55d77fd856-4bsxd\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.728067 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-proxy-ca-bundles\") pod \"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.728090 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-client-ca\") pod \"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.728108 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-config\") pod \"route-controller-manager-55d77fd856-4bsxd\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.728124 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdbfl\" (UniqueName: \"kubernetes.io/projected/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-kube-api-access-jdbfl\") pod \"route-controller-manager-55d77fd856-4bsxd\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.728174 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk6g4\" (UniqueName: \"kubernetes.io/projected/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-kube-api-access-kk6g4\") pod \"controller-manager-588cbf568d-gdtth\" (UID: 
\"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.728206 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-serving-cert\") pod \"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.728274 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-client-ca\") pod \"route-controller-manager-55d77fd856-4bsxd\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.728389 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-config\") pod \"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.729464 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-client-ca\") pod \"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.729524 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-proxy-ca-bundles\") pod \"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.729690 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-config\") pod \"route-controller-manager-55d77fd856-4bsxd\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.730424 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-config\") pod \"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.731516 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-client-ca\") pod \"route-controller-manager-55d77fd856-4bsxd\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.734455 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-serving-cert\") pod \"route-controller-manager-55d77fd856-4bsxd\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.736349 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-serving-cert\") pod \"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.743275 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdbfl\" (UniqueName: \"kubernetes.io/projected/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-kube-api-access-jdbfl\") pod \"route-controller-manager-55d77fd856-4bsxd\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.743478 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk6g4\" (UniqueName: \"kubernetes.io/projected/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-kube-api-access-kk6g4\") pod \"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.871962 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.895174 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:29 crc kubenswrapper[5136]: I0320 06:54:29.124961 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd"] Mar 20 06:54:29 crc kubenswrapper[5136]: W0320 06:54:29.131944 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30eefe16_e27d_48ba_8ddc_6323d5ef7dff.slice/crio-3af454d0d61bd02a5123db5d460f020a2c15ffaa85dc6d97ea18da27415da35d WatchSource:0}: Error finding container 3af454d0d61bd02a5123db5d460f020a2c15ffaa85dc6d97ea18da27415da35d: Status 404 returned error can't find the container with id 3af454d0d61bd02a5123db5d460f020a2c15ffaa85dc6d97ea18da27415da35d Mar 20 06:54:29 crc kubenswrapper[5136]: I0320 06:54:29.243790 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" event={"ID":"30eefe16-e27d-48ba-8ddc-6323d5ef7dff","Type":"ContainerStarted","Data":"247895d69dd7a770faa41409cbbb9fc23dacb27b7df4bfb3165bbe0b5764b9b7"} Mar 20 06:54:29 crc kubenswrapper[5136]: I0320 06:54:29.243842 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" event={"ID":"30eefe16-e27d-48ba-8ddc-6323d5ef7dff","Type":"ContainerStarted","Data":"3af454d0d61bd02a5123db5d460f020a2c15ffaa85dc6d97ea18da27415da35d"} Mar 20 06:54:29 crc kubenswrapper[5136]: I0320 06:54:29.244598 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:29 crc kubenswrapper[5136]: I0320 06:54:29.246020 5136 patch_prober.go:28] interesting pod/route-controller-manager-55d77fd856-4bsxd container/route-controller-manager namespace/openshift-route-controller-manager: 
Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body= Mar 20 06:54:29 crc kubenswrapper[5136]: I0320 06:54:29.246054 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" podUID="30eefe16-e27d-48ba-8ddc-6323d5ef7dff" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" Mar 20 06:54:29 crc kubenswrapper[5136]: I0320 06:54:29.264098 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" podStartSLOduration=3.264078601 podStartE2EDuration="3.264078601s" podCreationTimestamp="2026-03-20 06:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:54:29.259063627 +0000 UTC m=+301.518374778" watchObservedRunningTime="2026-03-20 06:54:29.264078601 +0000 UTC m=+301.523389762" Mar 20 06:54:29 crc kubenswrapper[5136]: I0320 06:54:29.289747 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-588cbf568d-gdtth"] Mar 20 06:54:29 crc kubenswrapper[5136]: W0320 06:54:29.293615 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4e3e9f4_ed5f_49be_b08b_7d1c98d815e6.slice/crio-3f1c88baaf30595bdea8d4726c5ba10c496fd749769105c29ab6885acc9bedde WatchSource:0}: Error finding container 3f1c88baaf30595bdea8d4726c5ba10c496fd749769105c29ab6885acc9bedde: Status 404 returned error can't find the container with id 3f1c88baaf30595bdea8d4726c5ba10c496fd749769105c29ab6885acc9bedde Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.254099 5136 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" event={"ID":"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6","Type":"ContainerStarted","Data":"9d92d33e89d7823c6e22b13fce888286155c297935c26ab8ad0784bbe6a3947f"} Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.254562 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" event={"ID":"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6","Type":"ContainerStarted","Data":"3f1c88baaf30595bdea8d4726c5ba10c496fd749769105c29ab6885acc9bedde"} Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.259261 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.280176 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" podStartSLOduration=4.280146897 podStartE2EDuration="4.280146897s" podCreationTimestamp="2026-03-20 06:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:54:30.274921786 +0000 UTC m=+302.534232997" watchObservedRunningTime="2026-03-20 06:54:30.280146897 +0000 UTC m=+302.539458078" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.547104 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-8449b79ffb-pfnv9"] Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.548931 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.560746 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.563198 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.563564 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.563222 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.564114 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.565081 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.565207 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.565424 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.566312 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.568044 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 20 06:54:30 
crc kubenswrapper[5136]: I0320 06:54:30.568559 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.568598 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.576584 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.590201 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.598273 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-8449b79ffb-pfnv9"] Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.600532 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.651348 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.651408 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdv9n\" (UniqueName: \"kubernetes.io/projected/57f31029-60e4-4bcb-a75a-c88030d19563-kube-api-access-gdv9n\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " 
pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.651440 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-user-template-error\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.651485 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.651506 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.651591 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-user-template-login\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.651714 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.651773 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/57f31029-60e4-4bcb-a75a-c88030d19563-audit-policies\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.651966 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-session\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.652010 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-router-certs\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.652060 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/57f31029-60e4-4bcb-a75a-c88030d19563-audit-dir\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.652085 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.652186 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-service-ca\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.652225 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.754369 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdv9n\" (UniqueName: \"kubernetes.io/projected/57f31029-60e4-4bcb-a75a-c88030d19563-kube-api-access-gdv9n\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " 
pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.754534 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-user-template-error\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.754621 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.754710 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.754802 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-user-template-login\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.755002 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.755093 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/57f31029-60e4-4bcb-a75a-c88030d19563-audit-policies\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.755250 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-session\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.755344 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-router-certs\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.755443 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57f31029-60e4-4bcb-a75a-c88030d19563-audit-dir\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " 
pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.755530 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.755653 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57f31029-60e4-4bcb-a75a-c88030d19563-audit-dir\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.756039 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/57f31029-60e4-4bcb-a75a-c88030d19563-audit-policies\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.756867 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.757045 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-service-ca\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.757139 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.757229 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.757297 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.759187 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-service-ca\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " 
pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.761567 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-router-certs\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.761915 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.762431 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-user-template-error\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.762456 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.762494 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.762508 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-session\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.762533 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-user-template-login\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.762729 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.773696 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdv9n\" (UniqueName: \"kubernetes.io/projected/57f31029-60e4-4bcb-a75a-c88030d19563-kube-api-access-gdv9n\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " 
pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.891846 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:31 crc kubenswrapper[5136]: I0320 06:54:31.264394 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:31 crc kubenswrapper[5136]: I0320 06:54:31.275205 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:31 crc kubenswrapper[5136]: I0320 06:54:31.325882 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-8449b79ffb-pfnv9"] Mar 20 06:54:32 crc kubenswrapper[5136]: I0320 06:54:32.291999 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" event={"ID":"57f31029-60e4-4bcb-a75a-c88030d19563","Type":"ContainerStarted","Data":"0f40371f46b1b2ef90f7cc703e304d506b3ee56f33f2580a78a85415e4e90a6d"} Mar 20 06:54:32 crc kubenswrapper[5136]: I0320 06:54:32.292599 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" event={"ID":"57f31029-60e4-4bcb-a75a-c88030d19563","Type":"ContainerStarted","Data":"de1932609b7e537e1144511b2a0d2be95af97dfc2105c7beaa4aef1194ea5606"} Mar 20 06:54:32 crc kubenswrapper[5136]: I0320 06:54:32.318257 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" podStartSLOduration=32.318235305 podStartE2EDuration="32.318235305s" podCreationTimestamp="2026-03-20 06:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 
06:54:32.317305914 +0000 UTC m=+304.576617075" watchObservedRunningTime="2026-03-20 06:54:32.318235305 +0000 UTC m=+304.577546466" Mar 20 06:54:33 crc kubenswrapper[5136]: I0320 06:54:33.295956 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:33 crc kubenswrapper[5136]: I0320 06:54:33.301593 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:45 crc kubenswrapper[5136]: I0320 06:54:45.822148 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 06:54:45 crc kubenswrapper[5136]: I0320 06:54:45.822716 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 06:54:45 crc kubenswrapper[5136]: I0320 06:54:45.822767 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:54:45 crc kubenswrapper[5136]: I0320 06:54:45.823461 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 06:54:45 crc kubenswrapper[5136]: I0320 06:54:45.823525 5136 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9" gracePeriod=600 Mar 20 06:54:46 crc kubenswrapper[5136]: I0320 06:54:46.364986 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9" exitCode=0 Mar 20 06:54:46 crc kubenswrapper[5136]: I0320 06:54:46.365094 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9"} Mar 20 06:54:46 crc kubenswrapper[5136]: I0320 06:54:46.365374 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"75f86a961b5495e1a65ce80a7e52156279382dd73f0b9cba9f06cd8c4be35b13"} Mar 20 06:54:46 crc kubenswrapper[5136]: I0320 06:54:46.720399 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-588cbf568d-gdtth"] Mar 20 06:54:46 crc kubenswrapper[5136]: I0320 06:54:46.720629 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" podUID="d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6" containerName="controller-manager" containerID="cri-o://9d92d33e89d7823c6e22b13fce888286155c297935c26ab8ad0784bbe6a3947f" gracePeriod=30 Mar 20 06:54:46 crc kubenswrapper[5136]: I0320 06:54:46.809343 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd"] Mar 20 06:54:46 crc kubenswrapper[5136]: I0320 06:54:46.809540 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" podUID="30eefe16-e27d-48ba-8ddc-6323d5ef7dff" containerName="route-controller-manager" containerID="cri-o://247895d69dd7a770faa41409cbbb9fc23dacb27b7df4bfb3165bbe0b5764b9b7" gracePeriod=30 Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.271519 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.275590 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.374442 5136 generic.go:334] "Generic (PLEG): container finished" podID="30eefe16-e27d-48ba-8ddc-6323d5ef7dff" containerID="247895d69dd7a770faa41409cbbb9fc23dacb27b7df4bfb3165bbe0b5764b9b7" exitCode=0 Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.374559 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.374892 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" event={"ID":"30eefe16-e27d-48ba-8ddc-6323d5ef7dff","Type":"ContainerDied","Data":"247895d69dd7a770faa41409cbbb9fc23dacb27b7df4bfb3165bbe0b5764b9b7"} Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.374970 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" event={"ID":"30eefe16-e27d-48ba-8ddc-6323d5ef7dff","Type":"ContainerDied","Data":"3af454d0d61bd02a5123db5d460f020a2c15ffaa85dc6d97ea18da27415da35d"} Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.375003 5136 scope.go:117] "RemoveContainer" containerID="247895d69dd7a770faa41409cbbb9fc23dacb27b7df4bfb3165bbe0b5764b9b7" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.377220 5136 generic.go:334] "Generic (PLEG): container finished" podID="d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6" containerID="9d92d33e89d7823c6e22b13fce888286155c297935c26ab8ad0784bbe6a3947f" exitCode=0 Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.377258 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.377261 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" event={"ID":"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6","Type":"ContainerDied","Data":"9d92d33e89d7823c6e22b13fce888286155c297935c26ab8ad0784bbe6a3947f"} Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.377296 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" event={"ID":"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6","Type":"ContainerDied","Data":"3f1c88baaf30595bdea8d4726c5ba10c496fd749769105c29ab6885acc9bedde"} Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.392668 5136 scope.go:117] "RemoveContainer" containerID="247895d69dd7a770faa41409cbbb9fc23dacb27b7df4bfb3165bbe0b5764b9b7" Mar 20 06:54:47 crc kubenswrapper[5136]: E0320 06:54:47.393092 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"247895d69dd7a770faa41409cbbb9fc23dacb27b7df4bfb3165bbe0b5764b9b7\": container with ID starting with 247895d69dd7a770faa41409cbbb9fc23dacb27b7df4bfb3165bbe0b5764b9b7 not found: ID does not exist" containerID="247895d69dd7a770faa41409cbbb9fc23dacb27b7df4bfb3165bbe0b5764b9b7" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.393125 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"247895d69dd7a770faa41409cbbb9fc23dacb27b7df4bfb3165bbe0b5764b9b7"} err="failed to get container status \"247895d69dd7a770faa41409cbbb9fc23dacb27b7df4bfb3165bbe0b5764b9b7\": rpc error: code = NotFound desc = could not find container \"247895d69dd7a770faa41409cbbb9fc23dacb27b7df4bfb3165bbe0b5764b9b7\": container with ID starting with 247895d69dd7a770faa41409cbbb9fc23dacb27b7df4bfb3165bbe0b5764b9b7 not found: ID does 
not exist" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.393149 5136 scope.go:117] "RemoveContainer" containerID="9d92d33e89d7823c6e22b13fce888286155c297935c26ab8ad0784bbe6a3947f" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.406681 5136 scope.go:117] "RemoveContainer" containerID="9d92d33e89d7823c6e22b13fce888286155c297935c26ab8ad0784bbe6a3947f" Mar 20 06:54:47 crc kubenswrapper[5136]: E0320 06:54:47.407234 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d92d33e89d7823c6e22b13fce888286155c297935c26ab8ad0784bbe6a3947f\": container with ID starting with 9d92d33e89d7823c6e22b13fce888286155c297935c26ab8ad0784bbe6a3947f not found: ID does not exist" containerID="9d92d33e89d7823c6e22b13fce888286155c297935c26ab8ad0784bbe6a3947f" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.407279 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d92d33e89d7823c6e22b13fce888286155c297935c26ab8ad0784bbe6a3947f"} err="failed to get container status \"9d92d33e89d7823c6e22b13fce888286155c297935c26ab8ad0784bbe6a3947f\": rpc error: code = NotFound desc = could not find container \"9d92d33e89d7823c6e22b13fce888286155c297935c26ab8ad0784bbe6a3947f\": container with ID starting with 9d92d33e89d7823c6e22b13fce888286155c297935c26ab8ad0784bbe6a3947f not found: ID does not exist" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.422615 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-serving-cert\") pod \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.422662 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-client-ca\") pod \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.422685 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-config\") pod \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.422705 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-proxy-ca-bundles\") pod \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.422732 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-client-ca\") pod \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.422756 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-config\") pod \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.422773 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdbfl\" (UniqueName: \"kubernetes.io/projected/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-kube-api-access-jdbfl\") pod \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.422803 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-serving-cert\") pod \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.422877 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kk6g4\" (UniqueName: \"kubernetes.io/projected/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-kube-api-access-kk6g4\") pod \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.423657 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6" (UID: "d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.423670 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-client-ca" (OuterVolumeSpecName: "client-ca") pod "d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6" (UID: "d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.423719 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-config" (OuterVolumeSpecName: "config") pod "d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6" (UID: "d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.423851 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-config" (OuterVolumeSpecName: "config") pod "30eefe16-e27d-48ba-8ddc-6323d5ef7dff" (UID: "30eefe16-e27d-48ba-8ddc-6323d5ef7dff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.424122 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-client-ca" (OuterVolumeSpecName: "client-ca") pod "30eefe16-e27d-48ba-8ddc-6323d5ef7dff" (UID: "30eefe16-e27d-48ba-8ddc-6323d5ef7dff"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.428554 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6" (UID: "d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.429209 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-kube-api-access-kk6g4" (OuterVolumeSpecName: "kube-api-access-kk6g4") pod "d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6" (UID: "d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6"). InnerVolumeSpecName "kube-api-access-kk6g4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.434573 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-kube-api-access-jdbfl" (OuterVolumeSpecName: "kube-api-access-jdbfl") pod "30eefe16-e27d-48ba-8ddc-6323d5ef7dff" (UID: "30eefe16-e27d-48ba-8ddc-6323d5ef7dff"). InnerVolumeSpecName "kube-api-access-jdbfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.435134 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "30eefe16-e27d-48ba-8ddc-6323d5ef7dff" (UID: "30eefe16-e27d-48ba-8ddc-6323d5ef7dff"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.524268 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.524298 5136 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.524307 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.524316 5136 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 20 
06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.524325 5136 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.524333 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.524341 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdbfl\" (UniqueName: \"kubernetes.io/projected/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-kube-api-access-jdbfl\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.524349 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.524392 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kk6g4\" (UniqueName: \"kubernetes.io/projected/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-kube-api-access-kk6g4\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.706565 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-588cbf568d-gdtth"] Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.711174 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-588cbf568d-gdtth"] Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.717964 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd"] Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.720886 5136 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd"] Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.403974 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30eefe16-e27d-48ba-8ddc-6323d5ef7dff" path="/var/lib/kubelet/pods/30eefe16-e27d-48ba-8ddc-6323d5ef7dff/volumes" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.405049 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6" path="/var/lib/kubelet/pods/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6/volumes" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.557533 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl"] Mar 20 06:54:48 crc kubenswrapper[5136]: E0320 06:54:48.557795 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6" containerName="controller-manager" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.557807 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6" containerName="controller-manager" Mar 20 06:54:48 crc kubenswrapper[5136]: E0320 06:54:48.557831 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30eefe16-e27d-48ba-8ddc-6323d5ef7dff" containerName="route-controller-manager" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.557836 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="30eefe16-e27d-48ba-8ddc-6323d5ef7dff" containerName="route-controller-manager" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.557933 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6" containerName="controller-manager" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.557946 5136 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="30eefe16-e27d-48ba-8ddc-6323d5ef7dff" containerName="route-controller-manager" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.558330 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.560229 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5488b6b747-g82fl"] Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.560886 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.561178 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.561326 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.561571 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.561625 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.561720 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.562107 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.567758 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.568233 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.568301 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.568240 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.569405 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.569594 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.571649 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl"] Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.574011 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5488b6b747-g82fl"] Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.574767 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.735748 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f5d77e9-afc8-4189-8d8a-94b71989f364-client-ca\") pod \"route-controller-manager-7cf8dccb89-xtgvl\" (UID: \"8f5d77e9-afc8-4189-8d8a-94b71989f364\") " pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.735779 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f5d77e9-afc8-4189-8d8a-94b71989f364-serving-cert\") pod \"route-controller-manager-7cf8dccb89-xtgvl\" (UID: \"8f5d77e9-afc8-4189-8d8a-94b71989f364\") " pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.735798 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3149bced-bf2c-43ac-aec3-407029760012-client-ca\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.735836 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3149bced-bf2c-43ac-aec3-407029760012-proxy-ca-bundles\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.735860 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-984bz\" (UniqueName: \"kubernetes.io/projected/3149bced-bf2c-43ac-aec3-407029760012-kube-api-access-984bz\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " 
pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.735946 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3149bced-bf2c-43ac-aec3-407029760012-serving-cert\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.736003 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f5d77e9-afc8-4189-8d8a-94b71989f364-config\") pod \"route-controller-manager-7cf8dccb89-xtgvl\" (UID: \"8f5d77e9-afc8-4189-8d8a-94b71989f364\") " pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.736040 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3149bced-bf2c-43ac-aec3-407029760012-config\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.736063 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrbtf\" (UniqueName: \"kubernetes.io/projected/8f5d77e9-afc8-4189-8d8a-94b71989f364-kube-api-access-vrbtf\") pod \"route-controller-manager-7cf8dccb89-xtgvl\" (UID: \"8f5d77e9-afc8-4189-8d8a-94b71989f364\") " pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.837723 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/3149bced-bf2c-43ac-aec3-407029760012-serving-cert\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.837785 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f5d77e9-afc8-4189-8d8a-94b71989f364-config\") pod \"route-controller-manager-7cf8dccb89-xtgvl\" (UID: \"8f5d77e9-afc8-4189-8d8a-94b71989f364\") " pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.837846 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3149bced-bf2c-43ac-aec3-407029760012-config\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.837876 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrbtf\" (UniqueName: \"kubernetes.io/projected/8f5d77e9-afc8-4189-8d8a-94b71989f364-kube-api-access-vrbtf\") pod \"route-controller-manager-7cf8dccb89-xtgvl\" (UID: \"8f5d77e9-afc8-4189-8d8a-94b71989f364\") " pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.837994 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f5d77e9-afc8-4189-8d8a-94b71989f364-client-ca\") pod \"route-controller-manager-7cf8dccb89-xtgvl\" (UID: \"8f5d77e9-afc8-4189-8d8a-94b71989f364\") " pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl" Mar 20 
06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.838019 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f5d77e9-afc8-4189-8d8a-94b71989f364-serving-cert\") pod \"route-controller-manager-7cf8dccb89-xtgvl\" (UID: \"8f5d77e9-afc8-4189-8d8a-94b71989f364\") " pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.838041 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3149bced-bf2c-43ac-aec3-407029760012-client-ca\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.838095 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3149bced-bf2c-43ac-aec3-407029760012-proxy-ca-bundles\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.838127 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-984bz\" (UniqueName: \"kubernetes.io/projected/3149bced-bf2c-43ac-aec3-407029760012-kube-api-access-984bz\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.839237 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3149bced-bf2c-43ac-aec3-407029760012-client-ca\") pod \"controller-manager-5488b6b747-g82fl\" (UID: 
\"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.839283 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f5d77e9-afc8-4189-8d8a-94b71989f364-config\") pod \"route-controller-manager-7cf8dccb89-xtgvl\" (UID: \"8f5d77e9-afc8-4189-8d8a-94b71989f364\") " pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.839587 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3149bced-bf2c-43ac-aec3-407029760012-config\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.839706 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3149bced-bf2c-43ac-aec3-407029760012-proxy-ca-bundles\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.839976 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f5d77e9-afc8-4189-8d8a-94b71989f364-client-ca\") pod \"route-controller-manager-7cf8dccb89-xtgvl\" (UID: \"8f5d77e9-afc8-4189-8d8a-94b71989f364\") " pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.843986 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8f5d77e9-afc8-4189-8d8a-94b71989f364-serving-cert\") pod \"route-controller-manager-7cf8dccb89-xtgvl\" (UID: \"8f5d77e9-afc8-4189-8d8a-94b71989f364\") " pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.844872 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3149bced-bf2c-43ac-aec3-407029760012-serving-cert\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.861110 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrbtf\" (UniqueName: \"kubernetes.io/projected/8f5d77e9-afc8-4189-8d8a-94b71989f364-kube-api-access-vrbtf\") pod \"route-controller-manager-7cf8dccb89-xtgvl\" (UID: \"8f5d77e9-afc8-4189-8d8a-94b71989f364\") " pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.867337 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-984bz\" (UniqueName: \"kubernetes.io/projected/3149bced-bf2c-43ac-aec3-407029760012-kube-api-access-984bz\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.877260 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.884578 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl" Mar 20 06:54:49 crc kubenswrapper[5136]: I0320 06:54:49.094748 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl"] Mar 20 06:54:49 crc kubenswrapper[5136]: W0320 06:54:49.120928 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f5d77e9_afc8_4189_8d8a_94b71989f364.slice/crio-1672daf04981930abd8b3b7f7208a20e079d6d2d506738ed65439d9d36662e5c WatchSource:0}: Error finding container 1672daf04981930abd8b3b7f7208a20e079d6d2d506738ed65439d9d36662e5c: Status 404 returned error can't find the container with id 1672daf04981930abd8b3b7f7208a20e079d6d2d506738ed65439d9d36662e5c Mar 20 06:54:49 crc kubenswrapper[5136]: I0320 06:54:49.391607 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl" event={"ID":"8f5d77e9-afc8-4189-8d8a-94b71989f364","Type":"ContainerStarted","Data":"b0180e2b5726b9f21fbe130daf7a93bc5222589cdf942788449c3a0bf5214f06"} Mar 20 06:54:49 crc kubenswrapper[5136]: I0320 06:54:49.391668 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl" event={"ID":"8f5d77e9-afc8-4189-8d8a-94b71989f364","Type":"ContainerStarted","Data":"1672daf04981930abd8b3b7f7208a20e079d6d2d506738ed65439d9d36662e5c"} Mar 20 06:54:49 crc kubenswrapper[5136]: I0320 06:54:49.391991 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl" Mar 20 06:54:49 crc kubenswrapper[5136]: I0320 06:54:49.420723 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl" 
podStartSLOduration=3.4207032330000002 podStartE2EDuration="3.420703233s" podCreationTimestamp="2026-03-20 06:54:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:54:49.408927527 +0000 UTC m=+321.668238678" watchObservedRunningTime="2026-03-20 06:54:49.420703233 +0000 UTC m=+321.680014384" Mar 20 06:54:49 crc kubenswrapper[5136]: I0320 06:54:49.421125 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5488b6b747-g82fl"] Mar 20 06:54:49 crc kubenswrapper[5136]: W0320 06:54:49.424518 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3149bced_bf2c_43ac_aec3_407029760012.slice/crio-674abe5085bba65b1ed3e19075049b42c0e5802333499f037765de605d3fb23d WatchSource:0}: Error finding container 674abe5085bba65b1ed3e19075049b42c0e5802333499f037765de605d3fb23d: Status 404 returned error can't find the container with id 674abe5085bba65b1ed3e19075049b42c0e5802333499f037765de605d3fb23d Mar 20 06:54:49 crc kubenswrapper[5136]: I0320 06:54:49.651267 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl" Mar 20 06:54:50 crc kubenswrapper[5136]: I0320 06:54:50.401542 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl" Mar 20 06:54:50 crc kubenswrapper[5136]: I0320 06:54:50.401576 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl" event={"ID":"3149bced-bf2c-43ac-aec3-407029760012","Type":"ContainerStarted","Data":"34b417e13f114a9ab9bc9cbea0eff9bc390c97b2d971b73b275a465b73e249a7"} Mar 20 06:54:50 crc kubenswrapper[5136]: I0320 06:54:50.401592 5136 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl" event={"ID":"3149bced-bf2c-43ac-aec3-407029760012","Type":"ContainerStarted","Data":"674abe5085bba65b1ed3e19075049b42c0e5802333499f037765de605d3fb23d"} Mar 20 06:54:50 crc kubenswrapper[5136]: I0320 06:54:50.403326 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl" Mar 20 06:54:50 crc kubenswrapper[5136]: I0320 06:54:50.453127 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl" podStartSLOduration=4.453108886 podStartE2EDuration="4.453108886s" podCreationTimestamp="2026-03-20 06:54:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:54:50.431690193 +0000 UTC m=+322.691001344" watchObservedRunningTime="2026-03-20 06:54:50.453108886 +0000 UTC m=+322.712420037" Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.893641 5136 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.894882 5136 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.894996 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.895175 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080" gracePeriod=15 Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.895241 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477" gracePeriod=15 Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.895311 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc" gracePeriod=15 Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.895343 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f" gracePeriod=15 Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.895428 5136 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 20 06:54:55 crc kubenswrapper[5136]: E0320 06:54:55.895880 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 20 06:54:55 crc 
kubenswrapper[5136]: I0320 06:54:55.895914 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.895337 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa" gracePeriod=15 Mar 20 06:54:55 crc kubenswrapper[5136]: E0320 06:54:55.895931 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896027 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 06:54:55 crc kubenswrapper[5136]: E0320 06:54:55.896049 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896062 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 06:54:55 crc kubenswrapper[5136]: E0320 06:54:55.896084 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896095 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 06:54:55 crc kubenswrapper[5136]: E0320 06:54:55.896108 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-syncer" Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896118 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 20 06:54:55 crc kubenswrapper[5136]: E0320 06:54:55.896133 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896142 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 20 06:54:55 crc kubenswrapper[5136]: E0320 06:54:55.896156 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896164 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 20 06:54:55 crc kubenswrapper[5136]: E0320 06:54:55.896172 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896181 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896304 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896315 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896327 5136 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896338 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896354 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896366 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896380 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 20 06:54:55 crc kubenswrapper[5136]: E0320 06:54:55.896540 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896555 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 06:54:55 crc kubenswrapper[5136]: E0320 06:54:55.896577 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896588 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896734 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.897038 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.058316 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.058620 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.058680 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.058703 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.058773 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.058831 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.058865 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.058966 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.159702 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 
06:54:56.159742 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.159777 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.159845 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.159870 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.159936 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.159934 5136 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.159959 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.160022 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.160043 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.159928 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.159969 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.160065 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.160152 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.160163 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.160294 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.428622 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 20 06:54:56 crc 
kubenswrapper[5136]: I0320 06:54:56.430156 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.430911 5136 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477" exitCode=0 Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.430967 5136 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa" exitCode=0 Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.430979 5136 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f" exitCode=0 Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.430992 5136 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc" exitCode=2 Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.431026 5136 scope.go:117] "RemoveContainer" containerID="74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2" Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.438887 5136 generic.go:334] "Generic (PLEG): container finished" podID="84671130-5991-4032-964a-01c61fefc56a" containerID="41fd895caa40c1a2b0a6165ebaa7bf7c883b138febb47e401574b2b95cc9077c" exitCode=0 Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.438922 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"84671130-5991-4032-964a-01c61fefc56a","Type":"ContainerDied","Data":"41fd895caa40c1a2b0a6165ebaa7bf7c883b138febb47e401574b2b95cc9077c"} Mar 20 06:54:56 crc 
kubenswrapper[5136]: I0320 06:54:56.439614 5136 status_manager.go:851] "Failed to get status for pod" podUID="84671130-5991-4032-964a-01c61fefc56a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:57 crc kubenswrapper[5136]: I0320 06:54:57.448829 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 20 06:54:57 crc kubenswrapper[5136]: I0320 06:54:57.834717 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 20 06:54:57 crc kubenswrapper[5136]: I0320 06:54:57.835235 5136 status_manager.go:851] "Failed to get status for pod" podUID="84671130-5991-4032-964a-01c61fefc56a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:57 crc kubenswrapper[5136]: I0320 06:54:57.994768 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84671130-5991-4032-964a-01c61fefc56a-var-lock\") pod \"84671130-5991-4032-964a-01c61fefc56a\" (UID: \"84671130-5991-4032-964a-01c61fefc56a\") " Mar 20 06:54:57 crc kubenswrapper[5136]: I0320 06:54:57.995186 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84671130-5991-4032-964a-01c61fefc56a-kubelet-dir\") pod \"84671130-5991-4032-964a-01c61fefc56a\" (UID: \"84671130-5991-4032-964a-01c61fefc56a\") " Mar 20 06:54:57 crc kubenswrapper[5136]: I0320 06:54:57.995257 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84671130-5991-4032-964a-01c61fefc56a-kube-api-access\") pod \"84671130-5991-4032-964a-01c61fefc56a\" (UID: \"84671130-5991-4032-964a-01c61fefc56a\") " Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:57.999679 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84671130-5991-4032-964a-01c61fefc56a-var-lock" (OuterVolumeSpecName: "var-lock") pod "84671130-5991-4032-964a-01c61fefc56a" (UID: "84671130-5991-4032-964a-01c61fefc56a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:57.999742 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84671130-5991-4032-964a-01c61fefc56a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "84671130-5991-4032-964a-01c61fefc56a" (UID: "84671130-5991-4032-964a-01c61fefc56a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.007707 5136 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84671130-5991-4032-964a-01c61fefc56a-var-lock\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.014149 5136 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84671130-5991-4032-964a-01c61fefc56a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.039039 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84671130-5991-4032-964a-01c61fefc56a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "84671130-5991-4032-964a-01c61fefc56a" (UID: "84671130-5991-4032-964a-01c61fefc56a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.115257 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84671130-5991-4032-964a-01c61fefc56a-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.254762 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.256104 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.256799 5136 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.257131 5136 status_manager.go:851] "Failed to get status for pod" podUID="84671130-5991-4032-964a-01c61fefc56a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.317355 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.317413 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.317498 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.317539 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.317609 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.317675 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.317898 5136 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.317910 5136 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.317918 5136 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.398862 5136 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.399090 5136 status_manager.go:851] "Failed to get status for pod" podUID="84671130-5991-4032-964a-01c61fefc56a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.403978 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.457954 5136 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.458579 5136 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080" exitCode=0 Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.458674 5136 scope.go:117] "RemoveContainer" containerID="d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.458693 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.459491 5136 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.459929 5136 status_manager.go:851] "Failed to get status for pod" podUID="84671130-5991-4032-964a-01c61fefc56a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.460265 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"84671130-5991-4032-964a-01c61fefc56a","Type":"ContainerDied","Data":"a6ea3be03aea5ddfc716d4a3eee4ac608fed37a3195883a5113f85c728e131f0"} Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.460294 5136 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="a6ea3be03aea5ddfc716d4a3eee4ac608fed37a3195883a5113f85c728e131f0" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.460380 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.462207 5136 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.462524 5136 status_manager.go:851] "Failed to get status for pod" podUID="84671130-5991-4032-964a-01c61fefc56a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.463455 5136 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.463838 5136 status_manager.go:851] "Failed to get status for pod" podUID="84671130-5991-4032-964a-01c61fefc56a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.475141 5136 scope.go:117] "RemoveContainer" containerID="5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa" Mar 20 
06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.488594 5136 scope.go:117] "RemoveContainer" containerID="98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.504117 5136 scope.go:117] "RemoveContainer" containerID="086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.518602 5136 scope.go:117] "RemoveContainer" containerID="430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.533261 5136 scope.go:117] "RemoveContainer" containerID="92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.550527 5136 scope.go:117] "RemoveContainer" containerID="d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477" Mar 20 06:54:58 crc kubenswrapper[5136]: E0320 06:54:58.550942 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\": container with ID starting with d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477 not found: ID does not exist" containerID="d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.550978 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477"} err="failed to get container status \"d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\": rpc error: code = NotFound desc = could not find container \"d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\": container with ID starting with d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477 not found: ID does not exist" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 
06:54:58.551004 5136 scope.go:117] "RemoveContainer" containerID="5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa" Mar 20 06:54:58 crc kubenswrapper[5136]: E0320 06:54:58.551263 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\": container with ID starting with 5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa not found: ID does not exist" containerID="5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.551315 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa"} err="failed to get container status \"5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\": rpc error: code = NotFound desc = could not find container \"5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\": container with ID starting with 5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa not found: ID does not exist" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.551349 5136 scope.go:117] "RemoveContainer" containerID="98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f" Mar 20 06:54:58 crc kubenswrapper[5136]: E0320 06:54:58.551616 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\": container with ID starting with 98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f not found: ID does not exist" containerID="98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.551653 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f"} err="failed to get container status \"98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\": rpc error: code = NotFound desc = could not find container \"98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\": container with ID starting with 98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f not found: ID does not exist" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.551677 5136 scope.go:117] "RemoveContainer" containerID="086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc" Mar 20 06:54:58 crc kubenswrapper[5136]: E0320 06:54:58.551959 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\": container with ID starting with 086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc not found: ID does not exist" containerID="086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.551993 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc"} err="failed to get container status \"086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\": rpc error: code = NotFound desc = could not find container \"086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\": container with ID starting with 086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc not found: ID does not exist" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.552015 5136 scope.go:117] "RemoveContainer" containerID="430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080" Mar 20 06:54:58 crc kubenswrapper[5136]: E0320 06:54:58.552242 5136 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\": container with ID starting with 430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080 not found: ID does not exist" containerID="430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.552278 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080"} err="failed to get container status \"430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\": rpc error: code = NotFound desc = could not find container \"430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\": container with ID starting with 430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080 not found: ID does not exist" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.552305 5136 scope.go:117] "RemoveContainer" containerID="92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0" Mar 20 06:54:58 crc kubenswrapper[5136]: E0320 06:54:58.552554 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\": container with ID starting with 92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0 not found: ID does not exist" containerID="92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.552587 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0"} err="failed to get container status \"92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\": rpc error: code = NotFound desc = could not find container 
\"92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\": container with ID starting with 92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0 not found: ID does not exist" Mar 20 06:54:59 crc kubenswrapper[5136]: E0320 06:54:59.352149 5136 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:59 crc kubenswrapper[5136]: E0320 06:54:59.353061 5136 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:59 crc kubenswrapper[5136]: E0320 06:54:59.353585 5136 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:59 crc kubenswrapper[5136]: E0320 06:54:59.354012 5136 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:59 crc kubenswrapper[5136]: E0320 06:54:59.354427 5136 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:59 crc kubenswrapper[5136]: I0320 06:54:59.354486 5136 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 20 06:54:59 crc kubenswrapper[5136]: E0320 06:54:59.354954 5136 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="200ms" Mar 20 06:54:59 crc kubenswrapper[5136]: E0320 06:54:59.556236 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="400ms" Mar 20 06:54:59 crc kubenswrapper[5136]: E0320 06:54:59.957209 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="800ms" Mar 20 06:55:00 crc kubenswrapper[5136]: E0320 06:55:00.757797 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="1.6s" Mar 20 06:55:00 crc kubenswrapper[5136]: E0320 06:55:00.931509 5136 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 06:55:00 crc kubenswrapper[5136]: I0320 06:55:00.932133 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 06:55:00 crc kubenswrapper[5136]: E0320 06:55:00.981017 5136 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189e7a3be9daf387 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:55:00.975653767 +0000 UTC m=+333.234964948,LastTimestamp:2026-03-20 06:55:00.975653767 +0000 UTC m=+333.234964948,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:55:01 crc kubenswrapper[5136]: I0320 06:55:01.482653 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"bcb68becd66fa9e6491a4cc4d11d6fcc7c5262eddecaba69dcc375b576472528"} Mar 20 06:55:01 crc kubenswrapper[5136]: I0320 06:55:01.482703 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"95d24620254c3f60d5c7bc369c215909ecbb4fc8aadf715024c7ab1ece7f1ef1"} Mar 20 06:55:01 crc 
kubenswrapper[5136]: E0320 06:55:01.483292 5136 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 06:55:01 crc kubenswrapper[5136]: I0320 06:55:01.483501 5136 status_manager.go:851] "Failed to get status for pod" podUID="84671130-5991-4032-964a-01c61fefc56a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:55:02 crc kubenswrapper[5136]: E0320 06:55:02.358798 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="3.2s" Mar 20 06:55:02 crc kubenswrapper[5136]: I0320 06:55:02.467133 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:55:02 crc kubenswrapper[5136]: I0320 06:55:02.467208 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:55:02 crc kubenswrapper[5136]: I0320 06:55:02.467315 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:55:02 crc kubenswrapper[5136]: I0320 06:55:02.467359 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:55:02 crc kubenswrapper[5136]: W0320 06:55:02.468366 5136 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27286": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:55:02 crc kubenswrapper[5136]: W0320 06:55:02.468387 5136 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27466": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:55:02 crc kubenswrapper[5136]: E0320 06:55:02.468436 5136 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27286\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:55:02 crc kubenswrapper[5136]: E0320 06:55:02.468496 5136 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27466\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:55:02 crc kubenswrapper[5136]: W0320 06:55:02.468375 5136 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27286": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:55:02 crc kubenswrapper[5136]: E0320 06:55:02.468602 5136 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27286\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:55:03 crc kubenswrapper[5136]: E0320 06:55:03.468584 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 20 06:55:03 crc kubenswrapper[5136]: E0320 06:55:03.468746 5136 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 20 06:55:03 crc kubenswrapper[5136]: E0320 06:55:03.468670 5136 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: failed to sync configmap cache: timed out waiting for the condition Mar 20 06:55:03 crc kubenswrapper[5136]: E0320 06:55:03.468950 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:57:05.468921637 +0000 UTC m=+457.728232828 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync configmap cache: timed out waiting for the condition Mar 20 06:55:03 crc kubenswrapper[5136]: E0320 06:55:03.468715 5136 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: failed to sync secret cache: timed out waiting for the condition Mar 20 06:55:03 crc kubenswrapper[5136]: E0320 06:55:03.469013 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:57:05.469000099 +0000 UTC m=+457.728311280 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync secret cache: timed out waiting for the condition Mar 20 06:55:03 crc kubenswrapper[5136]: W0320 06:55:03.469556 5136 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27286": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:55:03 crc kubenswrapper[5136]: E0320 06:55:03.469639 5136 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27286\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:55:04 crc kubenswrapper[5136]: W0320 06:55:04.217229 5136 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27286": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:55:04 crc kubenswrapper[5136]: E0320 06:55:04.217538 5136 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27286\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:55:04 crc kubenswrapper[5136]: E0320 06:55:04.469728 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 20 06:55:04 crc kubenswrapper[5136]: E0320 06:55:04.469764 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 20 06:55:04 crc kubenswrapper[5136]: E0320 06:55:04.469792 5136 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: failed to sync configmap cache: timed out waiting for the condition Mar 20 06:55:04 crc kubenswrapper[5136]: E0320 06:55:04.469839 5136 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: failed to sync configmap cache: timed out waiting for the condition Mar 20 06:55:04 crc kubenswrapper[5136]: E0320 06:55:04.469923 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 06:57:06.469890137 +0000 UTC m=+458.729201318 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : failed to sync configmap cache: timed out waiting for the condition Mar 20 06:55:04 crc kubenswrapper[5136]: E0320 06:55:04.469952 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 06:57:06.469938639 +0000 UTC m=+458.729249830 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : failed to sync configmap cache: timed out waiting for the condition Mar 20 06:55:05 crc kubenswrapper[5136]: W0320 06:55:05.319214 5136 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27286": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:55:05 crc kubenswrapper[5136]: E0320 06:55:05.319294 5136 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27286\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:55:05 crc kubenswrapper[5136]: W0320 
06:55:05.325711 5136 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27466": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:55:05 crc kubenswrapper[5136]: E0320 06:55:05.326013 5136 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27466\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:55:05 crc kubenswrapper[5136]: W0320 06:55:05.518277 5136 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27286": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:55:05 crc kubenswrapper[5136]: E0320 06:55:05.518343 5136 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27286\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:55:05 crc kubenswrapper[5136]: E0320 06:55:05.560760 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="6.4s" Mar 20 06:55:08 crc kubenswrapper[5136]: I0320 06:55:08.396461 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:55:08 crc kubenswrapper[5136]: I0320 06:55:08.401287 5136 status_manager.go:851] "Failed to get status for pod" podUID="84671130-5991-4032-964a-01c61fefc56a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:55:08 crc kubenswrapper[5136]: I0320 06:55:08.402797 5136 status_manager.go:851] "Failed to get status for pod" podUID="84671130-5991-4032-964a-01c61fefc56a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:55:08 crc kubenswrapper[5136]: I0320 06:55:08.424236 5136 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08f93948-dc0a-4e68-80fa-26429c0c0654" Mar 20 06:55:08 crc kubenswrapper[5136]: I0320 06:55:08.424280 5136 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08f93948-dc0a-4e68-80fa-26429c0c0654" Mar 20 06:55:08 crc kubenswrapper[5136]: E0320 06:55:08.424891 5136 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:55:08 crc kubenswrapper[5136]: I0320 06:55:08.425646 5136 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:55:08 crc kubenswrapper[5136]: W0320 06:55:08.455050 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-ddd4937024c47ced41cf5247b2d45e9606abee030a043c30ed646d61dd30fc46 WatchSource:0}: Error finding container ddd4937024c47ced41cf5247b2d45e9606abee030a043c30ed646d61dd30fc46: Status 404 returned error can't find the container with id ddd4937024c47ced41cf5247b2d45e9606abee030a043c30ed646d61dd30fc46 Mar 20 06:55:08 crc kubenswrapper[5136]: I0320 06:55:08.526939 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ddd4937024c47ced41cf5247b2d45e9606abee030a043c30ed646d61dd30fc46"} Mar 20 06:55:08 crc kubenswrapper[5136]: W0320 06:55:08.690581 5136 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27466": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:55:08 crc kubenswrapper[5136]: E0320 06:55:08.690658 5136 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27466\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:55:09 crc kubenswrapper[5136]: E0320 06:55:09.155449 5136 event.go:368] "Unable to write event (may retry after sleeping)" 
err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189e7a3be9daf387 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:55:00.975653767 +0000 UTC m=+333.234964948,LastTimestamp:2026-03-20 06:55:00.975653767 +0000 UTC m=+333.234964948,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:55:09 crc kubenswrapper[5136]: W0320 06:55:09.324222 5136 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27286": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:55:09 crc kubenswrapper[5136]: E0320 06:55:09.324582 5136 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27286\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:55:09 
crc kubenswrapper[5136]: I0320 06:55:09.536047 5136 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="88b0cbe92c9f0fe76ecab0aa146da4e3461a1d5f219123646bdf76ea34d956bf" exitCode=0 Mar 20 06:55:09 crc kubenswrapper[5136]: I0320 06:55:09.536110 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"88b0cbe92c9f0fe76ecab0aa146da4e3461a1d5f219123646bdf76ea34d956bf"} Mar 20 06:55:09 crc kubenswrapper[5136]: I0320 06:55:09.536501 5136 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08f93948-dc0a-4e68-80fa-26429c0c0654" Mar 20 06:55:09 crc kubenswrapper[5136]: I0320 06:55:09.536537 5136 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08f93948-dc0a-4e68-80fa-26429c0c0654" Mar 20 06:55:09 crc kubenswrapper[5136]: E0320 06:55:09.537170 5136 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:55:09 crc kubenswrapper[5136]: I0320 06:55:09.537370 5136 status_manager.go:851] "Failed to get status for pod" podUID="84671130-5991-4032-964a-01c61fefc56a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:55:10 crc kubenswrapper[5136]: I0320 06:55:10.544773 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 20 06:55:10 crc kubenswrapper[5136]: I0320 
06:55:10.550532 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 20 06:55:10 crc kubenswrapper[5136]: I0320 06:55:10.550590 5136 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b" exitCode=1 Mar 20 06:55:10 crc kubenswrapper[5136]: I0320 06:55:10.550661 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b"} Mar 20 06:55:10 crc kubenswrapper[5136]: I0320 06:55:10.551282 5136 scope.go:117] "RemoveContainer" containerID="e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b" Mar 20 06:55:10 crc kubenswrapper[5136]: I0320 06:55:10.556597 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"61226ca0c085a343c95b09c9b819acaaa4ba1895031b0a680e5bfd2098f9f7e2"} Mar 20 06:55:10 crc kubenswrapper[5136]: I0320 06:55:10.556644 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"21a1179f8567f2b618fbadcb6ed8b409d1bc09df00d1a5a193de4fe073f9ff6c"} Mar 20 06:55:10 crc kubenswrapper[5136]: I0320 06:55:10.556659 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cf6591953a1289a1ff78c791a6c1925653ce1499d8d4867190d4b19a8483a46c"} Mar 20 06:55:11 crc kubenswrapper[5136]: I0320 06:55:11.563499 5136 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 20 06:55:11 crc kubenswrapper[5136]: I0320 06:55:11.564939 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 20 06:55:11 crc kubenswrapper[5136]: I0320 06:55:11.565015 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"26f0afdf3d65ac7e25470e077a90b43302807523c1a95f7a25c6bdc1282e76fb"} Mar 20 06:55:11 crc kubenswrapper[5136]: I0320 06:55:11.567397 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f861ccd3228a0112c42d58802873e29995142f050dab275f773ed0d441231a89"} Mar 20 06:55:11 crc kubenswrapper[5136]: I0320 06:55:11.567434 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"33de0156831f4e0c973f4efb890c2f77371bc68b881e4ab95945898fa7c40b1f"} Mar 20 06:55:11 crc kubenswrapper[5136]: I0320 06:55:11.567541 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:55:11 crc kubenswrapper[5136]: I0320 06:55:11.567580 5136 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08f93948-dc0a-4e68-80fa-26429c0c0654" Mar 20 06:55:11 crc kubenswrapper[5136]: I0320 06:55:11.567595 5136 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08f93948-dc0a-4e68-80fa-26429c0c0654" Mar 20 
06:55:12 crc kubenswrapper[5136]: I0320 06:55:12.284250 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:55:12 crc kubenswrapper[5136]: I0320 06:55:12.288429 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:55:12 crc kubenswrapper[5136]: I0320 06:55:12.574190 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:55:13 crc kubenswrapper[5136]: I0320 06:55:13.425799 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:55:13 crc kubenswrapper[5136]: I0320 06:55:13.426232 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:55:13 crc kubenswrapper[5136]: I0320 06:55:13.431274 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:55:16 crc kubenswrapper[5136]: I0320 06:55:16.267099 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 20 06:55:16 crc kubenswrapper[5136]: I0320 06:55:16.267234 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 20 06:55:16 crc kubenswrapper[5136]: I0320 06:55:16.577275 5136 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:55:16 crc kubenswrapper[5136]: I0320 06:55:16.588938 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 20 06:55:16 crc kubenswrapper[5136]: I0320 06:55:16.928162 5136 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 20 06:55:17 crc kubenswrapper[5136]: I0320 06:55:17.601349 5136 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08f93948-dc0a-4e68-80fa-26429c0c0654" Mar 20 06:55:17 crc kubenswrapper[5136]: I0320 06:55:17.601904 5136 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08f93948-dc0a-4e68-80fa-26429c0c0654" Mar 20 06:55:17 crc kubenswrapper[5136]: I0320 06:55:17.604946 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:55:18 crc kubenswrapper[5136]: I0320 06:55:18.412928 5136 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f8a2d5f4-1753-4860-bb3b-523d24d7c10a" Mar 20 06:55:18 crc kubenswrapper[5136]: I0320 06:55:18.605448 5136 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08f93948-dc0a-4e68-80fa-26429c0c0654" Mar 20 06:55:18 crc kubenswrapper[5136]: I0320 06:55:18.605480 5136 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08f93948-dc0a-4e68-80fa-26429c0c0654" Mar 20 06:55:18 crc kubenswrapper[5136]: I0320 06:55:18.608584 5136 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f8a2d5f4-1753-4860-bb3b-523d24d7c10a" Mar 20 06:55:23 crc kubenswrapper[5136]: E0320 06:55:23.418247 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-cqllr], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:55:23 crc kubenswrapper[5136]: E0320 06:55:23.431193 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[networking-console-plugin-cert nginx-conf], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:55:23 crc kubenswrapper[5136]: E0320 06:55:23.439288 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-s2dwl], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:55:25 crc kubenswrapper[5136]: I0320 06:55:25.951045 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 20 06:55:26 crc kubenswrapper[5136]: I0320 06:55:26.339343 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 20 06:55:26 crc kubenswrapper[5136]: I0320 06:55:26.412528 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 20 06:55:26 crc kubenswrapper[5136]: I0320 06:55:26.615773 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 20 06:55:26 crc kubenswrapper[5136]: I0320 06:55:26.654678 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 20 06:55:26 crc kubenswrapper[5136]: I0320 06:55:26.757480 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 20 06:55:26 
crc kubenswrapper[5136]: I0320 06:55:26.879972 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 20 06:55:27 crc kubenswrapper[5136]: I0320 06:55:27.446333 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 20 06:55:27 crc kubenswrapper[5136]: I0320 06:55:27.492191 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 20 06:55:27 crc kubenswrapper[5136]: I0320 06:55:27.886232 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 20 06:55:27 crc kubenswrapper[5136]: I0320 06:55:27.933489 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 20 06:55:28 crc kubenswrapper[5136]: I0320 06:55:28.059897 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 20 06:55:28 crc kubenswrapper[5136]: I0320 06:55:28.196852 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 20 06:55:28 crc kubenswrapper[5136]: I0320 06:55:28.480402 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 20 06:55:28 crc kubenswrapper[5136]: I0320 06:55:28.941498 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:55:28 crc kubenswrapper[5136]: I0320 06:55:28.966146 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 20 06:55:28 crc kubenswrapper[5136]: I0320 06:55:28.999332 5136 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.019377 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.021038 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.031661 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.047678 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.299506 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.414561 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.472743 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.493568 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.576833 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.609654 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 20 06:55:29 crc 
kubenswrapper[5136]: I0320 06:55:29.653643 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.695602 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.861737 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.897109 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 20 06:55:30 crc kubenswrapper[5136]: I0320 06:55:30.111755 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 20 06:55:30 crc kubenswrapper[5136]: I0320 06:55:30.127990 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 20 06:55:30 crc kubenswrapper[5136]: I0320 06:55:30.181154 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 20 06:55:30 crc kubenswrapper[5136]: I0320 06:55:30.342595 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 20 06:55:30 crc kubenswrapper[5136]: I0320 06:55:30.582897 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 20 06:55:30 crc kubenswrapper[5136]: I0320 06:55:30.798785 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 20 06:55:30 crc kubenswrapper[5136]: I0320 06:55:30.827124 5136 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 20 06:55:30 crc kubenswrapper[5136]: I0320 06:55:30.968358 5136 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 20 06:55:30 crc kubenswrapper[5136]: I0320 06:55:30.984473 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.032293 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.040247 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.046002 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.073541 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.083584 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.146476 5136 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.184458 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.224416 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.435468 5136 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.529927 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.549905 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.582893 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.654285 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.664022 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.691946 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.696052 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.696061 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.697217 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.731127 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.782056 5136 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.889633 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.929767 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.930054 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.935619 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 20 06:55:32 crc kubenswrapper[5136]: I0320 06:55:32.303649 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 20 06:55:32 crc kubenswrapper[5136]: I0320 06:55:32.349356 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 20 06:55:32 crc kubenswrapper[5136]: I0320 06:55:32.353516 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 20 06:55:32 crc kubenswrapper[5136]: I0320 06:55:32.387716 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 20 06:55:32 crc kubenswrapper[5136]: I0320 06:55:32.537467 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 20 06:55:32 crc kubenswrapper[5136]: I0320 06:55:32.537986 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 20 06:55:32 crc kubenswrapper[5136]: I0320 
06:55:32.590947 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 20 06:55:32 crc kubenswrapper[5136]: I0320 06:55:32.665633 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 20 06:55:32 crc kubenswrapper[5136]: I0320 06:55:32.703702 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 20 06:55:32 crc kubenswrapper[5136]: I0320 06:55:32.713724 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.003537 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.059652 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.105341 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.125175 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.127219 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.377310 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.560641 5136 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.569627 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.584791 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.616174 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.708585 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.710049 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.810026 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.919198 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.930005 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.944843 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.981279 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 20 06:55:34 crc 
kubenswrapper[5136]: I0320 06:55:34.004179 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.108957 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.119215 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.133098 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.145411 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.147300 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.184636 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.252434 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.316473 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.321113 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.378692 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 20 06:55:34 
crc kubenswrapper[5136]: I0320 06:55:34.391540 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.395807 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.515566 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.518378 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.595487 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.601119 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.693751 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.715897 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.908336 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.974362 5136 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 20 06:55:35 crc kubenswrapper[5136]: I0320 06:55:35.020511 5136 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 20 06:55:35 crc kubenswrapper[5136]: I0320 06:55:35.031829 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 20 06:55:35 crc kubenswrapper[5136]: I0320 06:55:35.093943 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 20 06:55:35 crc kubenswrapper[5136]: I0320 06:55:35.171911 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 20 06:55:35 crc kubenswrapper[5136]: I0320 06:55:35.546894 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 20 06:55:35 crc kubenswrapper[5136]: I0320 06:55:35.547994 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 20 06:55:35 crc kubenswrapper[5136]: I0320 06:55:35.569599 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 20 06:55:35 crc kubenswrapper[5136]: I0320 06:55:35.678337 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 20 06:55:35 crc kubenswrapper[5136]: I0320 06:55:35.682413 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 20 06:55:35 crc kubenswrapper[5136]: I0320 06:55:35.770448 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 20 06:55:35 crc kubenswrapper[5136]: I0320 06:55:35.945709 5136 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.021723 5136 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.115967 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.143476 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.161539 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.213436 5136 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.214051 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.218188 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.218235 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.222552 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.243313 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.255610 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=20.255590746 
podStartE2EDuration="20.255590746s" podCreationTimestamp="2026-03-20 06:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:55:36.234299475 +0000 UTC m=+368.493610646" watchObservedRunningTime="2026-03-20 06:55:36.255590746 +0000 UTC m=+368.514901887" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.274250 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.280507 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.351081 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.364218 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.375156 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.394489 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.394552 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.395625 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.454043 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.461292 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.560098 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.642463 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.664838 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.704042 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.704363 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.756196 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.788050 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.823194 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 20 06:55:36 
crc kubenswrapper[5136]: I0320 06:55:36.871726 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.899228 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.942538 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.135837 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.160627 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.307158 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.313800 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.369801 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.405315 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.426667 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.429927 5136 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-operator-config" Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.447496 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.448998 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.482834 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.562448 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.595412 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.676653 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.750790 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.750804 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.752518 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.871116 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.092646 5136 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.108803 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.129545 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.226745 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.248780 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.256470 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.272723 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.304227 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.344917 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.358727 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.373717 5136 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.396380 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.406296 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.464831 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.503219 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.608595 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.683421 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.724990 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.751696 5136 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.807879 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.850351 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 
06:55:38.946399 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.978032 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.984027 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.985942 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.027097 5136 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.027313 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://bcb68becd66fa9e6491a4cc4d11d6fcc7c5262eddecaba69dcc375b576472528" gracePeriod=5 Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.045967 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.082302 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.088127 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.110201 5136 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.253235 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.292320 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.338432 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.448357 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.470106 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.518376 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.519722 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.553348 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.560499 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.631084 5136 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.634029 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.650311 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.918194 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.960506 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.042720 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.108894 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.166239 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.356021 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.433893 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.465754 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.482146 
5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.493336 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.581745 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.594297 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.618426 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.692227 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.710425 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.733701 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.747478 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.795634 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.844602 5136 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.920200 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 20 06:55:41 crc kubenswrapper[5136]: I0320 06:55:41.031499 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 20 06:55:41 crc kubenswrapper[5136]: I0320 06:55:41.123994 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 20 06:55:41 crc kubenswrapper[5136]: I0320 06:55:41.159063 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 20 06:55:41 crc kubenswrapper[5136]: I0320 06:55:41.159678 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 20 06:55:41 crc kubenswrapper[5136]: I0320 06:55:41.177315 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 20 06:55:41 crc kubenswrapper[5136]: I0320 06:55:41.346073 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 20 06:55:41 crc kubenswrapper[5136]: I0320 06:55:41.423772 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 20 06:55:41 crc kubenswrapper[5136]: I0320 06:55:41.468910 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 20 06:55:41 crc kubenswrapper[5136]: I0320 06:55:41.596935 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 20 06:55:41 crc kubenswrapper[5136]: I0320 06:55:41.730265 5136 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 20 06:55:41 crc kubenswrapper[5136]: I0320 06:55:41.931132 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 20 06:55:42 crc kubenswrapper[5136]: I0320 06:55:42.163449 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 20 06:55:42 crc kubenswrapper[5136]: I0320 06:55:42.463540 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 20 06:55:42 crc kubenswrapper[5136]: I0320 06:55:42.502717 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 20 06:55:42 crc kubenswrapper[5136]: I0320 06:55:42.516863 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 20 06:55:42 crc kubenswrapper[5136]: I0320 06:55:42.706793 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 20 06:55:43 crc kubenswrapper[5136]: I0320 06:55:43.443454 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 20 06:55:43 crc kubenswrapper[5136]: I0320 06:55:43.521699 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 20 06:55:43 crc kubenswrapper[5136]: I0320 06:55:43.560600 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 20 06:55:43 crc kubenswrapper[5136]: I0320 06:55:43.693375 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.547540 5136 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.602657 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.602728 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.681535 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.746852 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.746895 5136 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="bcb68becd66fa9e6491a4cc4d11d6fcc7c5262eddecaba69dcc375b576472528" exitCode=137 Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.746934 5136 scope.go:117] "RemoveContainer" containerID="bcb68becd66fa9e6491a4cc4d11d6fcc7c5262eddecaba69dcc375b576472528" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.747009 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.749576 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.749606 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.749632 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.749671 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.749728 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.749740 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: 
"var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.749798 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.749802 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.749935 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.750073 5136 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.750087 5136 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.750097 5136 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.750105 5136 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.761255 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.764684 5136 scope.go:117] "RemoveContainer" containerID="bcb68becd66fa9e6491a4cc4d11d6fcc7c5262eddecaba69dcc375b576472528" Mar 20 06:55:44 crc kubenswrapper[5136]: E0320 06:55:44.765170 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcb68becd66fa9e6491a4cc4d11d6fcc7c5262eddecaba69dcc375b576472528\": container with ID starting with bcb68becd66fa9e6491a4cc4d11d6fcc7c5262eddecaba69dcc375b576472528 not found: ID does not exist" containerID="bcb68becd66fa9e6491a4cc4d11d6fcc7c5262eddecaba69dcc375b576472528" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.765233 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcb68becd66fa9e6491a4cc4d11d6fcc7c5262eddecaba69dcc375b576472528"} err="failed to get container status \"bcb68becd66fa9e6491a4cc4d11d6fcc7c5262eddecaba69dcc375b576472528\": rpc error: code = NotFound desc = could not find container \"bcb68becd66fa9e6491a4cc4d11d6fcc7c5262eddecaba69dcc375b576472528\": container with ID starting with bcb68becd66fa9e6491a4cc4d11d6fcc7c5262eddecaba69dcc375b576472528 not found: ID does not exist" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.852037 5136 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 20 06:55:46 crc kubenswrapper[5136]: I0320 06:55:46.407568 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Mar 20 06:55:58 crc kubenswrapper[5136]: I0320 06:55:58.839093 5136 generic.go:334] "Generic (PLEG): container finished" podID="289bd2af-981a-4da9-af4b-77ef6fd7e526" 
containerID="83709a4627913d4131e8b55a35967c82cd8a60e366ebf5644c739dab2726f14a" exitCode=0 Mar 20 06:55:58 crc kubenswrapper[5136]: I0320 06:55:58.839208 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" event={"ID":"289bd2af-981a-4da9-af4b-77ef6fd7e526","Type":"ContainerDied","Data":"83709a4627913d4131e8b55a35967c82cd8a60e366ebf5644c739dab2726f14a"} Mar 20 06:55:58 crc kubenswrapper[5136]: I0320 06:55:58.839962 5136 scope.go:117] "RemoveContainer" containerID="83709a4627913d4131e8b55a35967c82cd8a60e366ebf5644c739dab2726f14a" Mar 20 06:55:59 crc kubenswrapper[5136]: I0320 06:55:59.847780 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" event={"ID":"289bd2af-981a-4da9-af4b-77ef6fd7e526","Type":"ContainerStarted","Data":"a36097ca86c93e84e1fc63afa5daf31c84532f6db102fdbe3ea06918e836f5d7"} Mar 20 06:55:59 crc kubenswrapper[5136]: I0320 06:55:59.848559 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:55:59 crc kubenswrapper[5136]: I0320 06:55:59.850643 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.154061 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566496-kkbk6"] Mar 20 06:56:00 crc kubenswrapper[5136]: E0320 06:56:00.154288 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84671130-5991-4032-964a-01c61fefc56a" containerName="installer" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.154303 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="84671130-5991-4032-964a-01c61fefc56a" containerName="installer" Mar 20 06:56:00 crc kubenswrapper[5136]: E0320 06:56:00.154329 5136 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.154336 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.154467 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.154480 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="84671130-5991-4032-964a-01c61fefc56a" containerName="installer" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.154916 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566496-kkbk6" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.156621 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.156979 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.161220 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.162659 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566496-kkbk6"] Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.294341 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz79z\" (UniqueName: \"kubernetes.io/projected/7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb-kube-api-access-sz79z\") pod \"auto-csr-approver-29566496-kkbk6\" (UID: \"7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb\") " 
pod="openshift-infra/auto-csr-approver-29566496-kkbk6" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.399717 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz79z\" (UniqueName: \"kubernetes.io/projected/7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb-kube-api-access-sz79z\") pod \"auto-csr-approver-29566496-kkbk6\" (UID: \"7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb\") " pod="openshift-infra/auto-csr-approver-29566496-kkbk6" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.424665 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz79z\" (UniqueName: \"kubernetes.io/projected/7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb-kube-api-access-sz79z\") pod \"auto-csr-approver-29566496-kkbk6\" (UID: \"7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb\") " pod="openshift-infra/auto-csr-approver-29566496-kkbk6" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.474378 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566496-kkbk6" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.868416 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566496-kkbk6"] Mar 20 06:56:00 crc kubenswrapper[5136]: W0320 06:56:00.884082 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7123c3cf_7f09_4f1f_a99f_b5a3a27c54eb.slice/crio-0ac0187a5ca2535e86930385e33b2252f6bb635a7b36aa2186b83faab3d3adbd WatchSource:0}: Error finding container 0ac0187a5ca2535e86930385e33b2252f6bb635a7b36aa2186b83faab3d3adbd: Status 404 returned error can't find the container with id 0ac0187a5ca2535e86930385e33b2252f6bb635a7b36aa2186b83faab3d3adbd Mar 20 06:56:01 crc kubenswrapper[5136]: I0320 06:56:01.862254 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566496-kkbk6" 
event={"ID":"7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb","Type":"ContainerStarted","Data":"0ac0187a5ca2535e86930385e33b2252f6bb635a7b36aa2186b83faab3d3adbd"} Mar 20 06:56:02 crc kubenswrapper[5136]: I0320 06:56:02.869619 5136 generic.go:334] "Generic (PLEG): container finished" podID="7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb" containerID="65e785eb1dbd67ac0fede3f7a5dc27c137ef39fb9832a3755e23a954eb908065" exitCode=0 Mar 20 06:56:02 crc kubenswrapper[5136]: I0320 06:56:02.869667 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566496-kkbk6" event={"ID":"7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb","Type":"ContainerDied","Data":"65e785eb1dbd67ac0fede3f7a5dc27c137ef39fb9832a3755e23a954eb908065"} Mar 20 06:56:04 crc kubenswrapper[5136]: I0320 06:56:04.187581 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566496-kkbk6" Mar 20 06:56:04 crc kubenswrapper[5136]: I0320 06:56:04.342188 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz79z\" (UniqueName: \"kubernetes.io/projected/7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb-kube-api-access-sz79z\") pod \"7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb\" (UID: \"7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb\") " Mar 20 06:56:04 crc kubenswrapper[5136]: I0320 06:56:04.348007 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb-kube-api-access-sz79z" (OuterVolumeSpecName: "kube-api-access-sz79z") pod "7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb" (UID: "7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb"). InnerVolumeSpecName "kube-api-access-sz79z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:56:04 crc kubenswrapper[5136]: I0320 06:56:04.443621 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz79z\" (UniqueName: \"kubernetes.io/projected/7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb-kube-api-access-sz79z\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:04 crc kubenswrapper[5136]: I0320 06:56:04.888443 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566496-kkbk6" event={"ID":"7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb","Type":"ContainerDied","Data":"0ac0187a5ca2535e86930385e33b2252f6bb635a7b36aa2186b83faab3d3adbd"} Mar 20 06:56:04 crc kubenswrapper[5136]: I0320 06:56:04.889009 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ac0187a5ca2535e86930385e33b2252f6bb635a7b36aa2186b83faab3d3adbd" Mar 20 06:56:04 crc kubenswrapper[5136]: I0320 06:56:04.889106 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566496-kkbk6" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.208432 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gnspw"] Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.209387 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gnspw" podUID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" containerName="registry-server" containerID="cri-o://5664be994ef1d148ab5592ebabed18dba44cfe3e1131eec1b13e05caa0a92780" gracePeriod=30 Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.215040 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hjck6"] Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.215356 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hjck6" 
podUID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" containerName="registry-server" containerID="cri-o://061ea93c85bc00917c3ac8ea8d30b338d02a490b0ecea2ffda90681157877bc1" gracePeriod=30 Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.227672 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mbfm4"] Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.227917 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" podUID="289bd2af-981a-4da9-af4b-77ef6fd7e526" containerName="marketplace-operator" containerID="cri-o://a36097ca86c93e84e1fc63afa5daf31c84532f6db102fdbe3ea06918e836f5d7" gracePeriod=30 Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.237374 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvjw4"] Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.237621 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zvjw4" podUID="301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" containerName="registry-server" containerID="cri-o://52f516ab12d99512d73f419dcdb0b2dce29d3e61b9dafa84863f5c5a03f74ee5" gracePeriod=30 Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.254130 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sl2lb"] Mar 20 06:56:28 crc kubenswrapper[5136]: E0320 06:56:28.254448 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb" containerName="oc" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.254464 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb" containerName="oc" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.254571 5136 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb" containerName="oc" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.255096 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.263081 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w76x4"] Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.263651 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w76x4" podUID="ff9e0ea6-add4-4087-83a6-f8d85588d6f2" containerName="registry-server" containerID="cri-o://96a902822e546e09f7a3de1657b0dcec6395a8a029066a0684766a1f0066cca8" gracePeriod=30 Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.276310 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sl2lb"] Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.361324 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37de93ad-331e-41ee-8f74-523100e01b09-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sl2lb\" (UID: \"37de93ad-331e-41ee-8f74-523100e01b09\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.361512 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd7lq\" (UniqueName: \"kubernetes.io/projected/37de93ad-331e-41ee-8f74-523100e01b09-kube-api-access-kd7lq\") pod \"marketplace-operator-79b997595-sl2lb\" (UID: \"37de93ad-331e-41ee-8f74-523100e01b09\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.361629 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/37de93ad-331e-41ee-8f74-523100e01b09-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sl2lb\" (UID: \"37de93ad-331e-41ee-8f74-523100e01b09\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.463252 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd7lq\" (UniqueName: \"kubernetes.io/projected/37de93ad-331e-41ee-8f74-523100e01b09-kube-api-access-kd7lq\") pod \"marketplace-operator-79b997595-sl2lb\" (UID: \"37de93ad-331e-41ee-8f74-523100e01b09\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.463337 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/37de93ad-331e-41ee-8f74-523100e01b09-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sl2lb\" (UID: \"37de93ad-331e-41ee-8f74-523100e01b09\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.463400 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37de93ad-331e-41ee-8f74-523100e01b09-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sl2lb\" (UID: \"37de93ad-331e-41ee-8f74-523100e01b09\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.465595 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37de93ad-331e-41ee-8f74-523100e01b09-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sl2lb\" (UID: 
\"37de93ad-331e-41ee-8f74-523100e01b09\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.469290 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/37de93ad-331e-41ee-8f74-523100e01b09-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sl2lb\" (UID: \"37de93ad-331e-41ee-8f74-523100e01b09\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.480354 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd7lq\" (UniqueName: \"kubernetes.io/projected/37de93ad-331e-41ee-8f74-523100e01b09-kube-api-access-kd7lq\") pod \"marketplace-operator-79b997595-sl2lb\" (UID: \"37de93ad-331e-41ee-8f74-523100e01b09\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.638458 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.661861 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.766538 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-utilities\") pod \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\" (UID: \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.766665 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqscr\" (UniqueName: \"kubernetes.io/projected/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-kube-api-access-mqscr\") pod \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\" (UID: \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.766723 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-catalog-content\") pod \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\" (UID: \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.767403 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-utilities" (OuterVolumeSpecName: "utilities") pod "899bb83b-4a95-49e5-8e8f-50c309b5d5e1" (UID: "899bb83b-4a95-49e5-8e8f-50c309b5d5e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.775947 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-kube-api-access-mqscr" (OuterVolumeSpecName: "kube-api-access-mqscr") pod "899bb83b-4a95-49e5-8e8f-50c309b5d5e1" (UID: "899bb83b-4a95-49e5-8e8f-50c309b5d5e1"). InnerVolumeSpecName "kube-api-access-mqscr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.790390 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.825883 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.837889 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.845122 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "899bb83b-4a95-49e5-8e8f-50c309b5d5e1" (UID: "899bb83b-4a95-49e5-8e8f-50c309b5d5e1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.874339 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-catalog-content\") pod \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\" (UID: \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.874407 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4p96\" (UniqueName: \"kubernetes.io/projected/289bd2af-981a-4da9-af4b-77ef6fd7e526-kube-api-access-s4p96\") pod \"289bd2af-981a-4da9-af4b-77ef6fd7e526\" (UID: \"289bd2af-981a-4da9-af4b-77ef6fd7e526\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.874433 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/289bd2af-981a-4da9-af4b-77ef6fd7e526-marketplace-operator-metrics\") pod \"289bd2af-981a-4da9-af4b-77ef6fd7e526\" (UID: \"289bd2af-981a-4da9-af4b-77ef6fd7e526\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.874531 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-catalog-content\") pod \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\" (UID: \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.874572 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-utilities\") pod \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\" (UID: \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.874648 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/289bd2af-981a-4da9-af4b-77ef6fd7e526-marketplace-trusted-ca\") pod \"289bd2af-981a-4da9-af4b-77ef6fd7e526\" (UID: \"289bd2af-981a-4da9-af4b-77ef6fd7e526\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.874688 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs57w\" (UniqueName: \"kubernetes.io/projected/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-kube-api-access-bs57w\") pod \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\" (UID: \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.874739 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-utilities\") pod \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\" (UID: \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.874778 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9jhq\" (UniqueName: \"kubernetes.io/projected/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-kube-api-access-t9jhq\") pod \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\" (UID: \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.875314 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.875343 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.875359 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqscr\" (UniqueName: 
\"kubernetes.io/projected/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-kube-api-access-mqscr\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.876349 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-utilities" (OuterVolumeSpecName: "utilities") pod "ff9e0ea6-add4-4087-83a6-f8d85588d6f2" (UID: "ff9e0ea6-add4-4087-83a6-f8d85588d6f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.885719 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/289bd2af-981a-4da9-af4b-77ef6fd7e526-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "289bd2af-981a-4da9-af4b-77ef6fd7e526" (UID: "289bd2af-981a-4da9-af4b-77ef6fd7e526"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.886042 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-utilities" (OuterVolumeSpecName: "utilities") pod "8a3a1d9c-1870-4a43-95fb-6d07e5619acb" (UID: "8a3a1d9c-1870-4a43-95fb-6d07e5619acb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.892223 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-kube-api-access-bs57w" (OuterVolumeSpecName: "kube-api-access-bs57w") pod "8a3a1d9c-1870-4a43-95fb-6d07e5619acb" (UID: "8a3a1d9c-1870-4a43-95fb-6d07e5619acb"). InnerVolumeSpecName "kube-api-access-bs57w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.900982 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-kube-api-access-t9jhq" (OuterVolumeSpecName: "kube-api-access-t9jhq") pod "ff9e0ea6-add4-4087-83a6-f8d85588d6f2" (UID: "ff9e0ea6-add4-4087-83a6-f8d85588d6f2"). InnerVolumeSpecName "kube-api-access-t9jhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.901356 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/289bd2af-981a-4da9-af4b-77ef6fd7e526-kube-api-access-s4p96" (OuterVolumeSpecName: "kube-api-access-s4p96") pod "289bd2af-981a-4da9-af4b-77ef6fd7e526" (UID: "289bd2af-981a-4da9-af4b-77ef6fd7e526"). InnerVolumeSpecName "kube-api-access-s4p96". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.923018 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/289bd2af-981a-4da9-af4b-77ef6fd7e526-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "289bd2af-981a-4da9-af4b-77ef6fd7e526" (UID: "289bd2af-981a-4da9-af4b-77ef6fd7e526"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.924049 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.934803 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a3a1d9c-1870-4a43-95fb-6d07e5619acb" (UID: "8a3a1d9c-1870-4a43-95fb-6d07e5619acb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.976185 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-catalog-content\") pod \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\" (UID: \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.976231 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-utilities\") pod \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\" (UID: \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.976280 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnvxm\" (UniqueName: \"kubernetes.io/projected/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-kube-api-access-jnvxm\") pod \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\" (UID: \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.976456 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.976472 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9jhq\" 
(UniqueName: \"kubernetes.io/projected/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-kube-api-access-t9jhq\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.976483 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.976491 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4p96\" (UniqueName: \"kubernetes.io/projected/289bd2af-981a-4da9-af4b-77ef6fd7e526-kube-api-access-s4p96\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.976500 5136 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/289bd2af-981a-4da9-af4b-77ef6fd7e526-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.976507 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.976515 5136 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/289bd2af-981a-4da9-af4b-77ef6fd7e526-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.976524 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs57w\" (UniqueName: \"kubernetes.io/projected/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-kube-api-access-bs57w\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.978970 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sl2lb"] Mar 20 
06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.979710 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-kube-api-access-jnvxm" (OuterVolumeSpecName: "kube-api-access-jnvxm") pod "301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" (UID: "301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5"). InnerVolumeSpecName "kube-api-access-jnvxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.979713 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-utilities" (OuterVolumeSpecName: "utilities") pod "301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" (UID: "301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.004144 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" (UID: "301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.015729 5136 generic.go:334] "Generic (PLEG): container finished" podID="301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" containerID="52f516ab12d99512d73f419dcdb0b2dce29d3e61b9dafa84863f5c5a03f74ee5" exitCode=0 Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.015783 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvjw4" event={"ID":"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5","Type":"ContainerDied","Data":"52f516ab12d99512d73f419dcdb0b2dce29d3e61b9dafa84863f5c5a03f74ee5"} Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.015808 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvjw4" event={"ID":"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5","Type":"ContainerDied","Data":"20ff1b017eb3949453a5c8e4e818cc934899998a077617b8ba9ac7f5655008d9"} Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.015846 5136 scope.go:117] "RemoveContainer" containerID="52f516ab12d99512d73f419dcdb0b2dce29d3e61b9dafa84863f5c5a03f74ee5" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.015947 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.023027 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" event={"ID":"37de93ad-331e-41ee-8f74-523100e01b09","Type":"ContainerStarted","Data":"368a61c7f48a01e7f5d4b69cf2321dc8dd8fda7ffb3f54cbdec66ed733ae02af"} Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.025015 5136 generic.go:334] "Generic (PLEG): container finished" podID="ff9e0ea6-add4-4087-83a6-f8d85588d6f2" containerID="96a902822e546e09f7a3de1657b0dcec6395a8a029066a0684766a1f0066cca8" exitCode=0 Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.025064 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w76x4" event={"ID":"ff9e0ea6-add4-4087-83a6-f8d85588d6f2","Type":"ContainerDied","Data":"96a902822e546e09f7a3de1657b0dcec6395a8a029066a0684766a1f0066cca8"} Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.025083 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w76x4" event={"ID":"ff9e0ea6-add4-4087-83a6-f8d85588d6f2","Type":"ContainerDied","Data":"b157f4d8fc06233ee1c508206beda933efb32ecebe1735837a9ebedb70d95894"} Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.025096 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.026604 5136 generic.go:334] "Generic (PLEG): container finished" podID="289bd2af-981a-4da9-af4b-77ef6fd7e526" containerID="a36097ca86c93e84e1fc63afa5daf31c84532f6db102fdbe3ea06918e836f5d7" exitCode=0 Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.026650 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" event={"ID":"289bd2af-981a-4da9-af4b-77ef6fd7e526","Type":"ContainerDied","Data":"a36097ca86c93e84e1fc63afa5daf31c84532f6db102fdbe3ea06918e836f5d7"} Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.026666 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" event={"ID":"289bd2af-981a-4da9-af4b-77ef6fd7e526","Type":"ContainerDied","Data":"8ab9396d1b0bd00b43015624038265fccd12c5928575d3620513f24c6d495ec3"} Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.026700 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.028949 5136 generic.go:334] "Generic (PLEG): container finished" podID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" containerID="5664be994ef1d148ab5592ebabed18dba44cfe3e1131eec1b13e05caa0a92780" exitCode=0 Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.029016 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.029032 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gnspw" event={"ID":"8a3a1d9c-1870-4a43-95fb-6d07e5619acb","Type":"ContainerDied","Data":"5664be994ef1d148ab5592ebabed18dba44cfe3e1131eec1b13e05caa0a92780"} Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.030789 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gnspw" event={"ID":"8a3a1d9c-1870-4a43-95fb-6d07e5619acb","Type":"ContainerDied","Data":"3e121a671baa07140a3d1cad1e8e105a436e8a55fb9911545361353494c2ebed"} Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.033478 5136 generic.go:334] "Generic (PLEG): container finished" podID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" containerID="061ea93c85bc00917c3ac8ea8d30b338d02a490b0ecea2ffda90681157877bc1" exitCode=0 Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.033506 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjck6" event={"ID":"899bb83b-4a95-49e5-8e8f-50c309b5d5e1","Type":"ContainerDied","Data":"061ea93c85bc00917c3ac8ea8d30b338d02a490b0ecea2ffda90681157877bc1"} Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.033524 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjck6" event={"ID":"899bb83b-4a95-49e5-8e8f-50c309b5d5e1","Type":"ContainerDied","Data":"a72e48c3682399c05912ec6fcf4bd3347709282c92c6d1cf4cee81749234bee6"} Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.033590 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.036544 5136 scope.go:117] "RemoveContainer" containerID="dcf8682364592d40230750711520ec56e59f7e993372f1598195878cd5fec968" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.072315 5136 scope.go:117] "RemoveContainer" containerID="e61e26151ba5d22e43e23f8384fad74f020a5d4b865b181bec56056cb9b3e52d" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.073036 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff9e0ea6-add4-4087-83a6-f8d85588d6f2" (UID: "ff9e0ea6-add4-4087-83a6-f8d85588d6f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.076626 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mbfm4"] Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.080004 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.080169 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.080179 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.080188 5136 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-jnvxm\" (UniqueName: \"kubernetes.io/projected/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-kube-api-access-jnvxm\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.084249 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mbfm4"] Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.089890 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvjw4"] Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.092954 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvjw4"] Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.096228 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hjck6"] Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.101301 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hjck6"] Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.104840 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gnspw"] Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.106662 5136 scope.go:117] "RemoveContainer" containerID="52f516ab12d99512d73f419dcdb0b2dce29d3e61b9dafa84863f5c5a03f74ee5" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.107070 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52f516ab12d99512d73f419dcdb0b2dce29d3e61b9dafa84863f5c5a03f74ee5\": container with ID starting with 52f516ab12d99512d73f419dcdb0b2dce29d3e61b9dafa84863f5c5a03f74ee5 not found: ID does not exist" containerID="52f516ab12d99512d73f419dcdb0b2dce29d3e61b9dafa84863f5c5a03f74ee5" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.107112 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"52f516ab12d99512d73f419dcdb0b2dce29d3e61b9dafa84863f5c5a03f74ee5"} err="failed to get container status \"52f516ab12d99512d73f419dcdb0b2dce29d3e61b9dafa84863f5c5a03f74ee5\": rpc error: code = NotFound desc = could not find container \"52f516ab12d99512d73f419dcdb0b2dce29d3e61b9dafa84863f5c5a03f74ee5\": container with ID starting with 52f516ab12d99512d73f419dcdb0b2dce29d3e61b9dafa84863f5c5a03f74ee5 not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.107139 5136 scope.go:117] "RemoveContainer" containerID="dcf8682364592d40230750711520ec56e59f7e993372f1598195878cd5fec968" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.107827 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcf8682364592d40230750711520ec56e59f7e993372f1598195878cd5fec968\": container with ID starting with dcf8682364592d40230750711520ec56e59f7e993372f1598195878cd5fec968 not found: ID does not exist" containerID="dcf8682364592d40230750711520ec56e59f7e993372f1598195878cd5fec968" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.107855 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcf8682364592d40230750711520ec56e59f7e993372f1598195878cd5fec968"} err="failed to get container status \"dcf8682364592d40230750711520ec56e59f7e993372f1598195878cd5fec968\": rpc error: code = NotFound desc = could not find container \"dcf8682364592d40230750711520ec56e59f7e993372f1598195878cd5fec968\": container with ID starting with dcf8682364592d40230750711520ec56e59f7e993372f1598195878cd5fec968 not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.107888 5136 scope.go:117] "RemoveContainer" containerID="e61e26151ba5d22e43e23f8384fad74f020a5d4b865b181bec56056cb9b3e52d" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.108086 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-gnspw"] Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.108120 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e61e26151ba5d22e43e23f8384fad74f020a5d4b865b181bec56056cb9b3e52d\": container with ID starting with e61e26151ba5d22e43e23f8384fad74f020a5d4b865b181bec56056cb9b3e52d not found: ID does not exist" containerID="e61e26151ba5d22e43e23f8384fad74f020a5d4b865b181bec56056cb9b3e52d" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.108141 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e61e26151ba5d22e43e23f8384fad74f020a5d4b865b181bec56056cb9b3e52d"} err="failed to get container status \"e61e26151ba5d22e43e23f8384fad74f020a5d4b865b181bec56056cb9b3e52d\": rpc error: code = NotFound desc = could not find container \"e61e26151ba5d22e43e23f8384fad74f020a5d4b865b181bec56056cb9b3e52d\": container with ID starting with e61e26151ba5d22e43e23f8384fad74f020a5d4b865b181bec56056cb9b3e52d not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.108156 5136 scope.go:117] "RemoveContainer" containerID="96a902822e546e09f7a3de1657b0dcec6395a8a029066a0684766a1f0066cca8" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.119950 5136 scope.go:117] "RemoveContainer" containerID="aab3768f3e229745fc4445cf28136207dc998f4d0eb15cc907b6c70eec03b784" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.140856 5136 scope.go:117] "RemoveContainer" containerID="0de5278ba167eea205a249eb3cbbe0e4bb17b3e9cc3e26b00698b091881ee675" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.152036 5136 scope.go:117] "RemoveContainer" containerID="96a902822e546e09f7a3de1657b0dcec6395a8a029066a0684766a1f0066cca8" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.152476 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"96a902822e546e09f7a3de1657b0dcec6395a8a029066a0684766a1f0066cca8\": container with ID starting with 96a902822e546e09f7a3de1657b0dcec6395a8a029066a0684766a1f0066cca8 not found: ID does not exist" containerID="96a902822e546e09f7a3de1657b0dcec6395a8a029066a0684766a1f0066cca8" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.152514 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96a902822e546e09f7a3de1657b0dcec6395a8a029066a0684766a1f0066cca8"} err="failed to get container status \"96a902822e546e09f7a3de1657b0dcec6395a8a029066a0684766a1f0066cca8\": rpc error: code = NotFound desc = could not find container \"96a902822e546e09f7a3de1657b0dcec6395a8a029066a0684766a1f0066cca8\": container with ID starting with 96a902822e546e09f7a3de1657b0dcec6395a8a029066a0684766a1f0066cca8 not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.152540 5136 scope.go:117] "RemoveContainer" containerID="aab3768f3e229745fc4445cf28136207dc998f4d0eb15cc907b6c70eec03b784" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.152891 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aab3768f3e229745fc4445cf28136207dc998f4d0eb15cc907b6c70eec03b784\": container with ID starting with aab3768f3e229745fc4445cf28136207dc998f4d0eb15cc907b6c70eec03b784 not found: ID does not exist" containerID="aab3768f3e229745fc4445cf28136207dc998f4d0eb15cc907b6c70eec03b784" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.152912 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aab3768f3e229745fc4445cf28136207dc998f4d0eb15cc907b6c70eec03b784"} err="failed to get container status \"aab3768f3e229745fc4445cf28136207dc998f4d0eb15cc907b6c70eec03b784\": rpc error: code = NotFound desc = could not find container 
\"aab3768f3e229745fc4445cf28136207dc998f4d0eb15cc907b6c70eec03b784\": container with ID starting with aab3768f3e229745fc4445cf28136207dc998f4d0eb15cc907b6c70eec03b784 not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.152927 5136 scope.go:117] "RemoveContainer" containerID="0de5278ba167eea205a249eb3cbbe0e4bb17b3e9cc3e26b00698b091881ee675" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.153174 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0de5278ba167eea205a249eb3cbbe0e4bb17b3e9cc3e26b00698b091881ee675\": container with ID starting with 0de5278ba167eea205a249eb3cbbe0e4bb17b3e9cc3e26b00698b091881ee675 not found: ID does not exist" containerID="0de5278ba167eea205a249eb3cbbe0e4bb17b3e9cc3e26b00698b091881ee675" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.153193 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0de5278ba167eea205a249eb3cbbe0e4bb17b3e9cc3e26b00698b091881ee675"} err="failed to get container status \"0de5278ba167eea205a249eb3cbbe0e4bb17b3e9cc3e26b00698b091881ee675\": rpc error: code = NotFound desc = could not find container \"0de5278ba167eea205a249eb3cbbe0e4bb17b3e9cc3e26b00698b091881ee675\": container with ID starting with 0de5278ba167eea205a249eb3cbbe0e4bb17b3e9cc3e26b00698b091881ee675 not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.153205 5136 scope.go:117] "RemoveContainer" containerID="a36097ca86c93e84e1fc63afa5daf31c84532f6db102fdbe3ea06918e836f5d7" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.169719 5136 scope.go:117] "RemoveContainer" containerID="83709a4627913d4131e8b55a35967c82cd8a60e366ebf5644c739dab2726f14a" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.181538 5136 scope.go:117] "RemoveContainer" containerID="a36097ca86c93e84e1fc63afa5daf31c84532f6db102fdbe3ea06918e836f5d7" Mar 20 06:56:29 crc 
kubenswrapper[5136]: E0320 06:56:29.181835 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a36097ca86c93e84e1fc63afa5daf31c84532f6db102fdbe3ea06918e836f5d7\": container with ID starting with a36097ca86c93e84e1fc63afa5daf31c84532f6db102fdbe3ea06918e836f5d7 not found: ID does not exist" containerID="a36097ca86c93e84e1fc63afa5daf31c84532f6db102fdbe3ea06918e836f5d7" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.181860 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a36097ca86c93e84e1fc63afa5daf31c84532f6db102fdbe3ea06918e836f5d7"} err="failed to get container status \"a36097ca86c93e84e1fc63afa5daf31c84532f6db102fdbe3ea06918e836f5d7\": rpc error: code = NotFound desc = could not find container \"a36097ca86c93e84e1fc63afa5daf31c84532f6db102fdbe3ea06918e836f5d7\": container with ID starting with a36097ca86c93e84e1fc63afa5daf31c84532f6db102fdbe3ea06918e836f5d7 not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.181880 5136 scope.go:117] "RemoveContainer" containerID="83709a4627913d4131e8b55a35967c82cd8a60e366ebf5644c739dab2726f14a" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.182298 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83709a4627913d4131e8b55a35967c82cd8a60e366ebf5644c739dab2726f14a\": container with ID starting with 83709a4627913d4131e8b55a35967c82cd8a60e366ebf5644c739dab2726f14a not found: ID does not exist" containerID="83709a4627913d4131e8b55a35967c82cd8a60e366ebf5644c739dab2726f14a" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.182316 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83709a4627913d4131e8b55a35967c82cd8a60e366ebf5644c739dab2726f14a"} err="failed to get container status 
\"83709a4627913d4131e8b55a35967c82cd8a60e366ebf5644c739dab2726f14a\": rpc error: code = NotFound desc = could not find container \"83709a4627913d4131e8b55a35967c82cd8a60e366ebf5644c739dab2726f14a\": container with ID starting with 83709a4627913d4131e8b55a35967c82cd8a60e366ebf5644c739dab2726f14a not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.182329 5136 scope.go:117] "RemoveContainer" containerID="5664be994ef1d148ab5592ebabed18dba44cfe3e1131eec1b13e05caa0a92780" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.200077 5136 scope.go:117] "RemoveContainer" containerID="53bde684225bb133cb5df58d7ee1ac04c8c2617d28ec0ad948d7f53954e6eb71" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.214172 5136 scope.go:117] "RemoveContainer" containerID="ecdb1a90af5f961f167e677237df30a562882d3df07533b14a56db3666fc4958" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.229404 5136 scope.go:117] "RemoveContainer" containerID="5664be994ef1d148ab5592ebabed18dba44cfe3e1131eec1b13e05caa0a92780" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.229717 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5664be994ef1d148ab5592ebabed18dba44cfe3e1131eec1b13e05caa0a92780\": container with ID starting with 5664be994ef1d148ab5592ebabed18dba44cfe3e1131eec1b13e05caa0a92780 not found: ID does not exist" containerID="5664be994ef1d148ab5592ebabed18dba44cfe3e1131eec1b13e05caa0a92780" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.229745 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5664be994ef1d148ab5592ebabed18dba44cfe3e1131eec1b13e05caa0a92780"} err="failed to get container status \"5664be994ef1d148ab5592ebabed18dba44cfe3e1131eec1b13e05caa0a92780\": rpc error: code = NotFound desc = could not find container \"5664be994ef1d148ab5592ebabed18dba44cfe3e1131eec1b13e05caa0a92780\": container with ID starting 
with 5664be994ef1d148ab5592ebabed18dba44cfe3e1131eec1b13e05caa0a92780 not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.229766 5136 scope.go:117] "RemoveContainer" containerID="53bde684225bb133cb5df58d7ee1ac04c8c2617d28ec0ad948d7f53954e6eb71" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.230051 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53bde684225bb133cb5df58d7ee1ac04c8c2617d28ec0ad948d7f53954e6eb71\": container with ID starting with 53bde684225bb133cb5df58d7ee1ac04c8c2617d28ec0ad948d7f53954e6eb71 not found: ID does not exist" containerID="53bde684225bb133cb5df58d7ee1ac04c8c2617d28ec0ad948d7f53954e6eb71" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.230074 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53bde684225bb133cb5df58d7ee1ac04c8c2617d28ec0ad948d7f53954e6eb71"} err="failed to get container status \"53bde684225bb133cb5df58d7ee1ac04c8c2617d28ec0ad948d7f53954e6eb71\": rpc error: code = NotFound desc = could not find container \"53bde684225bb133cb5df58d7ee1ac04c8c2617d28ec0ad948d7f53954e6eb71\": container with ID starting with 53bde684225bb133cb5df58d7ee1ac04c8c2617d28ec0ad948d7f53954e6eb71 not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.230087 5136 scope.go:117] "RemoveContainer" containerID="ecdb1a90af5f961f167e677237df30a562882d3df07533b14a56db3666fc4958" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.230607 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecdb1a90af5f961f167e677237df30a562882d3df07533b14a56db3666fc4958\": container with ID starting with ecdb1a90af5f961f167e677237df30a562882d3df07533b14a56db3666fc4958 not found: ID does not exist" containerID="ecdb1a90af5f961f167e677237df30a562882d3df07533b14a56db3666fc4958" Mar 20 06:56:29 
crc kubenswrapper[5136]: I0320 06:56:29.230630 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecdb1a90af5f961f167e677237df30a562882d3df07533b14a56db3666fc4958"} err="failed to get container status \"ecdb1a90af5f961f167e677237df30a562882d3df07533b14a56db3666fc4958\": rpc error: code = NotFound desc = could not find container \"ecdb1a90af5f961f167e677237df30a562882d3df07533b14a56db3666fc4958\": container with ID starting with ecdb1a90af5f961f167e677237df30a562882d3df07533b14a56db3666fc4958 not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.230644 5136 scope.go:117] "RemoveContainer" containerID="061ea93c85bc00917c3ac8ea8d30b338d02a490b0ecea2ffda90681157877bc1" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.243766 5136 scope.go:117] "RemoveContainer" containerID="61c7e442e15bdf8dabbf86eee0b74ef3d0bd7331f7f4cb4df8d344122f5d037c" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.258216 5136 scope.go:117] "RemoveContainer" containerID="eab31ab4f8f6aed5a537d839989db0fed9674585ea48c1c98222ce1535b9e5f5" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.269766 5136 scope.go:117] "RemoveContainer" containerID="061ea93c85bc00917c3ac8ea8d30b338d02a490b0ecea2ffda90681157877bc1" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.270171 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"061ea93c85bc00917c3ac8ea8d30b338d02a490b0ecea2ffda90681157877bc1\": container with ID starting with 061ea93c85bc00917c3ac8ea8d30b338d02a490b0ecea2ffda90681157877bc1 not found: ID does not exist" containerID="061ea93c85bc00917c3ac8ea8d30b338d02a490b0ecea2ffda90681157877bc1" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.270228 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"061ea93c85bc00917c3ac8ea8d30b338d02a490b0ecea2ffda90681157877bc1"} 
err="failed to get container status \"061ea93c85bc00917c3ac8ea8d30b338d02a490b0ecea2ffda90681157877bc1\": rpc error: code = NotFound desc = could not find container \"061ea93c85bc00917c3ac8ea8d30b338d02a490b0ecea2ffda90681157877bc1\": container with ID starting with 061ea93c85bc00917c3ac8ea8d30b338d02a490b0ecea2ffda90681157877bc1 not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.270256 5136 scope.go:117] "RemoveContainer" containerID="61c7e442e15bdf8dabbf86eee0b74ef3d0bd7331f7f4cb4df8d344122f5d037c" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.270706 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61c7e442e15bdf8dabbf86eee0b74ef3d0bd7331f7f4cb4df8d344122f5d037c\": container with ID starting with 61c7e442e15bdf8dabbf86eee0b74ef3d0bd7331f7f4cb4df8d344122f5d037c not found: ID does not exist" containerID="61c7e442e15bdf8dabbf86eee0b74ef3d0bd7331f7f4cb4df8d344122f5d037c" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.270756 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61c7e442e15bdf8dabbf86eee0b74ef3d0bd7331f7f4cb4df8d344122f5d037c"} err="failed to get container status \"61c7e442e15bdf8dabbf86eee0b74ef3d0bd7331f7f4cb4df8d344122f5d037c\": rpc error: code = NotFound desc = could not find container \"61c7e442e15bdf8dabbf86eee0b74ef3d0bd7331f7f4cb4df8d344122f5d037c\": container with ID starting with 61c7e442e15bdf8dabbf86eee0b74ef3d0bd7331f7f4cb4df8d344122f5d037c not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.270786 5136 scope.go:117] "RemoveContainer" containerID="eab31ab4f8f6aed5a537d839989db0fed9674585ea48c1c98222ce1535b9e5f5" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.271080 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"eab31ab4f8f6aed5a537d839989db0fed9674585ea48c1c98222ce1535b9e5f5\": container with ID starting with eab31ab4f8f6aed5a537d839989db0fed9674585ea48c1c98222ce1535b9e5f5 not found: ID does not exist" containerID="eab31ab4f8f6aed5a537d839989db0fed9674585ea48c1c98222ce1535b9e5f5" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.271115 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eab31ab4f8f6aed5a537d839989db0fed9674585ea48c1c98222ce1535b9e5f5"} err="failed to get container status \"eab31ab4f8f6aed5a537d839989db0fed9674585ea48c1c98222ce1535b9e5f5\": rpc error: code = NotFound desc = could not find container \"eab31ab4f8f6aed5a537d839989db0fed9674585ea48c1c98222ce1535b9e5f5\": container with ID starting with eab31ab4f8f6aed5a537d839989db0fed9674585ea48c1c98222ce1535b9e5f5 not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.391603 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w76x4"] Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.395705 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w76x4"] Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822063 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2mt29"] Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822302 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff9e0ea6-add4-4087-83a6-f8d85588d6f2" containerName="extract-content" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822320 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff9e0ea6-add4-4087-83a6-f8d85588d6f2" containerName="extract-content" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822336 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" containerName="registry-server" Mar 20 06:56:29 crc 
kubenswrapper[5136]: I0320 06:56:29.822345 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" containerName="registry-server" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822359 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" containerName="extract-utilities" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822367 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" containerName="extract-utilities" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822378 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" containerName="extract-content" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822386 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" containerName="extract-content" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822398 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" containerName="extract-utilities" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822405 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" containerName="extract-utilities" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822415 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" containerName="extract-utilities" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822423 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" containerName="extract-utilities" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822431 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff9e0ea6-add4-4087-83a6-f8d85588d6f2" containerName="registry-server" Mar 20 06:56:29 crc 
kubenswrapper[5136]: I0320 06:56:29.822438 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff9e0ea6-add4-4087-83a6-f8d85588d6f2" containerName="registry-server" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822449 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" containerName="registry-server" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822456 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" containerName="registry-server" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822467 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" containerName="extract-content" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822474 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" containerName="extract-content" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822484 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" containerName="extract-content" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822494 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" containerName="extract-content" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822505 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" containerName="registry-server" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822514 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" containerName="registry-server" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822523 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="289bd2af-981a-4da9-af4b-77ef6fd7e526" containerName="marketplace-operator" Mar 20 06:56:29 crc 
kubenswrapper[5136]: I0320 06:56:29.822531 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="289bd2af-981a-4da9-af4b-77ef6fd7e526" containerName="marketplace-operator" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822540 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff9e0ea6-add4-4087-83a6-f8d85588d6f2" containerName="extract-utilities" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822547 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff9e0ea6-add4-4087-83a6-f8d85588d6f2" containerName="extract-utilities" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822558 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="289bd2af-981a-4da9-af4b-77ef6fd7e526" containerName="marketplace-operator" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822565 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="289bd2af-981a-4da9-af4b-77ef6fd7e526" containerName="marketplace-operator" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822660 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff9e0ea6-add4-4087-83a6-f8d85588d6f2" containerName="registry-server" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822677 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" containerName="registry-server" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822687 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="289bd2af-981a-4da9-af4b-77ef6fd7e526" containerName="marketplace-operator" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822696 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" containerName="registry-server" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822709 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" 
containerName="registry-server" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822724 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="289bd2af-981a-4da9-af4b-77ef6fd7e526" containerName="marketplace-operator" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.823390 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2mt29" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.826487 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.839310 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2mt29"] Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.888374 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mxtm\" (UniqueName: \"kubernetes.io/projected/75cf71d1-5e27-4089-bf58-1f389690d498-kube-api-access-5mxtm\") pod \"redhat-marketplace-2mt29\" (UID: \"75cf71d1-5e27-4089-bf58-1f389690d498\") " pod="openshift-marketplace/redhat-marketplace-2mt29" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.888412 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75cf71d1-5e27-4089-bf58-1f389690d498-catalog-content\") pod \"redhat-marketplace-2mt29\" (UID: \"75cf71d1-5e27-4089-bf58-1f389690d498\") " pod="openshift-marketplace/redhat-marketplace-2mt29" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.888457 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75cf71d1-5e27-4089-bf58-1f389690d498-utilities\") pod \"redhat-marketplace-2mt29\" (UID: \"75cf71d1-5e27-4089-bf58-1f389690d498\") " 
pod="openshift-marketplace/redhat-marketplace-2mt29" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.989927 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mxtm\" (UniqueName: \"kubernetes.io/projected/75cf71d1-5e27-4089-bf58-1f389690d498-kube-api-access-5mxtm\") pod \"redhat-marketplace-2mt29\" (UID: \"75cf71d1-5e27-4089-bf58-1f389690d498\") " pod="openshift-marketplace/redhat-marketplace-2mt29" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.989964 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75cf71d1-5e27-4089-bf58-1f389690d498-catalog-content\") pod \"redhat-marketplace-2mt29\" (UID: \"75cf71d1-5e27-4089-bf58-1f389690d498\") " pod="openshift-marketplace/redhat-marketplace-2mt29" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.990011 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75cf71d1-5e27-4089-bf58-1f389690d498-utilities\") pod \"redhat-marketplace-2mt29\" (UID: \"75cf71d1-5e27-4089-bf58-1f389690d498\") " pod="openshift-marketplace/redhat-marketplace-2mt29" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.990444 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75cf71d1-5e27-4089-bf58-1f389690d498-utilities\") pod \"redhat-marketplace-2mt29\" (UID: \"75cf71d1-5e27-4089-bf58-1f389690d498\") " pod="openshift-marketplace/redhat-marketplace-2mt29" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.990529 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75cf71d1-5e27-4089-bf58-1f389690d498-catalog-content\") pod \"redhat-marketplace-2mt29\" (UID: \"75cf71d1-5e27-4089-bf58-1f389690d498\") " pod="openshift-marketplace/redhat-marketplace-2mt29" 
Mar 20 06:56:30 crc kubenswrapper[5136]: I0320 06:56:30.008536 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mxtm\" (UniqueName: \"kubernetes.io/projected/75cf71d1-5e27-4089-bf58-1f389690d498-kube-api-access-5mxtm\") pod \"redhat-marketplace-2mt29\" (UID: \"75cf71d1-5e27-4089-bf58-1f389690d498\") " pod="openshift-marketplace/redhat-marketplace-2mt29" Mar 20 06:56:30 crc kubenswrapper[5136]: I0320 06:56:30.044313 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" event={"ID":"37de93ad-331e-41ee-8f74-523100e01b09","Type":"ContainerStarted","Data":"497d3e5714638801946f1bf0cc90a9285aff4aae2f8254e31294466b1d477b9d"} Mar 20 06:56:30 crc kubenswrapper[5136]: I0320 06:56:30.044514 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" Mar 20 06:56:30 crc kubenswrapper[5136]: I0320 06:56:30.048484 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" Mar 20 06:56:30 crc kubenswrapper[5136]: I0320 06:56:30.064541 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" podStartSLOduration=2.064517502 podStartE2EDuration="2.064517502s" podCreationTimestamp="2026-03-20 06:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:56:30.060280618 +0000 UTC m=+422.319591829" watchObservedRunningTime="2026-03-20 06:56:30.064517502 +0000 UTC m=+422.323828653" Mar 20 06:56:30 crc kubenswrapper[5136]: I0320 06:56:30.141558 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2mt29" Mar 20 06:56:30 crc kubenswrapper[5136]: I0320 06:56:30.402401 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="289bd2af-981a-4da9-af4b-77ef6fd7e526" path="/var/lib/kubelet/pods/289bd2af-981a-4da9-af4b-77ef6fd7e526/volumes" Mar 20 06:56:30 crc kubenswrapper[5136]: I0320 06:56:30.402891 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" path="/var/lib/kubelet/pods/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5/volumes" Mar 20 06:56:30 crc kubenswrapper[5136]: I0320 06:56:30.403419 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" path="/var/lib/kubelet/pods/899bb83b-4a95-49e5-8e8f-50c309b5d5e1/volumes" Mar 20 06:56:30 crc kubenswrapper[5136]: I0320 06:56:30.405089 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" path="/var/lib/kubelet/pods/8a3a1d9c-1870-4a43-95fb-6d07e5619acb/volumes" Mar 20 06:56:30 crc kubenswrapper[5136]: I0320 06:56:30.405686 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff9e0ea6-add4-4087-83a6-f8d85588d6f2" path="/var/lib/kubelet/pods/ff9e0ea6-add4-4087-83a6-f8d85588d6f2/volumes" Mar 20 06:56:30 crc kubenswrapper[5136]: I0320 06:56:30.574256 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2mt29"] Mar 20 06:56:30 crc kubenswrapper[5136]: W0320 06:56:30.588158 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75cf71d1_5e27_4089_bf58_1f389690d498.slice/crio-9276c6d954614cf23e9c04e52e57b2a253765182465a22b173d3ca0eb4de1b14 WatchSource:0}: Error finding container 9276c6d954614cf23e9c04e52e57b2a253765182465a22b173d3ca0eb4de1b14: Status 404 returned error can't find the container with id 
9276c6d954614cf23e9c04e52e57b2a253765182465a22b173d3ca0eb4de1b14 Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.052734 5136 generic.go:334] "Generic (PLEG): container finished" podID="75cf71d1-5e27-4089-bf58-1f389690d498" containerID="a89de7a7de765351b0517f7ba5e755d0c47d37d4861e0172d7a8a1a982e27464" exitCode=0 Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.052852 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2mt29" event={"ID":"75cf71d1-5e27-4089-bf58-1f389690d498","Type":"ContainerDied","Data":"a89de7a7de765351b0517f7ba5e755d0c47d37d4861e0172d7a8a1a982e27464"} Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.052892 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2mt29" event={"ID":"75cf71d1-5e27-4089-bf58-1f389690d498","Type":"ContainerStarted","Data":"9276c6d954614cf23e9c04e52e57b2a253765182465a22b173d3ca0eb4de1b14"} Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.231720 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zsmxp"] Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.233222 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zsmxp" Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.236051 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.245582 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zsmxp"] Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.306174 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d0ba076-45a3-4e99-80de-774db592dfc5-utilities\") pod \"redhat-operators-zsmxp\" (UID: \"2d0ba076-45a3-4e99-80de-774db592dfc5\") " pod="openshift-marketplace/redhat-operators-zsmxp" Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.306226 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d0ba076-45a3-4e99-80de-774db592dfc5-catalog-content\") pod \"redhat-operators-zsmxp\" (UID: \"2d0ba076-45a3-4e99-80de-774db592dfc5\") " pod="openshift-marketplace/redhat-operators-zsmxp" Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.306423 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rkrd\" (UniqueName: \"kubernetes.io/projected/2d0ba076-45a3-4e99-80de-774db592dfc5-kube-api-access-2rkrd\") pod \"redhat-operators-zsmxp\" (UID: \"2d0ba076-45a3-4e99-80de-774db592dfc5\") " pod="openshift-marketplace/redhat-operators-zsmxp" Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.407981 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d0ba076-45a3-4e99-80de-774db592dfc5-utilities\") pod \"redhat-operators-zsmxp\" (UID: \"2d0ba076-45a3-4e99-80de-774db592dfc5\") " 
pod="openshift-marketplace/redhat-operators-zsmxp" Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.408045 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d0ba076-45a3-4e99-80de-774db592dfc5-catalog-content\") pod \"redhat-operators-zsmxp\" (UID: \"2d0ba076-45a3-4e99-80de-774db592dfc5\") " pod="openshift-marketplace/redhat-operators-zsmxp" Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.408092 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rkrd\" (UniqueName: \"kubernetes.io/projected/2d0ba076-45a3-4e99-80de-774db592dfc5-kube-api-access-2rkrd\") pod \"redhat-operators-zsmxp\" (UID: \"2d0ba076-45a3-4e99-80de-774db592dfc5\") " pod="openshift-marketplace/redhat-operators-zsmxp" Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.408617 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d0ba076-45a3-4e99-80de-774db592dfc5-utilities\") pod \"redhat-operators-zsmxp\" (UID: \"2d0ba076-45a3-4e99-80de-774db592dfc5\") " pod="openshift-marketplace/redhat-operators-zsmxp" Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.408870 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d0ba076-45a3-4e99-80de-774db592dfc5-catalog-content\") pod \"redhat-operators-zsmxp\" (UID: \"2d0ba076-45a3-4e99-80de-774db592dfc5\") " pod="openshift-marketplace/redhat-operators-zsmxp" Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.427423 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rkrd\" (UniqueName: \"kubernetes.io/projected/2d0ba076-45a3-4e99-80de-774db592dfc5-kube-api-access-2rkrd\") pod \"redhat-operators-zsmxp\" (UID: \"2d0ba076-45a3-4e99-80de-774db592dfc5\") " pod="openshift-marketplace/redhat-operators-zsmxp" Mar 
20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.547019 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zsmxp" Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.003996 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zsmxp"] Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.060698 5136 generic.go:334] "Generic (PLEG): container finished" podID="75cf71d1-5e27-4089-bf58-1f389690d498" containerID="0dddf0b2897b3d23c02e5ddfb86042209415e1483cbb8aba0921474934ac5aa3" exitCode=0 Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.060765 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2mt29" event={"ID":"75cf71d1-5e27-4089-bf58-1f389690d498","Type":"ContainerDied","Data":"0dddf0b2897b3d23c02e5ddfb86042209415e1483cbb8aba0921474934ac5aa3"} Mar 20 06:56:32 crc kubenswrapper[5136]: W0320 06:56:32.068315 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d0ba076_45a3_4e99_80de_774db592dfc5.slice/crio-6d03f66bcb4fa51f7956526d0ae9669da1cf34482f214d7252c1106f9ad093c6 WatchSource:0}: Error finding container 6d03f66bcb4fa51f7956526d0ae9669da1cf34482f214d7252c1106f9ad093c6: Status 404 returned error can't find the container with id 6d03f66bcb4fa51f7956526d0ae9669da1cf34482f214d7252c1106f9ad093c6 Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.233287 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-598hk"] Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.234303 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-598hk" Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.236180 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.243369 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-598hk"] Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.318476 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1bb4bc-89fb-4965-892b-8db898976bc0-catalog-content\") pod \"certified-operators-598hk\" (UID: \"6b1bb4bc-89fb-4965-892b-8db898976bc0\") " pod="openshift-marketplace/certified-operators-598hk" Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.318741 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1bb4bc-89fb-4965-892b-8db898976bc0-utilities\") pod \"certified-operators-598hk\" (UID: \"6b1bb4bc-89fb-4965-892b-8db898976bc0\") " pod="openshift-marketplace/certified-operators-598hk" Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.318864 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92nb6\" (UniqueName: \"kubernetes.io/projected/6b1bb4bc-89fb-4965-892b-8db898976bc0-kube-api-access-92nb6\") pod \"certified-operators-598hk\" (UID: \"6b1bb4bc-89fb-4965-892b-8db898976bc0\") " pod="openshift-marketplace/certified-operators-598hk" Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.419808 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1bb4bc-89fb-4965-892b-8db898976bc0-utilities\") pod \"certified-operators-598hk\" (UID: 
\"6b1bb4bc-89fb-4965-892b-8db898976bc0\") " pod="openshift-marketplace/certified-operators-598hk" Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.419882 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92nb6\" (UniqueName: \"kubernetes.io/projected/6b1bb4bc-89fb-4965-892b-8db898976bc0-kube-api-access-92nb6\") pod \"certified-operators-598hk\" (UID: \"6b1bb4bc-89fb-4965-892b-8db898976bc0\") " pod="openshift-marketplace/certified-operators-598hk" Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.419959 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1bb4bc-89fb-4965-892b-8db898976bc0-catalog-content\") pod \"certified-operators-598hk\" (UID: \"6b1bb4bc-89fb-4965-892b-8db898976bc0\") " pod="openshift-marketplace/certified-operators-598hk" Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.420444 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1bb4bc-89fb-4965-892b-8db898976bc0-catalog-content\") pod \"certified-operators-598hk\" (UID: \"6b1bb4bc-89fb-4965-892b-8db898976bc0\") " pod="openshift-marketplace/certified-operators-598hk" Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.420452 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1bb4bc-89fb-4965-892b-8db898976bc0-utilities\") pod \"certified-operators-598hk\" (UID: \"6b1bb4bc-89fb-4965-892b-8db898976bc0\") " pod="openshift-marketplace/certified-operators-598hk" Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.438000 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92nb6\" (UniqueName: \"kubernetes.io/projected/6b1bb4bc-89fb-4965-892b-8db898976bc0-kube-api-access-92nb6\") pod \"certified-operators-598hk\" (UID: 
\"6b1bb4bc-89fb-4965-892b-8db898976bc0\") " pod="openshift-marketplace/certified-operators-598hk" Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.592174 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-598hk" Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.847747 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-598hk"] Mar 20 06:56:32 crc kubenswrapper[5136]: W0320 06:56:32.855771 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b1bb4bc_89fb_4965_892b_8db898976bc0.slice/crio-c19648ccc57110e321d7cbc4973c13b1a90f7580234bc3c58cf75da56b1995c5 WatchSource:0}: Error finding container c19648ccc57110e321d7cbc4973c13b1a90f7580234bc3c58cf75da56b1995c5: Status 404 returned error can't find the container with id c19648ccc57110e321d7cbc4973c13b1a90f7580234bc3c58cf75da56b1995c5 Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.066096 5136 generic.go:334] "Generic (PLEG): container finished" podID="2d0ba076-45a3-4e99-80de-774db592dfc5" containerID="a5d57dbb13a1f3757cd37f7bc62263340d3ac4a3fbfb8f9cd07f6d492e39d36c" exitCode=0 Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.066150 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsmxp" event={"ID":"2d0ba076-45a3-4e99-80de-774db592dfc5","Type":"ContainerDied","Data":"a5d57dbb13a1f3757cd37f7bc62263340d3ac4a3fbfb8f9cd07f6d492e39d36c"} Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.066490 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsmxp" event={"ID":"2d0ba076-45a3-4e99-80de-774db592dfc5","Type":"ContainerStarted","Data":"6d03f66bcb4fa51f7956526d0ae9669da1cf34482f214d7252c1106f9ad093c6"} Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.068133 5136 generic.go:334] "Generic (PLEG): 
container finished" podID="6b1bb4bc-89fb-4965-892b-8db898976bc0" containerID="3046189ba49b0be310b4ce25e92c6fe1a1c7b873323c95bc0bd11b6f05f13f89" exitCode=0 Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.068189 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-598hk" event={"ID":"6b1bb4bc-89fb-4965-892b-8db898976bc0","Type":"ContainerDied","Data":"3046189ba49b0be310b4ce25e92c6fe1a1c7b873323c95bc0bd11b6f05f13f89"} Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.068207 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-598hk" event={"ID":"6b1bb4bc-89fb-4965-892b-8db898976bc0","Type":"ContainerStarted","Data":"c19648ccc57110e321d7cbc4973c13b1a90f7580234bc3c58cf75da56b1995c5"} Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.070202 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2mt29" event={"ID":"75cf71d1-5e27-4089-bf58-1f389690d498","Type":"ContainerStarted","Data":"d0f97766edc28a9379c3e8e1b7e5e01489814145224110af63c2cfe44a717a7a"} Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.129688 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2mt29" podStartSLOduration=2.724341967 podStartE2EDuration="4.129669493s" podCreationTimestamp="2026-03-20 06:56:29 +0000 UTC" firstStartedPulling="2026-03-20 06:56:31.054456303 +0000 UTC m=+423.313767464" lastFinishedPulling="2026-03-20 06:56:32.459783819 +0000 UTC m=+424.719094990" observedRunningTime="2026-03-20 06:56:33.126230515 +0000 UTC m=+425.385541696" watchObservedRunningTime="2026-03-20 06:56:33.129669493 +0000 UTC m=+425.388980664" Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.625182 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qfgkr"] Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.626694 5136 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qfgkr" Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.630978 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.638293 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qfgkr"] Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.743296 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1d2d341-1694-4f55-860a-46b11bac80c8-catalog-content\") pod \"community-operators-qfgkr\" (UID: \"e1d2d341-1694-4f55-860a-46b11bac80c8\") " pod="openshift-marketplace/community-operators-qfgkr" Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.743333 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6x6v\" (UniqueName: \"kubernetes.io/projected/e1d2d341-1694-4f55-860a-46b11bac80c8-kube-api-access-r6x6v\") pod \"community-operators-qfgkr\" (UID: \"e1d2d341-1694-4f55-860a-46b11bac80c8\") " pod="openshift-marketplace/community-operators-qfgkr" Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.743391 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1d2d341-1694-4f55-860a-46b11bac80c8-utilities\") pod \"community-operators-qfgkr\" (UID: \"e1d2d341-1694-4f55-860a-46b11bac80c8\") " pod="openshift-marketplace/community-operators-qfgkr" Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.844221 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1d2d341-1694-4f55-860a-46b11bac80c8-catalog-content\") 
pod \"community-operators-qfgkr\" (UID: \"e1d2d341-1694-4f55-860a-46b11bac80c8\") " pod="openshift-marketplace/community-operators-qfgkr" Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.844258 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6x6v\" (UniqueName: \"kubernetes.io/projected/e1d2d341-1694-4f55-860a-46b11bac80c8-kube-api-access-r6x6v\") pod \"community-operators-qfgkr\" (UID: \"e1d2d341-1694-4f55-860a-46b11bac80c8\") " pod="openshift-marketplace/community-operators-qfgkr" Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.844290 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1d2d341-1694-4f55-860a-46b11bac80c8-utilities\") pod \"community-operators-qfgkr\" (UID: \"e1d2d341-1694-4f55-860a-46b11bac80c8\") " pod="openshift-marketplace/community-operators-qfgkr" Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.844869 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1d2d341-1694-4f55-860a-46b11bac80c8-utilities\") pod \"community-operators-qfgkr\" (UID: \"e1d2d341-1694-4f55-860a-46b11bac80c8\") " pod="openshift-marketplace/community-operators-qfgkr" Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.845095 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1d2d341-1694-4f55-860a-46b11bac80c8-catalog-content\") pod \"community-operators-qfgkr\" (UID: \"e1d2d341-1694-4f55-860a-46b11bac80c8\") " pod="openshift-marketplace/community-operators-qfgkr" Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.862425 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6x6v\" (UniqueName: \"kubernetes.io/projected/e1d2d341-1694-4f55-860a-46b11bac80c8-kube-api-access-r6x6v\") pod \"community-operators-qfgkr\" 
(UID: \"e1d2d341-1694-4f55-860a-46b11bac80c8\") " pod="openshift-marketplace/community-operators-qfgkr" Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.972901 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qfgkr" Mar 20 06:56:34 crc kubenswrapper[5136]: I0320 06:56:34.129702 5136 generic.go:334] "Generic (PLEG): container finished" podID="6b1bb4bc-89fb-4965-892b-8db898976bc0" containerID="4dadae52a17622bdc271e4cd40b7b8d7159ca5417b3f02335d28ed4031d547ac" exitCode=0 Mar 20 06:56:34 crc kubenswrapper[5136]: I0320 06:56:34.129769 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-598hk" event={"ID":"6b1bb4bc-89fb-4965-892b-8db898976bc0","Type":"ContainerDied","Data":"4dadae52a17622bdc271e4cd40b7b8d7159ca5417b3f02335d28ed4031d547ac"} Mar 20 06:56:34 crc kubenswrapper[5136]: I0320 06:56:34.139087 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsmxp" event={"ID":"2d0ba076-45a3-4e99-80de-774db592dfc5","Type":"ContainerStarted","Data":"2b7f572170075bb33521c97b19da5ee4153208d19babee031c6a18cb7c553929"} Mar 20 06:56:34 crc kubenswrapper[5136]: I0320 06:56:34.443195 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qfgkr"] Mar 20 06:56:34 crc kubenswrapper[5136]: W0320 06:56:34.513790 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1d2d341_1694_4f55_860a_46b11bac80c8.slice/crio-a5974c459c2386be53440ce7c343a53cc05081c5b95afdcf1296d6de5d8a6e98 WatchSource:0}: Error finding container a5974c459c2386be53440ce7c343a53cc05081c5b95afdcf1296d6de5d8a6e98: Status 404 returned error can't find the container with id a5974c459c2386be53440ce7c343a53cc05081c5b95afdcf1296d6de5d8a6e98 Mar 20 06:56:35 crc kubenswrapper[5136]: I0320 06:56:35.146569 5136 generic.go:334] "Generic 
(PLEG): container finished" podID="2d0ba076-45a3-4e99-80de-774db592dfc5" containerID="2b7f572170075bb33521c97b19da5ee4153208d19babee031c6a18cb7c553929" exitCode=0 Mar 20 06:56:35 crc kubenswrapper[5136]: I0320 06:56:35.146657 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsmxp" event={"ID":"2d0ba076-45a3-4e99-80de-774db592dfc5","Type":"ContainerDied","Data":"2b7f572170075bb33521c97b19da5ee4153208d19babee031c6a18cb7c553929"} Mar 20 06:56:35 crc kubenswrapper[5136]: I0320 06:56:35.149206 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-598hk" event={"ID":"6b1bb4bc-89fb-4965-892b-8db898976bc0","Type":"ContainerStarted","Data":"8d80bfba3b0e6cbf81f375aa1672b68e1ada30d3cafa144104d761bf21887186"} Mar 20 06:56:35 crc kubenswrapper[5136]: I0320 06:56:35.150981 5136 generic.go:334] "Generic (PLEG): container finished" podID="e1d2d341-1694-4f55-860a-46b11bac80c8" containerID="6c16257981aad8a2f2ba3ee029498da7aa41927021b136d319282cd85c6a814c" exitCode=0 Mar 20 06:56:35 crc kubenswrapper[5136]: I0320 06:56:35.151023 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qfgkr" event={"ID":"e1d2d341-1694-4f55-860a-46b11bac80c8","Type":"ContainerDied","Data":"6c16257981aad8a2f2ba3ee029498da7aa41927021b136d319282cd85c6a814c"} Mar 20 06:56:35 crc kubenswrapper[5136]: I0320 06:56:35.151052 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qfgkr" event={"ID":"e1d2d341-1694-4f55-860a-46b11bac80c8","Type":"ContainerStarted","Data":"a5974c459c2386be53440ce7c343a53cc05081c5b95afdcf1296d6de5d8a6e98"} Mar 20 06:56:35 crc kubenswrapper[5136]: I0320 06:56:35.181749 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-598hk" podStartSLOduration=1.7075786549999998 podStartE2EDuration="3.18173008s" 
podCreationTimestamp="2026-03-20 06:56:32 +0000 UTC" firstStartedPulling="2026-03-20 06:56:33.069696434 +0000 UTC m=+425.329007585" lastFinishedPulling="2026-03-20 06:56:34.543847859 +0000 UTC m=+426.803159010" observedRunningTime="2026-03-20 06:56:35.178503498 +0000 UTC m=+427.437814649" watchObservedRunningTime="2026-03-20 06:56:35.18173008 +0000 UTC m=+427.441041221" Mar 20 06:56:36 crc kubenswrapper[5136]: I0320 06:56:36.168388 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsmxp" event={"ID":"2d0ba076-45a3-4e99-80de-774db592dfc5","Type":"ContainerStarted","Data":"c9843d8b2bef9509e80941a197c366215ec3619457851006774f5d5e3c7a883a"} Mar 20 06:56:36 crc kubenswrapper[5136]: I0320 06:56:36.171322 5136 generic.go:334] "Generic (PLEG): container finished" podID="e1d2d341-1694-4f55-860a-46b11bac80c8" containerID="6aa6be1edd96adead610b068e164a4ecd1bf0b9561c441981a9d8ad159f57697" exitCode=0 Mar 20 06:56:36 crc kubenswrapper[5136]: I0320 06:56:36.171379 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qfgkr" event={"ID":"e1d2d341-1694-4f55-860a-46b11bac80c8","Type":"ContainerDied","Data":"6aa6be1edd96adead610b068e164a4ecd1bf0b9561c441981a9d8ad159f57697"} Mar 20 06:56:36 crc kubenswrapper[5136]: I0320 06:56:36.194525 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zsmxp" podStartSLOduration=2.707804163 podStartE2EDuration="5.194509015s" podCreationTimestamp="2026-03-20 06:56:31 +0000 UTC" firstStartedPulling="2026-03-20 06:56:33.067233136 +0000 UTC m=+425.326544287" lastFinishedPulling="2026-03-20 06:56:35.553937968 +0000 UTC m=+427.813249139" observedRunningTime="2026-03-20 06:56:36.192273373 +0000 UTC m=+428.451584524" watchObservedRunningTime="2026-03-20 06:56:36.194509015 +0000 UTC m=+428.453820166" Mar 20 06:56:37 crc kubenswrapper[5136]: I0320 06:56:37.178403 5136 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-qfgkr" event={"ID":"e1d2d341-1694-4f55-860a-46b11bac80c8","Type":"ContainerStarted","Data":"613d087bf39841470cd03f88985bdf919817769301d013c32de892e6d64fc871"} Mar 20 06:56:37 crc kubenswrapper[5136]: I0320 06:56:37.199408 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qfgkr" podStartSLOduration=2.779295786 podStartE2EDuration="4.199391339s" podCreationTimestamp="2026-03-20 06:56:33 +0000 UTC" firstStartedPulling="2026-03-20 06:56:35.152806885 +0000 UTC m=+427.412118056" lastFinishedPulling="2026-03-20 06:56:36.572902458 +0000 UTC m=+428.832213609" observedRunningTime="2026-03-20 06:56:37.195887698 +0000 UTC m=+429.455198849" watchObservedRunningTime="2026-03-20 06:56:37.199391339 +0000 UTC m=+429.458702510" Mar 20 06:56:40 crc kubenswrapper[5136]: I0320 06:56:40.142224 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2mt29" Mar 20 06:56:40 crc kubenswrapper[5136]: I0320 06:56:40.142564 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2mt29" Mar 20 06:56:40 crc kubenswrapper[5136]: I0320 06:56:40.185046 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2mt29" Mar 20 06:56:40 crc kubenswrapper[5136]: I0320 06:56:40.231013 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2mt29" Mar 20 06:56:41 crc kubenswrapper[5136]: I0320 06:56:41.548010 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zsmxp" Mar 20 06:56:41 crc kubenswrapper[5136]: I0320 06:56:41.548055 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zsmxp" Mar 20 
06:56:42 crc kubenswrapper[5136]: I0320 06:56:42.592407 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-598hk" Mar 20 06:56:42 crc kubenswrapper[5136]: I0320 06:56:42.592725 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-598hk" Mar 20 06:56:42 crc kubenswrapper[5136]: I0320 06:56:42.603281 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zsmxp" podUID="2d0ba076-45a3-4e99-80de-774db592dfc5" containerName="registry-server" probeResult="failure" output=< Mar 20 06:56:42 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 06:56:42 crc kubenswrapper[5136]: > Mar 20 06:56:42 crc kubenswrapper[5136]: I0320 06:56:42.627861 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-598hk" Mar 20 06:56:43 crc kubenswrapper[5136]: I0320 06:56:43.247804 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-598hk" Mar 20 06:56:43 crc kubenswrapper[5136]: I0320 06:56:43.974046 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qfgkr" Mar 20 06:56:43 crc kubenswrapper[5136]: I0320 06:56:43.974125 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qfgkr" Mar 20 06:56:44 crc kubenswrapper[5136]: I0320 06:56:44.009198 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qfgkr" Mar 20 06:56:44 crc kubenswrapper[5136]: I0320 06:56:44.257686 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qfgkr" Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.307115 5136 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8s5gx"] Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.307947 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.325712 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8s5gx"] Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.505684 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31623b1c-0c30-4654-99eb-3919c754586a-trusted-ca\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.505724 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.505745 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/31623b1c-0c30-4654-99eb-3919c754586a-registry-certificates\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.505771 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" 
(UniqueName: \"kubernetes.io/empty-dir/31623b1c-0c30-4654-99eb-3919c754586a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.505794 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqgk5\" (UniqueName: \"kubernetes.io/projected/31623b1c-0c30-4654-99eb-3919c754586a-kube-api-access-kqgk5\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.505984 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/31623b1c-0c30-4654-99eb-3919c754586a-bound-sa-token\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.506052 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/31623b1c-0c30-4654-99eb-3919c754586a-registry-tls\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.506076 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/31623b1c-0c30-4654-99eb-3919c754586a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 
20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.530144 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.607314 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/31623b1c-0c30-4654-99eb-3919c754586a-bound-sa-token\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.607697 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/31623b1c-0c30-4654-99eb-3919c754586a-registry-tls\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.607735 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/31623b1c-0c30-4654-99eb-3919c754586a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.607914 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31623b1c-0c30-4654-99eb-3919c754586a-trusted-ca\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: 
\"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.607958 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/31623b1c-0c30-4654-99eb-3919c754586a-registry-certificates\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.608000 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/31623b1c-0c30-4654-99eb-3919c754586a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.608041 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqgk5\" (UniqueName: \"kubernetes.io/projected/31623b1c-0c30-4654-99eb-3919c754586a-kube-api-access-kqgk5\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.609269 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/31623b1c-0c30-4654-99eb-3919c754586a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.609762 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/31623b1c-0c30-4654-99eb-3919c754586a-trusted-ca\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.610176 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/31623b1c-0c30-4654-99eb-3919c754586a-registry-certificates\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.613835 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/31623b1c-0c30-4654-99eb-3919c754586a-registry-tls\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.614033 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/31623b1c-0c30-4654-99eb-3919c754586a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.629523 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqgk5\" (UniqueName: \"kubernetes.io/projected/31623b1c-0c30-4654-99eb-3919c754586a-kube-api-access-kqgk5\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.629916 5136 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/31623b1c-0c30-4654-99eb-3919c754586a-bound-sa-token\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.925171 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:56:48 crc kubenswrapper[5136]: I0320 06:56:48.378888 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8s5gx"] Mar 20 06:56:48 crc kubenswrapper[5136]: W0320 06:56:48.384926 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31623b1c_0c30_4654_99eb_3919c754586a.slice/crio-eac31a64abff47046c960020b5922875604071a6549de598031c3bba01676145 WatchSource:0}: Error finding container eac31a64abff47046c960020b5922875604071a6549de598031c3bba01676145: Status 404 returned error can't find the container with id eac31a64abff47046c960020b5922875604071a6549de598031c3bba01676145 Mar 20 06:56:49 crc kubenswrapper[5136]: I0320 06:56:49.239415 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" event={"ID":"31623b1c-0c30-4654-99eb-3919c754586a","Type":"ContainerStarted","Data":"77fa2c2d30ffa27d0374c901c22e2e5cd184dfb4706fca48a4c737680738c21e"} Mar 20 06:56:49 crc kubenswrapper[5136]: I0320 06:56:49.239467 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" event={"ID":"31623b1c-0c30-4654-99eb-3919c754586a","Type":"ContainerStarted","Data":"eac31a64abff47046c960020b5922875604071a6549de598031c3bba01676145"} Mar 20 06:56:49 crc kubenswrapper[5136]: I0320 06:56:49.239719 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:56:49 crc kubenswrapper[5136]: I0320 06:56:49.271839 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" podStartSLOduration=2.271780141 podStartE2EDuration="2.271780141s" podCreationTimestamp="2026-03-20 06:56:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:56:49.263904352 +0000 UTC m=+441.523215503" watchObservedRunningTime="2026-03-20 06:56:49.271780141 +0000 UTC m=+441.531091302" Mar 20 06:56:51 crc kubenswrapper[5136]: I0320 06:56:51.593992 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zsmxp" Mar 20 06:56:51 crc kubenswrapper[5136]: I0320 06:56:51.639784 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zsmxp" Mar 20 06:57:05 crc kubenswrapper[5136]: I0320 06:57:05.574421 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:57:05 crc kubenswrapper[5136]: I0320 06:57:05.575145 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:57:05 crc kubenswrapper[5136]: I0320 06:57:05.577897 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:57:05 crc kubenswrapper[5136]: I0320 06:57:05.585865 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:57:05 crc kubenswrapper[5136]: I0320 06:57:05.796757 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:57:06 crc kubenswrapper[5136]: I0320 06:57:06.357711 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"fd9f1a089c8d6f7ac25b5f394b4badef16e4b89051cb170c474d913e5d4f4145"} Mar 20 06:57:06 crc kubenswrapper[5136]: I0320 06:57:06.358301 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8f64215c02d0190dc726f7bd61c5372cc40fcf94cfa7eebbe4909a83bdec1832"} Mar 20 06:57:06 crc kubenswrapper[5136]: I0320 06:57:06.485774 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:57:06 crc kubenswrapper[5136]: I0320 06:57:06.485845 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:57:06 crc kubenswrapper[5136]: I0320 06:57:06.490928 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:57:06 crc kubenswrapper[5136]: I0320 06:57:06.491070 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:57:06 crc kubenswrapper[5136]: I0320 06:57:06.497234 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:57:06 crc kubenswrapper[5136]: I0320 06:57:06.601436 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:57:06 crc kubenswrapper[5136]: W0320 06:57:06.794043 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-44d6f20d22e242524d7127dfd8438e337de95cdb2421ae0a6e82b2b86757d090 WatchSource:0}: Error finding container 44d6f20d22e242524d7127dfd8438e337de95cdb2421ae0a6e82b2b86757d090: Status 404 returned error can't find the container with id 44d6f20d22e242524d7127dfd8438e337de95cdb2421ae0a6e82b2b86757d090 Mar 20 06:57:06 crc kubenswrapper[5136]: W0320 06:57:06.945664 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-fb32a5ab42245e1eb6d3b378c3ed82becd1a675731b0e6b4d0c133d4c4cf9537 WatchSource:0}: Error finding container fb32a5ab42245e1eb6d3b378c3ed82becd1a675731b0e6b4d0c133d4c4cf9537: Status 404 returned error can't find the container with id fb32a5ab42245e1eb6d3b378c3ed82becd1a675731b0e6b4d0c133d4c4cf9537 Mar 20 06:57:07 crc kubenswrapper[5136]: I0320 06:57:07.364838 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"18aee57588522e6b9ee43fb6a19224614bfb0d260631248a7e48434f99a252a9"} Mar 20 06:57:07 crc kubenswrapper[5136]: I0320 06:57:07.365251 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"fb32a5ab42245e1eb6d3b378c3ed82becd1a675731b0e6b4d0c133d4c4cf9537"} Mar 20 06:57:07 crc kubenswrapper[5136]: I0320 06:57:07.366212 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"fd812e5f2b792b8d74b1c711bc124896bb0d6d54b891e3b006264acc68868e41"} Mar 20 06:57:07 crc kubenswrapper[5136]: I0320 06:57:07.366234 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"44d6f20d22e242524d7127dfd8438e337de95cdb2421ae0a6e82b2b86757d090"} Mar 20 06:57:07 crc kubenswrapper[5136]: I0320 06:57:07.366415 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:57:07 crc kubenswrapper[5136]: I0320 06:57:07.929855 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:57:07 crc kubenswrapper[5136]: I0320 06:57:07.993791 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fk4pl"] Mar 20 06:57:15 crc kubenswrapper[5136]: I0320 06:57:15.822584 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 06:57:15 crc kubenswrapper[5136]: I0320 06:57:15.823098 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.041599 5136 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" podUID="16ee2b48-5dea-48c6-888a-ae52ff44afa4" containerName="registry" containerID="cri-o://6c63fda8ff6ae62b077f4f27f5dde80e5a28a6eac953ffd0077b516fdb381d9f" gracePeriod=30 Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.487357 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.538571 5136 generic.go:334] "Generic (PLEG): container finished" podID="16ee2b48-5dea-48c6-888a-ae52ff44afa4" containerID="6c63fda8ff6ae62b077f4f27f5dde80e5a28a6eac953ffd0077b516fdb381d9f" exitCode=0 Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.538630 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.538634 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" event={"ID":"16ee2b48-5dea-48c6-888a-ae52ff44afa4","Type":"ContainerDied","Data":"6c63fda8ff6ae62b077f4f27f5dde80e5a28a6eac953ffd0077b516fdb381d9f"} Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.538860 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" event={"ID":"16ee2b48-5dea-48c6-888a-ae52ff44afa4","Type":"ContainerDied","Data":"2150aeb737b2b0e0b3d8cc086d915c665b772b5d543fe941e2ca6a177c62b012"} Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.538913 5136 scope.go:117] "RemoveContainer" containerID="6c63fda8ff6ae62b077f4f27f5dde80e5a28a6eac953ffd0077b516fdb381d9f" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.556498 5136 scope.go:117] "RemoveContainer" containerID="6c63fda8ff6ae62b077f4f27f5dde80e5a28a6eac953ffd0077b516fdb381d9f" Mar 20 06:57:33 crc kubenswrapper[5136]: E0320 
06:57:33.557079 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c63fda8ff6ae62b077f4f27f5dde80e5a28a6eac953ffd0077b516fdb381d9f\": container with ID starting with 6c63fda8ff6ae62b077f4f27f5dde80e5a28a6eac953ffd0077b516fdb381d9f not found: ID does not exist" containerID="6c63fda8ff6ae62b077f4f27f5dde80e5a28a6eac953ffd0077b516fdb381d9f" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.557139 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c63fda8ff6ae62b077f4f27f5dde80e5a28a6eac953ffd0077b516fdb381d9f"} err="failed to get container status \"6c63fda8ff6ae62b077f4f27f5dde80e5a28a6eac953ffd0077b516fdb381d9f\": rpc error: code = NotFound desc = could not find container \"6c63fda8ff6ae62b077f4f27f5dde80e5a28a6eac953ffd0077b516fdb381d9f\": container with ID starting with 6c63fda8ff6ae62b077f4f27f5dde80e5a28a6eac953ffd0077b516fdb381d9f not found: ID does not exist" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.663664 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/16ee2b48-5dea-48c6-888a-ae52ff44afa4-ca-trust-extracted\") pod \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.663753 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-bound-sa-token\") pod \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.663869 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16ee2b48-5dea-48c6-888a-ae52ff44afa4-trusted-ca\") pod 
\"16ee2b48-5dea-48c6-888a-ae52ff44afa4\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.663910 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-registry-tls\") pod \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.664064 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.664176 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl98m\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-kube-api-access-kl98m\") pod \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.664215 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/16ee2b48-5dea-48c6-888a-ae52ff44afa4-registry-certificates\") pod \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.664251 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/16ee2b48-5dea-48c6-888a-ae52ff44afa4-installation-pull-secrets\") pod \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.666086 5136 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16ee2b48-5dea-48c6-888a-ae52ff44afa4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "16ee2b48-5dea-48c6-888a-ae52ff44afa4" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.666506 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16ee2b48-5dea-48c6-888a-ae52ff44afa4-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "16ee2b48-5dea-48c6-888a-ae52ff44afa4" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.672271 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-kube-api-access-kl98m" (OuterVolumeSpecName: "kube-api-access-kl98m") pod "16ee2b48-5dea-48c6-888a-ae52ff44afa4" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4"). InnerVolumeSpecName "kube-api-access-kl98m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.672490 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ee2b48-5dea-48c6-888a-ae52ff44afa4-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "16ee2b48-5dea-48c6-888a-ae52ff44afa4" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.672861 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "16ee2b48-5dea-48c6-888a-ae52ff44afa4" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.676639 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "16ee2b48-5dea-48c6-888a-ae52ff44afa4" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.688596 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "16ee2b48-5dea-48c6-888a-ae52ff44afa4" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.698222 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16ee2b48-5dea-48c6-888a-ae52ff44afa4-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "16ee2b48-5dea-48c6-888a-ae52ff44afa4" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.765696 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl98m\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-kube-api-access-kl98m\") on node \"crc\" DevicePath \"\"" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.765734 5136 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/16ee2b48-5dea-48c6-888a-ae52ff44afa4-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.765746 5136 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/16ee2b48-5dea-48c6-888a-ae52ff44afa4-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.765758 5136 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/16ee2b48-5dea-48c6-888a-ae52ff44afa4-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.765769 5136 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.765781 5136 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16ee2b48-5dea-48c6-888a-ae52ff44afa4-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.765792 5136 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 20 06:57:33 crc 
kubenswrapper[5136]: I0320 06:57:33.873118 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fk4pl"] Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.877702 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fk4pl"] Mar 20 06:57:34 crc kubenswrapper[5136]: I0320 06:57:34.408846 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16ee2b48-5dea-48c6-888a-ae52ff44afa4" path="/var/lib/kubelet/pods/16ee2b48-5dea-48c6-888a-ae52ff44afa4/volumes" Mar 20 06:57:36 crc kubenswrapper[5136]: I0320 06:57:36.607104 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:57:45 crc kubenswrapper[5136]: I0320 06:57:45.861170 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 06:57:45 crc kubenswrapper[5136]: I0320 06:57:45.862136 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 06:58:00 crc kubenswrapper[5136]: I0320 06:58:00.150568 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566498-pc964"] Mar 20 06:58:00 crc kubenswrapper[5136]: E0320 06:58:00.151884 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16ee2b48-5dea-48c6-888a-ae52ff44afa4" containerName="registry" Mar 20 06:58:00 crc kubenswrapper[5136]: I0320 06:58:00.151918 5136 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="16ee2b48-5dea-48c6-888a-ae52ff44afa4" containerName="registry" Mar 20 06:58:00 crc kubenswrapper[5136]: I0320 06:58:00.152180 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="16ee2b48-5dea-48c6-888a-ae52ff44afa4" containerName="registry" Mar 20 06:58:00 crc kubenswrapper[5136]: I0320 06:58:00.152990 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566498-pc964" Mar 20 06:58:00 crc kubenswrapper[5136]: I0320 06:58:00.159153 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 06:58:00 crc kubenswrapper[5136]: I0320 06:58:00.159528 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 06:58:00 crc kubenswrapper[5136]: I0320 06:58:00.160079 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 06:58:00 crc kubenswrapper[5136]: I0320 06:58:00.163972 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566498-pc964"] Mar 20 06:58:00 crc kubenswrapper[5136]: I0320 06:58:00.347339 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvmv6\" (UniqueName: \"kubernetes.io/projected/96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d-kube-api-access-wvmv6\") pod \"auto-csr-approver-29566498-pc964\" (UID: \"96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d\") " pod="openshift-infra/auto-csr-approver-29566498-pc964" Mar 20 06:58:00 crc kubenswrapper[5136]: I0320 06:58:00.449270 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvmv6\" (UniqueName: \"kubernetes.io/projected/96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d-kube-api-access-wvmv6\") pod \"auto-csr-approver-29566498-pc964\" (UID: \"96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d\") " 
pod="openshift-infra/auto-csr-approver-29566498-pc964" Mar 20 06:58:00 crc kubenswrapper[5136]: I0320 06:58:00.485342 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvmv6\" (UniqueName: \"kubernetes.io/projected/96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d-kube-api-access-wvmv6\") pod \"auto-csr-approver-29566498-pc964\" (UID: \"96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d\") " pod="openshift-infra/auto-csr-approver-29566498-pc964" Mar 20 06:58:00 crc kubenswrapper[5136]: I0320 06:58:00.489986 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566498-pc964" Mar 20 06:58:00 crc kubenswrapper[5136]: I0320 06:58:00.761519 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566498-pc964"] Mar 20 06:58:01 crc kubenswrapper[5136]: I0320 06:58:01.749273 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566498-pc964" event={"ID":"96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d","Type":"ContainerStarted","Data":"18b14d417d86bb1444abe37f82fd2d88b81fafc542a3b9491fd4c2419eda43db"} Mar 20 06:58:02 crc kubenswrapper[5136]: I0320 06:58:02.756551 5136 generic.go:334] "Generic (PLEG): container finished" podID="96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d" containerID="c00038ddb8710afcc46ffbe4488fc8393e21c46c51a56a16f3faef658211be51" exitCode=0 Mar 20 06:58:02 crc kubenswrapper[5136]: I0320 06:58:02.756610 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566498-pc964" event={"ID":"96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d","Type":"ContainerDied","Data":"c00038ddb8710afcc46ffbe4488fc8393e21c46c51a56a16f3faef658211be51"} Mar 20 06:58:04 crc kubenswrapper[5136]: I0320 06:58:04.085090 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566498-pc964" Mar 20 06:58:04 crc kubenswrapper[5136]: I0320 06:58:04.100542 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvmv6\" (UniqueName: \"kubernetes.io/projected/96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d-kube-api-access-wvmv6\") pod \"96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d\" (UID: \"96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d\") " Mar 20 06:58:04 crc kubenswrapper[5136]: I0320 06:58:04.109782 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d-kube-api-access-wvmv6" (OuterVolumeSpecName: "kube-api-access-wvmv6") pod "96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d" (UID: "96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d"). InnerVolumeSpecName "kube-api-access-wvmv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:58:04 crc kubenswrapper[5136]: I0320 06:58:04.201641 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvmv6\" (UniqueName: \"kubernetes.io/projected/96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d-kube-api-access-wvmv6\") on node \"crc\" DevicePath \"\"" Mar 20 06:58:04 crc kubenswrapper[5136]: I0320 06:58:04.771846 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566498-pc964" event={"ID":"96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d","Type":"ContainerDied","Data":"18b14d417d86bb1444abe37f82fd2d88b81fafc542a3b9491fd4c2419eda43db"} Mar 20 06:58:04 crc kubenswrapper[5136]: I0320 06:58:04.771885 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18b14d417d86bb1444abe37f82fd2d88b81fafc542a3b9491fd4c2419eda43db" Mar 20 06:58:04 crc kubenswrapper[5136]: I0320 06:58:04.771940 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566498-pc964" Mar 20 06:58:05 crc kubenswrapper[5136]: I0320 06:58:05.159917 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566492-9gbqz"] Mar 20 06:58:05 crc kubenswrapper[5136]: I0320 06:58:05.166348 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566492-9gbqz"] Mar 20 06:58:06 crc kubenswrapper[5136]: I0320 06:58:06.408344 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="760c854a-7b9d-4582-9bcc-faf077008e0f" path="/var/lib/kubelet/pods/760c854a-7b9d-4582-9bcc-faf077008e0f/volumes" Mar 20 06:58:15 crc kubenswrapper[5136]: I0320 06:58:15.821620 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 06:58:15 crc kubenswrapper[5136]: I0320 06:58:15.822366 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 06:58:15 crc kubenswrapper[5136]: I0320 06:58:15.822470 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:58:15 crc kubenswrapper[5136]: I0320 06:58:15.823368 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"75f86a961b5495e1a65ce80a7e52156279382dd73f0b9cba9f06cd8c4be35b13"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 06:58:15 crc kubenswrapper[5136]: I0320 06:58:15.823470 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://75f86a961b5495e1a65ce80a7e52156279382dd73f0b9cba9f06cd8c4be35b13" gracePeriod=600 Mar 20 06:58:16 crc kubenswrapper[5136]: I0320 06:58:16.875118 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="75f86a961b5495e1a65ce80a7e52156279382dd73f0b9cba9f06cd8c4be35b13" exitCode=0 Mar 20 06:58:16 crc kubenswrapper[5136]: I0320 06:58:16.875231 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"75f86a961b5495e1a65ce80a7e52156279382dd73f0b9cba9f06cd8c4be35b13"} Mar 20 06:58:16 crc kubenswrapper[5136]: I0320 06:58:16.875952 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"1f8f243bed8a27d330b6c6e9f13f637dfb5738fe5b93ac9f02957069f4cfce8a"} Mar 20 06:58:16 crc kubenswrapper[5136]: I0320 06:58:16.875988 5136 scope.go:117] "RemoveContainer" containerID="f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.151752 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj"] Mar 20 07:00:00 crc kubenswrapper[5136]: E0320 07:00:00.152730 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d" containerName="oc" Mar 20 07:00:00 crc 
kubenswrapper[5136]: I0320 07:00:00.152750 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d" containerName="oc" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.152928 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d" containerName="oc" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.153452 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.155782 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.156095 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.188359 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566500-wd9ph"] Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.189380 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566500-wd9ph"] Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.189408 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj"] Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.189516 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566500-wd9ph" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.191510 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvcln\" (UniqueName: \"kubernetes.io/projected/bdbefeb1-6fcf-4868-a30e-9fc5a016daf9-kube-api-access-vvcln\") pod \"auto-csr-approver-29566500-wd9ph\" (UID: \"bdbefeb1-6fcf-4868-a30e-9fc5a016daf9\") " pod="openshift-infra/auto-csr-approver-29566500-wd9ph" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.192102 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.193104 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.193407 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.293043 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvcln\" (UniqueName: \"kubernetes.io/projected/bdbefeb1-6fcf-4868-a30e-9fc5a016daf9-kube-api-access-vvcln\") pod \"auto-csr-approver-29566500-wd9ph\" (UID: \"bdbefeb1-6fcf-4868-a30e-9fc5a016daf9\") " pod="openshift-infra/auto-csr-approver-29566500-wd9ph" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.293092 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd400575-ef96-4721-b617-29c85991f7f0-config-volume\") pod \"collect-profiles-29566500-ljqvj\" (UID: \"cd400575-ef96-4721-b617-29c85991f7f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.293147 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzcjx\" (UniqueName: \"kubernetes.io/projected/cd400575-ef96-4721-b617-29c85991f7f0-kube-api-access-dzcjx\") pod \"collect-profiles-29566500-ljqvj\" (UID: \"cd400575-ef96-4721-b617-29c85991f7f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.293176 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd400575-ef96-4721-b617-29c85991f7f0-secret-volume\") pod \"collect-profiles-29566500-ljqvj\" (UID: \"cd400575-ef96-4721-b617-29c85991f7f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.310769 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvcln\" (UniqueName: \"kubernetes.io/projected/bdbefeb1-6fcf-4868-a30e-9fc5a016daf9-kube-api-access-vvcln\") pod \"auto-csr-approver-29566500-wd9ph\" (UID: \"bdbefeb1-6fcf-4868-a30e-9fc5a016daf9\") " pod="openshift-infra/auto-csr-approver-29566500-wd9ph" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.394485 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzcjx\" (UniqueName: \"kubernetes.io/projected/cd400575-ef96-4721-b617-29c85991f7f0-kube-api-access-dzcjx\") pod \"collect-profiles-29566500-ljqvj\" (UID: \"cd400575-ef96-4721-b617-29c85991f7f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.394547 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd400575-ef96-4721-b617-29c85991f7f0-secret-volume\") pod \"collect-profiles-29566500-ljqvj\" (UID: \"cd400575-ef96-4721-b617-29c85991f7f0\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.394588 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd400575-ef96-4721-b617-29c85991f7f0-config-volume\") pod \"collect-profiles-29566500-ljqvj\" (UID: \"cd400575-ef96-4721-b617-29c85991f7f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.395594 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd400575-ef96-4721-b617-29c85991f7f0-config-volume\") pod \"collect-profiles-29566500-ljqvj\" (UID: \"cd400575-ef96-4721-b617-29c85991f7f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.398040 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd400575-ef96-4721-b617-29c85991f7f0-secret-volume\") pod \"collect-profiles-29566500-ljqvj\" (UID: \"cd400575-ef96-4721-b617-29c85991f7f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.411088 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzcjx\" (UniqueName: \"kubernetes.io/projected/cd400575-ef96-4721-b617-29c85991f7f0-kube-api-access-dzcjx\") pod \"collect-profiles-29566500-ljqvj\" (UID: \"cd400575-ef96-4721-b617-29c85991f7f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.522230 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.537061 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566500-wd9ph" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.713583 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj"] Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.744564 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566500-wd9ph"] Mar 20 07:00:00 crc kubenswrapper[5136]: W0320 07:00:00.746330 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdbefeb1_6fcf_4868_a30e_9fc5a016daf9.slice/crio-6f05b6face7214db5c8aac57e7e559a2b93d05033e1d01f4917effb8c468fe92 WatchSource:0}: Error finding container 6f05b6face7214db5c8aac57e7e559a2b93d05033e1d01f4917effb8c468fe92: Status 404 returned error can't find the container with id 6f05b6face7214db5c8aac57e7e559a2b93d05033e1d01f4917effb8c468fe92 Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.748299 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 07:00:01 crc kubenswrapper[5136]: I0320 07:00:01.374059 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566500-wd9ph" event={"ID":"bdbefeb1-6fcf-4868-a30e-9fc5a016daf9","Type":"ContainerStarted","Data":"6f05b6face7214db5c8aac57e7e559a2b93d05033e1d01f4917effb8c468fe92"} Mar 20 07:00:01 crc kubenswrapper[5136]: I0320 07:00:01.375446 5136 generic.go:334] "Generic (PLEG): container finished" podID="cd400575-ef96-4721-b617-29c85991f7f0" containerID="3ae7890d536278f5580d52b91ca1ce94c8e1b0783ea4d154db2f9c059b03bba9" exitCode=0 Mar 20 07:00:01 crc kubenswrapper[5136]: I0320 
07:00:01.375488 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" event={"ID":"cd400575-ef96-4721-b617-29c85991f7f0","Type":"ContainerDied","Data":"3ae7890d536278f5580d52b91ca1ce94c8e1b0783ea4d154db2f9c059b03bba9"} Mar 20 07:00:01 crc kubenswrapper[5136]: I0320 07:00:01.375518 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" event={"ID":"cd400575-ef96-4721-b617-29c85991f7f0","Type":"ContainerStarted","Data":"5bd48978a01a6a95902dab2c80c81f1c5d49c46cbbb607a25b21662394b4b67c"} Mar 20 07:00:02 crc kubenswrapper[5136]: I0320 07:00:02.595078 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" Mar 20 07:00:02 crc kubenswrapper[5136]: I0320 07:00:02.744480 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd400575-ef96-4721-b617-29c85991f7f0-config-volume\") pod \"cd400575-ef96-4721-b617-29c85991f7f0\" (UID: \"cd400575-ef96-4721-b617-29c85991f7f0\") " Mar 20 07:00:02 crc kubenswrapper[5136]: I0320 07:00:02.744540 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzcjx\" (UniqueName: \"kubernetes.io/projected/cd400575-ef96-4721-b617-29c85991f7f0-kube-api-access-dzcjx\") pod \"cd400575-ef96-4721-b617-29c85991f7f0\" (UID: \"cd400575-ef96-4721-b617-29c85991f7f0\") " Mar 20 07:00:02 crc kubenswrapper[5136]: I0320 07:00:02.744594 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd400575-ef96-4721-b617-29c85991f7f0-secret-volume\") pod \"cd400575-ef96-4721-b617-29c85991f7f0\" (UID: \"cd400575-ef96-4721-b617-29c85991f7f0\") " Mar 20 07:00:02 crc kubenswrapper[5136]: I0320 07:00:02.745651 5136 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd400575-ef96-4721-b617-29c85991f7f0-config-volume" (OuterVolumeSpecName: "config-volume") pod "cd400575-ef96-4721-b617-29c85991f7f0" (UID: "cd400575-ef96-4721-b617-29c85991f7f0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:00:02 crc kubenswrapper[5136]: I0320 07:00:02.749608 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd400575-ef96-4721-b617-29c85991f7f0-kube-api-access-dzcjx" (OuterVolumeSpecName: "kube-api-access-dzcjx") pod "cd400575-ef96-4721-b617-29c85991f7f0" (UID: "cd400575-ef96-4721-b617-29c85991f7f0"). InnerVolumeSpecName "kube-api-access-dzcjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:00:02 crc kubenswrapper[5136]: I0320 07:00:02.749825 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd400575-ef96-4721-b617-29c85991f7f0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cd400575-ef96-4721-b617-29c85991f7f0" (UID: "cd400575-ef96-4721-b617-29c85991f7f0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:00:02 crc kubenswrapper[5136]: I0320 07:00:02.846099 5136 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd400575-ef96-4721-b617-29c85991f7f0-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 07:00:02 crc kubenswrapper[5136]: I0320 07:00:02.846519 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzcjx\" (UniqueName: \"kubernetes.io/projected/cd400575-ef96-4721-b617-29c85991f7f0-kube-api-access-dzcjx\") on node \"crc\" DevicePath \"\"" Mar 20 07:00:02 crc kubenswrapper[5136]: I0320 07:00:02.846536 5136 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd400575-ef96-4721-b617-29c85991f7f0-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 07:00:03 crc kubenswrapper[5136]: I0320 07:00:03.388348 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" event={"ID":"cd400575-ef96-4721-b617-29c85991f7f0","Type":"ContainerDied","Data":"5bd48978a01a6a95902dab2c80c81f1c5d49c46cbbb607a25b21662394b4b67c"} Mar 20 07:00:03 crc kubenswrapper[5136]: I0320 07:00:03.388391 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bd48978a01a6a95902dab2c80c81f1c5d49c46cbbb607a25b21662394b4b67c" Mar 20 07:00:03 crc kubenswrapper[5136]: I0320 07:00:03.388391 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" Mar 20 07:00:20 crc kubenswrapper[5136]: I0320 07:00:20.490609 5136 generic.go:334] "Generic (PLEG): container finished" podID="bdbefeb1-6fcf-4868-a30e-9fc5a016daf9" containerID="c700468779627f6961723a07d9133659d892564be897053e621e205bd14c1cbb" exitCode=0 Mar 20 07:00:20 crc kubenswrapper[5136]: I0320 07:00:20.490694 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566500-wd9ph" event={"ID":"bdbefeb1-6fcf-4868-a30e-9fc5a016daf9","Type":"ContainerDied","Data":"c700468779627f6961723a07d9133659d892564be897053e621e205bd14c1cbb"} Mar 20 07:00:21 crc kubenswrapper[5136]: I0320 07:00:21.743433 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566500-wd9ph" Mar 20 07:00:21 crc kubenswrapper[5136]: I0320 07:00:21.866598 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvcln\" (UniqueName: \"kubernetes.io/projected/bdbefeb1-6fcf-4868-a30e-9fc5a016daf9-kube-api-access-vvcln\") pod \"bdbefeb1-6fcf-4868-a30e-9fc5a016daf9\" (UID: \"bdbefeb1-6fcf-4868-a30e-9fc5a016daf9\") " Mar 20 07:00:21 crc kubenswrapper[5136]: I0320 07:00:21.874368 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdbefeb1-6fcf-4868-a30e-9fc5a016daf9-kube-api-access-vvcln" (OuterVolumeSpecName: "kube-api-access-vvcln") pod "bdbefeb1-6fcf-4868-a30e-9fc5a016daf9" (UID: "bdbefeb1-6fcf-4868-a30e-9fc5a016daf9"). InnerVolumeSpecName "kube-api-access-vvcln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:00:21 crc kubenswrapper[5136]: I0320 07:00:21.968589 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvcln\" (UniqueName: \"kubernetes.io/projected/bdbefeb1-6fcf-4868-a30e-9fc5a016daf9-kube-api-access-vvcln\") on node \"crc\" DevicePath \"\"" Mar 20 07:00:22 crc kubenswrapper[5136]: I0320 07:00:22.512505 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566500-wd9ph" event={"ID":"bdbefeb1-6fcf-4868-a30e-9fc5a016daf9","Type":"ContainerDied","Data":"6f05b6face7214db5c8aac57e7e559a2b93d05033e1d01f4917effb8c468fe92"} Mar 20 07:00:22 crc kubenswrapper[5136]: I0320 07:00:22.512874 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f05b6face7214db5c8aac57e7e559a2b93d05033e1d01f4917effb8c468fe92" Mar 20 07:00:22 crc kubenswrapper[5136]: I0320 07:00:22.512967 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566500-wd9ph" Mar 20 07:00:22 crc kubenswrapper[5136]: I0320 07:00:22.811831 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566494-v7mrb"] Mar 20 07:00:22 crc kubenswrapper[5136]: I0320 07:00:22.815376 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566494-v7mrb"] Mar 20 07:00:24 crc kubenswrapper[5136]: I0320 07:00:24.405404 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="793ba114-16f6-4ad2-bc47-daee6a819a00" path="/var/lib/kubelet/pods/793ba114-16f6-4ad2-bc47-daee6a819a00/volumes" Mar 20 07:00:28 crc kubenswrapper[5136]: I0320 07:00:28.683768 5136 scope.go:117] "RemoveContainer" containerID="bcfaf55d80db554feaeb3774c52e47f9f050ba6262ba6f06d4b21a8da6ad81d5" Mar 20 07:00:28 crc kubenswrapper[5136]: I0320 07:00:28.741003 5136 scope.go:117] "RemoveContainer" 
containerID="0fb2ad703d57143cc353d8d80b57c7e9e8375a9fe5053d2f01f256b52277bbbb" Mar 20 07:00:28 crc kubenswrapper[5136]: I0320 07:00:28.766564 5136 scope.go:117] "RemoveContainer" containerID="27efcdb323bdc3f56a43d4c3d542dd5fa1e9865775e2f1ad9ed6c1b623aed3e5" Mar 20 07:00:45 crc kubenswrapper[5136]: I0320 07:00:45.821777 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:00:45 crc kubenswrapper[5136]: I0320 07:00:45.822399 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:01:15 crc kubenswrapper[5136]: I0320 07:01:15.821661 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:01:15 crc kubenswrapper[5136]: I0320 07:01:15.824352 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:01:45 crc kubenswrapper[5136]: I0320 07:01:45.821565 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:01:45 crc kubenswrapper[5136]: I0320 07:01:45.822161 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:01:45 crc kubenswrapper[5136]: I0320 07:01:45.822222 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 07:01:45 crc kubenswrapper[5136]: I0320 07:01:45.823017 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1f8f243bed8a27d330b6c6e9f13f637dfb5738fe5b93ac9f02957069f4cfce8a"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 07:01:45 crc kubenswrapper[5136]: I0320 07:01:45.823109 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://1f8f243bed8a27d330b6c6e9f13f637dfb5738fe5b93ac9f02957069f4cfce8a" gracePeriod=600 Mar 20 07:01:46 crc kubenswrapper[5136]: I0320 07:01:46.049474 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="1f8f243bed8a27d330b6c6e9f13f637dfb5738fe5b93ac9f02957069f4cfce8a" exitCode=0 Mar 20 07:01:46 crc kubenswrapper[5136]: I0320 07:01:46.049498 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" 
event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"1f8f243bed8a27d330b6c6e9f13f637dfb5738fe5b93ac9f02957069f4cfce8a"} Mar 20 07:01:46 crc kubenswrapper[5136]: I0320 07:01:46.049817 5136 scope.go:117] "RemoveContainer" containerID="75f86a961b5495e1a65ce80a7e52156279382dd73f0b9cba9f06cd8c4be35b13" Mar 20 07:01:47 crc kubenswrapper[5136]: I0320 07:01:47.059341 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"efc1d8deaa7f1e1d784e8be4e8d258b13fb86298f9c0df94ee4191513f62ba52"} Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.150583 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566502-5gzjz"] Mar 20 07:02:00 crc kubenswrapper[5136]: E0320 07:02:00.151630 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdbefeb1-6fcf-4868-a30e-9fc5a016daf9" containerName="oc" Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.151655 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdbefeb1-6fcf-4868-a30e-9fc5a016daf9" containerName="oc" Mar 20 07:02:00 crc kubenswrapper[5136]: E0320 07:02:00.151679 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd400575-ef96-4721-b617-29c85991f7f0" containerName="collect-profiles" Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.151693 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd400575-ef96-4721-b617-29c85991f7f0" containerName="collect-profiles" Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.151951 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdbefeb1-6fcf-4868-a30e-9fc5a016daf9" containerName="oc" Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.151990 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd400575-ef96-4721-b617-29c85991f7f0" 
containerName="collect-profiles" Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.152642 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566502-5gzjz" Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.158484 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.158878 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.159672 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.170108 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566502-5gzjz"] Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.269531 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r82mt\" (UniqueName: \"kubernetes.io/projected/a7239b4f-11f6-4f5c-8d78-c233e33b8a79-kube-api-access-r82mt\") pod \"auto-csr-approver-29566502-5gzjz\" (UID: \"a7239b4f-11f6-4f5c-8d78-c233e33b8a79\") " pod="openshift-infra/auto-csr-approver-29566502-5gzjz" Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.371262 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r82mt\" (UniqueName: \"kubernetes.io/projected/a7239b4f-11f6-4f5c-8d78-c233e33b8a79-kube-api-access-r82mt\") pod \"auto-csr-approver-29566502-5gzjz\" (UID: \"a7239b4f-11f6-4f5c-8d78-c233e33b8a79\") " pod="openshift-infra/auto-csr-approver-29566502-5gzjz" Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.393635 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r82mt\" (UniqueName: 
\"kubernetes.io/projected/a7239b4f-11f6-4f5c-8d78-c233e33b8a79-kube-api-access-r82mt\") pod \"auto-csr-approver-29566502-5gzjz\" (UID: \"a7239b4f-11f6-4f5c-8d78-c233e33b8a79\") " pod="openshift-infra/auto-csr-approver-29566502-5gzjz" Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.495145 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566502-5gzjz" Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.914923 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566502-5gzjz"] Mar 20 07:02:01 crc kubenswrapper[5136]: I0320 07:02:01.152465 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566502-5gzjz" event={"ID":"a7239b4f-11f6-4f5c-8d78-c233e33b8a79","Type":"ContainerStarted","Data":"570656c1647fcfacbf8c2593e3a4ddcb92b1cbe326de3030885c15ece21a5150"} Mar 20 07:02:02 crc kubenswrapper[5136]: I0320 07:02:02.159644 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566502-5gzjz" event={"ID":"a7239b4f-11f6-4f5c-8d78-c233e33b8a79","Type":"ContainerStarted","Data":"ec7cb3f6c1f148e1e156127b9c7522e3ead66e4d7bc579e04406415291cd9efb"} Mar 20 07:02:02 crc kubenswrapper[5136]: I0320 07:02:02.170933 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566502-5gzjz" podStartSLOduration=1.352445003 podStartE2EDuration="2.170911939s" podCreationTimestamp="2026-03-20 07:02:00 +0000 UTC" firstStartedPulling="2026-03-20 07:02:00.923930629 +0000 UTC m=+753.183241790" lastFinishedPulling="2026-03-20 07:02:01.742397575 +0000 UTC m=+754.001708726" observedRunningTime="2026-03-20 07:02:02.170360241 +0000 UTC m=+754.429671392" watchObservedRunningTime="2026-03-20 07:02:02.170911939 +0000 UTC m=+754.430223110" Mar 20 07:02:03 crc kubenswrapper[5136]: I0320 07:02:03.170314 5136 generic.go:334] "Generic (PLEG): container 
finished" podID="a7239b4f-11f6-4f5c-8d78-c233e33b8a79" containerID="ec7cb3f6c1f148e1e156127b9c7522e3ead66e4d7bc579e04406415291cd9efb" exitCode=0 Mar 20 07:02:03 crc kubenswrapper[5136]: I0320 07:02:03.170439 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566502-5gzjz" event={"ID":"a7239b4f-11f6-4f5c-8d78-c233e33b8a79","Type":"ContainerDied","Data":"ec7cb3f6c1f148e1e156127b9c7522e3ead66e4d7bc579e04406415291cd9efb"} Mar 20 07:02:04 crc kubenswrapper[5136]: I0320 07:02:04.400543 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566502-5gzjz" Mar 20 07:02:04 crc kubenswrapper[5136]: I0320 07:02:04.563216 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r82mt\" (UniqueName: \"kubernetes.io/projected/a7239b4f-11f6-4f5c-8d78-c233e33b8a79-kube-api-access-r82mt\") pod \"a7239b4f-11f6-4f5c-8d78-c233e33b8a79\" (UID: \"a7239b4f-11f6-4f5c-8d78-c233e33b8a79\") " Mar 20 07:02:04 crc kubenswrapper[5136]: I0320 07:02:04.571905 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7239b4f-11f6-4f5c-8d78-c233e33b8a79-kube-api-access-r82mt" (OuterVolumeSpecName: "kube-api-access-r82mt") pod "a7239b4f-11f6-4f5c-8d78-c233e33b8a79" (UID: "a7239b4f-11f6-4f5c-8d78-c233e33b8a79"). InnerVolumeSpecName "kube-api-access-r82mt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:02:04 crc kubenswrapper[5136]: I0320 07:02:04.665328 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r82mt\" (UniqueName: \"kubernetes.io/projected/a7239b4f-11f6-4f5c-8d78-c233e33b8a79-kube-api-access-r82mt\") on node \"crc\" DevicePath \"\"" Mar 20 07:02:05 crc kubenswrapper[5136]: I0320 07:02:05.186518 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566502-5gzjz" event={"ID":"a7239b4f-11f6-4f5c-8d78-c233e33b8a79","Type":"ContainerDied","Data":"570656c1647fcfacbf8c2593e3a4ddcb92b1cbe326de3030885c15ece21a5150"} Mar 20 07:02:05 crc kubenswrapper[5136]: I0320 07:02:05.186572 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="570656c1647fcfacbf8c2593e3a4ddcb92b1cbe326de3030885c15ece21a5150" Mar 20 07:02:05 crc kubenswrapper[5136]: I0320 07:02:05.186601 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566502-5gzjz" Mar 20 07:02:05 crc kubenswrapper[5136]: I0320 07:02:05.242741 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566496-kkbk6"] Mar 20 07:02:05 crc kubenswrapper[5136]: I0320 07:02:05.250685 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566496-kkbk6"] Mar 20 07:02:06 crc kubenswrapper[5136]: I0320 07:02:06.403090 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb" path="/var/lib/kubelet/pods/7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb/volumes" Mar 20 07:02:28 crc kubenswrapper[5136]: I0320 07:02:28.849251 5136 scope.go:117] "RemoveContainer" containerID="65e785eb1dbd67ac0fede3f7a5dc27c137ef39fb9832a3755e23a954eb908065" Mar 20 07:03:33 crc kubenswrapper[5136]: I0320 07:03:33.897916 5136 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" 
name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 20 07:04:00 crc kubenswrapper[5136]: I0320 07:04:00.148137 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566504-fnsrq"] Mar 20 07:04:00 crc kubenswrapper[5136]: E0320 07:04:00.149055 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7239b4f-11f6-4f5c-8d78-c233e33b8a79" containerName="oc" Mar 20 07:04:00 crc kubenswrapper[5136]: I0320 07:04:00.149077 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7239b4f-11f6-4f5c-8d78-c233e33b8a79" containerName="oc" Mar 20 07:04:00 crc kubenswrapper[5136]: I0320 07:04:00.149243 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7239b4f-11f6-4f5c-8d78-c233e33b8a79" containerName="oc" Mar 20 07:04:00 crc kubenswrapper[5136]: I0320 07:04:00.149787 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566504-fnsrq" Mar 20 07:04:00 crc kubenswrapper[5136]: I0320 07:04:00.153184 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:04:00 crc kubenswrapper[5136]: I0320 07:04:00.154091 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:04:00 crc kubenswrapper[5136]: I0320 07:04:00.154331 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:04:00 crc kubenswrapper[5136]: I0320 07:04:00.157436 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566504-fnsrq"] Mar 20 07:04:00 crc kubenswrapper[5136]: I0320 07:04:00.321052 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z69tx\" (UniqueName: \"kubernetes.io/projected/f8e1a6ad-3e5f-4a83-b429-d132710b8146-kube-api-access-z69tx\") pod 
\"auto-csr-approver-29566504-fnsrq\" (UID: \"f8e1a6ad-3e5f-4a83-b429-d132710b8146\") " pod="openshift-infra/auto-csr-approver-29566504-fnsrq" Mar 20 07:04:00 crc kubenswrapper[5136]: I0320 07:04:00.422785 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z69tx\" (UniqueName: \"kubernetes.io/projected/f8e1a6ad-3e5f-4a83-b429-d132710b8146-kube-api-access-z69tx\") pod \"auto-csr-approver-29566504-fnsrq\" (UID: \"f8e1a6ad-3e5f-4a83-b429-d132710b8146\") " pod="openshift-infra/auto-csr-approver-29566504-fnsrq" Mar 20 07:04:00 crc kubenswrapper[5136]: I0320 07:04:00.442588 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z69tx\" (UniqueName: \"kubernetes.io/projected/f8e1a6ad-3e5f-4a83-b429-d132710b8146-kube-api-access-z69tx\") pod \"auto-csr-approver-29566504-fnsrq\" (UID: \"f8e1a6ad-3e5f-4a83-b429-d132710b8146\") " pod="openshift-infra/auto-csr-approver-29566504-fnsrq" Mar 20 07:04:00 crc kubenswrapper[5136]: I0320 07:04:00.475328 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566504-fnsrq" Mar 20 07:04:00 crc kubenswrapper[5136]: I0320 07:04:00.715236 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566504-fnsrq"] Mar 20 07:04:01 crc kubenswrapper[5136]: I0320 07:04:01.486629 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566504-fnsrq" event={"ID":"f8e1a6ad-3e5f-4a83-b429-d132710b8146","Type":"ContainerStarted","Data":"f44bc06f78feac44286461e41ae87086d5e10b233578d1bb54fbda0fb313e9f8"} Mar 20 07:04:01 crc kubenswrapper[5136]: I0320 07:04:01.834900 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bhgm2"] Mar 20 07:04:01 crc kubenswrapper[5136]: I0320 07:04:01.836361 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bhgm2" Mar 20 07:04:01 crc kubenswrapper[5136]: I0320 07:04:01.852363 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c199c8cd-de7b-4743-9ce7-786a33ff47da-utilities\") pod \"redhat-operators-bhgm2\" (UID: \"c199c8cd-de7b-4743-9ce7-786a33ff47da\") " pod="openshift-marketplace/redhat-operators-bhgm2" Mar 20 07:04:01 crc kubenswrapper[5136]: I0320 07:04:01.852438 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gprrc\" (UniqueName: \"kubernetes.io/projected/c199c8cd-de7b-4743-9ce7-786a33ff47da-kube-api-access-gprrc\") pod \"redhat-operators-bhgm2\" (UID: \"c199c8cd-de7b-4743-9ce7-786a33ff47da\") " pod="openshift-marketplace/redhat-operators-bhgm2" Mar 20 07:04:01 crc kubenswrapper[5136]: I0320 07:04:01.852466 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c199c8cd-de7b-4743-9ce7-786a33ff47da-catalog-content\") pod \"redhat-operators-bhgm2\" (UID: \"c199c8cd-de7b-4743-9ce7-786a33ff47da\") " pod="openshift-marketplace/redhat-operators-bhgm2" Mar 20 07:04:01 crc kubenswrapper[5136]: I0320 07:04:01.866150 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bhgm2"] Mar 20 07:04:01 crc kubenswrapper[5136]: I0320 07:04:01.953738 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c199c8cd-de7b-4743-9ce7-786a33ff47da-utilities\") pod \"redhat-operators-bhgm2\" (UID: \"c199c8cd-de7b-4743-9ce7-786a33ff47da\") " pod="openshift-marketplace/redhat-operators-bhgm2" Mar 20 07:04:01 crc kubenswrapper[5136]: I0320 07:04:01.953841 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gprrc\" (UniqueName: \"kubernetes.io/projected/c199c8cd-de7b-4743-9ce7-786a33ff47da-kube-api-access-gprrc\") pod \"redhat-operators-bhgm2\" (UID: \"c199c8cd-de7b-4743-9ce7-786a33ff47da\") " pod="openshift-marketplace/redhat-operators-bhgm2" Mar 20 07:04:01 crc kubenswrapper[5136]: I0320 07:04:01.953864 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c199c8cd-de7b-4743-9ce7-786a33ff47da-catalog-content\") pod \"redhat-operators-bhgm2\" (UID: \"c199c8cd-de7b-4743-9ce7-786a33ff47da\") " pod="openshift-marketplace/redhat-operators-bhgm2" Mar 20 07:04:01 crc kubenswrapper[5136]: I0320 07:04:01.954287 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c199c8cd-de7b-4743-9ce7-786a33ff47da-catalog-content\") pod \"redhat-operators-bhgm2\" (UID: \"c199c8cd-de7b-4743-9ce7-786a33ff47da\") " pod="openshift-marketplace/redhat-operators-bhgm2" Mar 20 07:04:01 crc kubenswrapper[5136]: I0320 07:04:01.954488 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c199c8cd-de7b-4743-9ce7-786a33ff47da-utilities\") pod \"redhat-operators-bhgm2\" (UID: \"c199c8cd-de7b-4743-9ce7-786a33ff47da\") " pod="openshift-marketplace/redhat-operators-bhgm2" Mar 20 07:04:01 crc kubenswrapper[5136]: I0320 07:04:01.986167 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gprrc\" (UniqueName: \"kubernetes.io/projected/c199c8cd-de7b-4743-9ce7-786a33ff47da-kube-api-access-gprrc\") pod \"redhat-operators-bhgm2\" (UID: \"c199c8cd-de7b-4743-9ce7-786a33ff47da\") " pod="openshift-marketplace/redhat-operators-bhgm2" Mar 20 07:04:02 crc kubenswrapper[5136]: I0320 07:04:02.162130 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bhgm2" Mar 20 07:04:02 crc kubenswrapper[5136]: I0320 07:04:02.365899 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bhgm2"] Mar 20 07:04:02 crc kubenswrapper[5136]: I0320 07:04:02.494535 5136 generic.go:334] "Generic (PLEG): container finished" podID="f8e1a6ad-3e5f-4a83-b429-d132710b8146" containerID="a9c6142c6c3be406a353a6109a8cb8b7b38a7799c67785c8207003ce9a223a42" exitCode=0 Mar 20 07:04:02 crc kubenswrapper[5136]: I0320 07:04:02.494623 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566504-fnsrq" event={"ID":"f8e1a6ad-3e5f-4a83-b429-d132710b8146","Type":"ContainerDied","Data":"a9c6142c6c3be406a353a6109a8cb8b7b38a7799c67785c8207003ce9a223a42"} Mar 20 07:04:02 crc kubenswrapper[5136]: I0320 07:04:02.496597 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhgm2" event={"ID":"c199c8cd-de7b-4743-9ce7-786a33ff47da","Type":"ContainerStarted","Data":"7551e9b913392ddacf772d1ef5e5d53de1a9182e9a2fafde8ffc8343cb303fba"} Mar 20 07:04:02 crc kubenswrapper[5136]: I0320 07:04:02.496646 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhgm2" event={"ID":"c199c8cd-de7b-4743-9ce7-786a33ff47da","Type":"ContainerStarted","Data":"eca769e7e0fbb0a2c72c3134d942fe624852d1d7e3b55c96cfb91d66352d23e1"} Mar 20 07:04:03 crc kubenswrapper[5136]: I0320 07:04:03.504088 5136 generic.go:334] "Generic (PLEG): container finished" podID="c199c8cd-de7b-4743-9ce7-786a33ff47da" containerID="7551e9b913392ddacf772d1ef5e5d53de1a9182e9a2fafde8ffc8343cb303fba" exitCode=0 Mar 20 07:04:03 crc kubenswrapper[5136]: I0320 07:04:03.504228 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhgm2" 
event={"ID":"c199c8cd-de7b-4743-9ce7-786a33ff47da","Type":"ContainerDied","Data":"7551e9b913392ddacf772d1ef5e5d53de1a9182e9a2fafde8ffc8343cb303fba"} Mar 20 07:04:03 crc kubenswrapper[5136]: I0320 07:04:03.761971 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566504-fnsrq" Mar 20 07:04:03 crc kubenswrapper[5136]: I0320 07:04:03.777880 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z69tx\" (UniqueName: \"kubernetes.io/projected/f8e1a6ad-3e5f-4a83-b429-d132710b8146-kube-api-access-z69tx\") pod \"f8e1a6ad-3e5f-4a83-b429-d132710b8146\" (UID: \"f8e1a6ad-3e5f-4a83-b429-d132710b8146\") " Mar 20 07:04:03 crc kubenswrapper[5136]: I0320 07:04:03.809334 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8e1a6ad-3e5f-4a83-b429-d132710b8146-kube-api-access-z69tx" (OuterVolumeSpecName: "kube-api-access-z69tx") pod "f8e1a6ad-3e5f-4a83-b429-d132710b8146" (UID: "f8e1a6ad-3e5f-4a83-b429-d132710b8146"). InnerVolumeSpecName "kube-api-access-z69tx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:04:03 crc kubenswrapper[5136]: I0320 07:04:03.879640 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z69tx\" (UniqueName: \"kubernetes.io/projected/f8e1a6ad-3e5f-4a83-b429-d132710b8146-kube-api-access-z69tx\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:04 crc kubenswrapper[5136]: I0320 07:04:04.516379 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566504-fnsrq" event={"ID":"f8e1a6ad-3e5f-4a83-b429-d132710b8146","Type":"ContainerDied","Data":"f44bc06f78feac44286461e41ae87086d5e10b233578d1bb54fbda0fb313e9f8"} Mar 20 07:04:04 crc kubenswrapper[5136]: I0320 07:04:04.516421 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f44bc06f78feac44286461e41ae87086d5e10b233578d1bb54fbda0fb313e9f8" Mar 20 07:04:04 crc kubenswrapper[5136]: I0320 07:04:04.516445 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566504-fnsrq" Mar 20 07:04:04 crc kubenswrapper[5136]: I0320 07:04:04.840174 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566498-pc964"] Mar 20 07:04:04 crc kubenswrapper[5136]: I0320 07:04:04.848273 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566498-pc964"] Mar 20 07:04:05 crc kubenswrapper[5136]: I0320 07:04:05.522441 5136 generic.go:334] "Generic (PLEG): container finished" podID="c199c8cd-de7b-4743-9ce7-786a33ff47da" containerID="3244ec9ab9141c687a5d058b4c90833afc7130823ff5a1d96f74d49ce15f3748" exitCode=0 Mar 20 07:04:05 crc kubenswrapper[5136]: I0320 07:04:05.522489 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhgm2" event={"ID":"c199c8cd-de7b-4743-9ce7-786a33ff47da","Type":"ContainerDied","Data":"3244ec9ab9141c687a5d058b4c90833afc7130823ff5a1d96f74d49ce15f3748"} 
Mar 20 07:04:06 crc kubenswrapper[5136]: I0320 07:04:06.405262 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d" path="/var/lib/kubelet/pods/96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d/volumes" Mar 20 07:04:06 crc kubenswrapper[5136]: I0320 07:04:06.530897 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhgm2" event={"ID":"c199c8cd-de7b-4743-9ce7-786a33ff47da","Type":"ContainerStarted","Data":"f423cb451ecdbaecb5d105d305d33f81a70ce62187f671dd6ebb2e372237e3b3"} Mar 20 07:04:06 crc kubenswrapper[5136]: I0320 07:04:06.560728 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bhgm2" podStartSLOduration=3.006351796 podStartE2EDuration="5.560693955s" podCreationTimestamp="2026-03-20 07:04:01 +0000 UTC" firstStartedPulling="2026-03-20 07:04:03.506512162 +0000 UTC m=+875.765823313" lastFinishedPulling="2026-03-20 07:04:06.060854321 +0000 UTC m=+878.320165472" observedRunningTime="2026-03-20 07:04:06.5541611 +0000 UTC m=+878.813472291" watchObservedRunningTime="2026-03-20 07:04:06.560693955 +0000 UTC m=+878.820005166" Mar 20 07:04:12 crc kubenswrapper[5136]: I0320 07:04:12.163075 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bhgm2" Mar 20 07:04:12 crc kubenswrapper[5136]: I0320 07:04:12.164760 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bhgm2" Mar 20 07:04:13 crc kubenswrapper[5136]: I0320 07:04:13.204054 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bhgm2" podUID="c199c8cd-de7b-4743-9ce7-786a33ff47da" containerName="registry-server" probeResult="failure" output=< Mar 20 07:04:13 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 07:04:13 crc kubenswrapper[5136]: > Mar 20 
07:04:15 crc kubenswrapper[5136]: I0320 07:04:15.821490 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:04:15 crc kubenswrapper[5136]: I0320 07:04:15.821551 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:04:22 crc kubenswrapper[5136]: I0320 07:04:22.232603 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bhgm2" Mar 20 07:04:22 crc kubenswrapper[5136]: I0320 07:04:22.274680 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bhgm2" Mar 20 07:04:22 crc kubenswrapper[5136]: I0320 07:04:22.471459 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bhgm2"] Mar 20 07:04:23 crc kubenswrapper[5136]: I0320 07:04:23.627867 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bhgm2" podUID="c199c8cd-de7b-4743-9ce7-786a33ff47da" containerName="registry-server" containerID="cri-o://f423cb451ecdbaecb5d105d305d33f81a70ce62187f671dd6ebb2e372237e3b3" gracePeriod=2 Mar 20 07:04:23 crc kubenswrapper[5136]: I0320 07:04:23.961120 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bhgm2" Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.052270 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c199c8cd-de7b-4743-9ce7-786a33ff47da-utilities\") pod \"c199c8cd-de7b-4743-9ce7-786a33ff47da\" (UID: \"c199c8cd-de7b-4743-9ce7-786a33ff47da\") " Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.052393 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gprrc\" (UniqueName: \"kubernetes.io/projected/c199c8cd-de7b-4743-9ce7-786a33ff47da-kube-api-access-gprrc\") pod \"c199c8cd-de7b-4743-9ce7-786a33ff47da\" (UID: \"c199c8cd-de7b-4743-9ce7-786a33ff47da\") " Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.052456 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c199c8cd-de7b-4743-9ce7-786a33ff47da-catalog-content\") pod \"c199c8cd-de7b-4743-9ce7-786a33ff47da\" (UID: \"c199c8cd-de7b-4743-9ce7-786a33ff47da\") " Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.053169 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c199c8cd-de7b-4743-9ce7-786a33ff47da-utilities" (OuterVolumeSpecName: "utilities") pod "c199c8cd-de7b-4743-9ce7-786a33ff47da" (UID: "c199c8cd-de7b-4743-9ce7-786a33ff47da"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.063039 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c199c8cd-de7b-4743-9ce7-786a33ff47da-kube-api-access-gprrc" (OuterVolumeSpecName: "kube-api-access-gprrc") pod "c199c8cd-de7b-4743-9ce7-786a33ff47da" (UID: "c199c8cd-de7b-4743-9ce7-786a33ff47da"). InnerVolumeSpecName "kube-api-access-gprrc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.154344 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c199c8cd-de7b-4743-9ce7-786a33ff47da-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.154387 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gprrc\" (UniqueName: \"kubernetes.io/projected/c199c8cd-de7b-4743-9ce7-786a33ff47da-kube-api-access-gprrc\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.211323 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c199c8cd-de7b-4743-9ce7-786a33ff47da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c199c8cd-de7b-4743-9ce7-786a33ff47da" (UID: "c199c8cd-de7b-4743-9ce7-786a33ff47da"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.255723 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c199c8cd-de7b-4743-9ce7-786a33ff47da-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.634496 5136 generic.go:334] "Generic (PLEG): container finished" podID="c199c8cd-de7b-4743-9ce7-786a33ff47da" containerID="f423cb451ecdbaecb5d105d305d33f81a70ce62187f671dd6ebb2e372237e3b3" exitCode=0 Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.634535 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bhgm2" Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.634544 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhgm2" event={"ID":"c199c8cd-de7b-4743-9ce7-786a33ff47da","Type":"ContainerDied","Data":"f423cb451ecdbaecb5d105d305d33f81a70ce62187f671dd6ebb2e372237e3b3"} Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.634614 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhgm2" event={"ID":"c199c8cd-de7b-4743-9ce7-786a33ff47da","Type":"ContainerDied","Data":"eca769e7e0fbb0a2c72c3134d942fe624852d1d7e3b55c96cfb91d66352d23e1"} Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.634641 5136 scope.go:117] "RemoveContainer" containerID="f423cb451ecdbaecb5d105d305d33f81a70ce62187f671dd6ebb2e372237e3b3" Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.656533 5136 scope.go:117] "RemoveContainer" containerID="3244ec9ab9141c687a5d058b4c90833afc7130823ff5a1d96f74d49ce15f3748" Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.657611 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bhgm2"] Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.669325 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bhgm2"] Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.682531 5136 scope.go:117] "RemoveContainer" containerID="7551e9b913392ddacf772d1ef5e5d53de1a9182e9a2fafde8ffc8343cb303fba" Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.699505 5136 scope.go:117] "RemoveContainer" containerID="f423cb451ecdbaecb5d105d305d33f81a70ce62187f671dd6ebb2e372237e3b3" Mar 20 07:04:24 crc kubenswrapper[5136]: E0320 07:04:24.700017 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f423cb451ecdbaecb5d105d305d33f81a70ce62187f671dd6ebb2e372237e3b3\": container with ID starting with f423cb451ecdbaecb5d105d305d33f81a70ce62187f671dd6ebb2e372237e3b3 not found: ID does not exist" containerID="f423cb451ecdbaecb5d105d305d33f81a70ce62187f671dd6ebb2e372237e3b3" Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.700083 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f423cb451ecdbaecb5d105d305d33f81a70ce62187f671dd6ebb2e372237e3b3"} err="failed to get container status \"f423cb451ecdbaecb5d105d305d33f81a70ce62187f671dd6ebb2e372237e3b3\": rpc error: code = NotFound desc = could not find container \"f423cb451ecdbaecb5d105d305d33f81a70ce62187f671dd6ebb2e372237e3b3\": container with ID starting with f423cb451ecdbaecb5d105d305d33f81a70ce62187f671dd6ebb2e372237e3b3 not found: ID does not exist" Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.700118 5136 scope.go:117] "RemoveContainer" containerID="3244ec9ab9141c687a5d058b4c90833afc7130823ff5a1d96f74d49ce15f3748" Mar 20 07:04:24 crc kubenswrapper[5136]: E0320 07:04:24.700523 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3244ec9ab9141c687a5d058b4c90833afc7130823ff5a1d96f74d49ce15f3748\": container with ID starting with 3244ec9ab9141c687a5d058b4c90833afc7130823ff5a1d96f74d49ce15f3748 not found: ID does not exist" containerID="3244ec9ab9141c687a5d058b4c90833afc7130823ff5a1d96f74d49ce15f3748" Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.700559 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3244ec9ab9141c687a5d058b4c90833afc7130823ff5a1d96f74d49ce15f3748"} err="failed to get container status \"3244ec9ab9141c687a5d058b4c90833afc7130823ff5a1d96f74d49ce15f3748\": rpc error: code = NotFound desc = could not find container \"3244ec9ab9141c687a5d058b4c90833afc7130823ff5a1d96f74d49ce15f3748\": container with ID 
starting with 3244ec9ab9141c687a5d058b4c90833afc7130823ff5a1d96f74d49ce15f3748 not found: ID does not exist" Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.700602 5136 scope.go:117] "RemoveContainer" containerID="7551e9b913392ddacf772d1ef5e5d53de1a9182e9a2fafde8ffc8343cb303fba" Mar 20 07:04:24 crc kubenswrapper[5136]: E0320 07:04:24.700961 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7551e9b913392ddacf772d1ef5e5d53de1a9182e9a2fafde8ffc8343cb303fba\": container with ID starting with 7551e9b913392ddacf772d1ef5e5d53de1a9182e9a2fafde8ffc8343cb303fba not found: ID does not exist" containerID="7551e9b913392ddacf772d1ef5e5d53de1a9182e9a2fafde8ffc8343cb303fba" Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.701001 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7551e9b913392ddacf772d1ef5e5d53de1a9182e9a2fafde8ffc8343cb303fba"} err="failed to get container status \"7551e9b913392ddacf772d1ef5e5d53de1a9182e9a2fafde8ffc8343cb303fba\": rpc error: code = NotFound desc = could not find container \"7551e9b913392ddacf772d1ef5e5d53de1a9182e9a2fafde8ffc8343cb303fba\": container with ID starting with 7551e9b913392ddacf772d1ef5e5d53de1a9182e9a2fafde8ffc8343cb303fba not found: ID does not exist" Mar 20 07:04:26 crc kubenswrapper[5136]: I0320 07:04:26.407282 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c199c8cd-de7b-4743-9ce7-786a33ff47da" path="/var/lib/kubelet/pods/c199c8cd-de7b-4743-9ce7-786a33ff47da/volumes" Mar 20 07:04:28 crc kubenswrapper[5136]: I0320 07:04:28.929642 5136 scope.go:117] "RemoveContainer" containerID="c00038ddb8710afcc46ffbe4488fc8393e21c46c51a56a16f3faef658211be51" Mar 20 07:04:33 crc kubenswrapper[5136]: I0320 07:04:33.683185 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nbmbh"] Mar 20 07:04:33 crc kubenswrapper[5136]: I0320 
07:04:33.684460 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovn-controller" containerID="cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3" gracePeriod=30 Mar 20 07:04:33 crc kubenswrapper[5136]: I0320 07:04:33.684626 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="northd" containerID="cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200" gracePeriod=30 Mar 20 07:04:33 crc kubenswrapper[5136]: I0320 07:04:33.684748 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovn-acl-logging" containerID="cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c" gracePeriod=30 Mar 20 07:04:33 crc kubenswrapper[5136]: I0320 07:04:33.684684 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="nbdb" containerID="cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568" gracePeriod=30 Mar 20 07:04:33 crc kubenswrapper[5136]: I0320 07:04:33.684644 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="sbdb" containerID="cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383" gracePeriod=30 Mar 20 07:04:33 crc kubenswrapper[5136]: I0320 07:04:33.684864 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" 
containerName="kube-rbac-proxy-node" containerID="cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4" gracePeriod=30 Mar 20 07:04:33 crc kubenswrapper[5136]: I0320 07:04:33.684630 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566" gracePeriod=30 Mar 20 07:04:33 crc kubenswrapper[5136]: I0320 07:04:33.733027 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller" containerID="cri-o://ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8" gracePeriod=30 Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.018021 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/3.log" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.020953 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovn-acl-logging/0.log" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.022115 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovn-controller/0.log" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.022847 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.072654 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mpvnm"] Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.072856 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c199c8cd-de7b-4743-9ce7-786a33ff47da" containerName="extract-content" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.072871 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c199c8cd-de7b-4743-9ce7-786a33ff47da" containerName="extract-content" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.072881 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="kubecfg-setup" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.072888 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="kubecfg-setup" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.072898 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.072904 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.072911 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovn-controller" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.072916 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovn-controller" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.072923 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="sbdb" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.072929 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="sbdb" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.072940 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.072946 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.072953 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c199c8cd-de7b-4743-9ce7-786a33ff47da" containerName="registry-server" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.072959 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c199c8cd-de7b-4743-9ce7-786a33ff47da" containerName="registry-server" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.072966 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.072972 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.072979 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.072986 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.072994 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="northd" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073000 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="northd" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.073008 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="kube-rbac-proxy-node" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073014 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="kube-rbac-proxy-node" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.073021 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="kube-rbac-proxy-ovn-metrics" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073027 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="kube-rbac-proxy-ovn-metrics" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.073035 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8e1a6ad-3e5f-4a83-b429-d132710b8146" containerName="oc" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073041 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8e1a6ad-3e5f-4a83-b429-d132710b8146" containerName="oc" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.073049 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovn-acl-logging" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073054 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovn-acl-logging" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.073062 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073067 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.073076 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c199c8cd-de7b-4743-9ce7-786a33ff47da" containerName="extract-utilities" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073083 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c199c8cd-de7b-4743-9ce7-786a33ff47da" containerName="extract-utilities" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.073091 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="nbdb" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073096 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="nbdb" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073177 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073186 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8e1a6ad-3e5f-4a83-b429-d132710b8146" containerName="oc" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073194 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="nbdb" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073204 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="kube-rbac-proxy-ovn-metrics" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073212 5136 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovn-controller" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073220 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="kube-rbac-proxy-node" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073226 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="northd" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073232 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073239 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c199c8cd-de7b-4743-9ce7-786a33ff47da" containerName="registry-server" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073244 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="sbdb" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073251 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073257 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovn-acl-logging" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073264 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073412 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.074830 5136 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125339 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-openvswitch\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125397 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-etc-openvswitch\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125434 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovn-node-metrics-cert\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125463 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125483 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-systemd-units\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125504 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-cni-netd\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125502 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125513 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125532 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-cni-bin\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125579 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125586 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125610 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-run-ovn-kubernetes\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125625 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125632 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-var-lib-openvswitch\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125650 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125658 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovnkube-config\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125676 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125687 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-slash\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125704 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125728 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-env-overrides\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125750 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-kubelet\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125781 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-systemd\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125806 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovnkube-script-lib\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125873 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrnqr\" (UniqueName: \"kubernetes.io/projected/963bf1ca-b871-4cad-a1fc-cf829a70a81a-kube-api-access-nrnqr\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125892 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-log-socket\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125931 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-node-log\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125953 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-run-netns\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125981 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-ovn\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: 
\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126241 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126288 5136 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126299 5136 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126309 5136 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-systemd-units\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126318 5136 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-cni-netd\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126327 5136 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126337 5136 reconciler_common.go:293] "Volume detached for volume 
\"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-cni-bin\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126346 5136 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126354 5136 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126482 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126533 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126775 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126830 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-node-log" (OuterVolumeSpecName: "node-log") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126839 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-slash" (OuterVolumeSpecName: "host-slash") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126861 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126929 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-log-socket" (OuterVolumeSpecName: "log-socket") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126870 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.132317 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.132355 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/963bf1ca-b871-4cad-a1fc-cf829a70a81a-kube-api-access-nrnqr" (OuterVolumeSpecName: "kube-api-access-nrnqr") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "kube-api-access-nrnqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.138979 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.227040 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-node-log\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.227110 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-systemd-units\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.227258 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-kubelet\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.227328 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/535b87fd-9e45-4845-8569-975e6c108579-ovnkube-script-lib\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.227449 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-cni-bin\") pod \"ovnkube-node-mpvnm\" (UID: 
\"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.227501 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-etc-openvswitch\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.227539 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-var-lib-openvswitch\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.228082 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-log-socket\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.228158 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-run-systemd\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.228304 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/535b87fd-9e45-4845-8569-975e6c108579-env-overrides\") pod 
\"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.228423 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-run-netns\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.228484 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/535b87fd-9e45-4845-8569-975e6c108579-ovnkube-config\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.228556 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-cni-netd\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.228614 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-run-ovn-kubernetes\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.228711 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.228806 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-run-ovn\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.228896 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-slash\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.228917 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-run-openvswitch\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.228940 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/535b87fd-9e45-4845-8569-975e6c108579-ovn-node-metrics-cert\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.228964 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-r2c7k\" (UniqueName: \"kubernetes.io/projected/535b87fd-9e45-4845-8569-975e6c108579-kube-api-access-r2c7k\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.229120 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrnqr\" (UniqueName: \"kubernetes.io/projected/963bf1ca-b871-4cad-a1fc-cf829a70a81a-kube-api-access-nrnqr\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.229143 5136 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-log-socket\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.229157 5136 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-node-log\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.229169 5136 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-run-netns\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.229182 5136 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.229195 5136 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.229209 5136 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.229221 5136 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-slash\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.229232 5136 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.229243 5136 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-kubelet\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.229254 5136 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-systemd\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.229265 5136 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330130 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-node-log\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330197 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" 
(UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-systemd-units\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330236 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-kubelet\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330248 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-node-log\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330264 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/535b87fd-9e45-4845-8569-975e6c108579-ovnkube-script-lib\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330295 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-systemd-units\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330325 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-cni-bin\") pod 
\"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330344 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-etc-openvswitch\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330371 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-var-lib-openvswitch\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330398 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-log-socket\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330409 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-etc-openvswitch\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330399 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-kubelet\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330423 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-run-systemd\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330483 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-cni-bin\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330516 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-var-lib-openvswitch\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330540 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/535b87fd-9e45-4845-8569-975e6c108579-env-overrides\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330574 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-run-systemd\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 
07:04:34.330661 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-log-socket\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330744 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-run-netns\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330864 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/535b87fd-9e45-4845-8569-975e6c108579-ovnkube-config\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330902 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-cni-netd\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330905 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-run-netns\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330953 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-cni-netd\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330961 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-run-ovn-kubernetes\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331001 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-run-ovn-kubernetes\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331013 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331057 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-run-ovn\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331116 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331119 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/535b87fd-9e45-4845-8569-975e6c108579-ovn-node-metrics-cert\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331155 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-slash\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331169 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-run-openvswitch\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331186 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2c7k\" (UniqueName: \"kubernetes.io/projected/535b87fd-9e45-4845-8569-975e6c108579-kube-api-access-r2c7k\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331189 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/535b87fd-9e45-4845-8569-975e6c108579-env-overrides\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331193 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-run-ovn\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331320 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-run-openvswitch\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331344 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-slash\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331489 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/535b87fd-9e45-4845-8569-975e6c108579-ovnkube-script-lib\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331975 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/535b87fd-9e45-4845-8569-975e6c108579-ovnkube-config\") pod 
\"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.335293 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/535b87fd-9e45-4845-8569-975e6c108579-ovn-node-metrics-cert\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.356854 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2c7k\" (UniqueName: \"kubernetes.io/projected/535b87fd-9e45-4845-8569-975e6c108579-kube-api-access-r2c7k\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.392968 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.710968 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/3.log" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.714552 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovn-acl-logging/0.log" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715229 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovn-controller/0.log" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715647 5136 generic.go:334] "Generic (PLEG): container finished" podID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerID="ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8" exitCode=0 Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715675 5136 generic.go:334] "Generic (PLEG): container finished" podID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerID="2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383" exitCode=0 Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715688 5136 generic.go:334] "Generic (PLEG): container finished" podID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerID="f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568" exitCode=0 Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715699 5136 generic.go:334] "Generic (PLEG): container finished" podID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerID="84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200" exitCode=0 Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715710 5136 generic.go:334] "Generic (PLEG): container finished" podID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" 
containerID="a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566" exitCode=0 Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715719 5136 generic.go:334] "Generic (PLEG): container finished" podID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerID="0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4" exitCode=0 Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715728 5136 generic.go:334] "Generic (PLEG): container finished" podID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerID="09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c" exitCode=143 Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715737 5136 generic.go:334] "Generic (PLEG): container finished" podID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerID="47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3" exitCode=143 Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715724 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715794 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715829 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715847 5136 scope.go:117] "RemoveContainer" containerID="ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8" Mar 20 
07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715906 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715849 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.717664 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.717703 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.717728 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.717747 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.717761 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.717773 5136 pod_container_deletor.go:114] "Failed to 
issue the request to remove container" containerID={"Type":"cri-o","ID":"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.717785 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.717796 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.717807 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.717847 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.717859 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.717876 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.717894 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 
07:04:34.717995 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718009 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718021 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718033 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718045 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718058 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718069 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718081 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 
07:04:34.718092 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718110 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718129 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718143 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718155 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718168 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718183 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718198 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718212 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718228 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718243 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718258 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718278 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"fc8a676d87b2c6b9273e55ddfc0af4b456dbdcc2adee4a1bbfceebb87273789e"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718302 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718321 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718336 5136 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718353 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718364 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718376 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718387 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718398 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718409 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718420 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.722537 5136 
generic.go:334] "Generic (PLEG): container finished" podID="535b87fd-9e45-4845-8569-975e6c108579" containerID="5bf0954043fc96f805117862c6e3ed58f957eef384f93e52da58e461a7705fc5" exitCode=0 Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.722636 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" event={"ID":"535b87fd-9e45-4845-8569-975e6c108579","Type":"ContainerDied","Data":"5bf0954043fc96f805117862c6e3ed58f957eef384f93e52da58e461a7705fc5"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.722668 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" event={"ID":"535b87fd-9e45-4845-8569-975e6c108579","Type":"ContainerStarted","Data":"79a26dfd433adb18868a093558a28312911f88ed08cdf49137a074913d6b45ce"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.726925 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tjpps_263c5427-a835-40c6-93cb-4bb66a83ea5b/kube-multus/2.log" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.728031 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tjpps_263c5427-a835-40c6-93cb-4bb66a83ea5b/kube-multus/1.log" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.728081 5136 generic.go:334] "Generic (PLEG): container finished" podID="263c5427-a835-40c6-93cb-4bb66a83ea5b" containerID="758a96f3880280b2cc4897f196524e1c9a081a903d0afe658e33991167460924" exitCode=2 Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.728140 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tjpps" event={"ID":"263c5427-a835-40c6-93cb-4bb66a83ea5b","Type":"ContainerDied","Data":"758a96f3880280b2cc4897f196524e1c9a081a903d0afe658e33991167460924"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.728205 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.729070 5136 scope.go:117] "RemoveContainer" containerID="758a96f3880280b2cc4897f196524e1c9a081a903d0afe658e33991167460924" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.743782 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nbmbh"] Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.751708 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nbmbh"] Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.754782 5136 scope.go:117] "RemoveContainer" containerID="07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.786173 5136 scope.go:117] "RemoveContainer" containerID="2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.843916 5136 scope.go:117] "RemoveContainer" containerID="f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.866195 5136 scope.go:117] "RemoveContainer" containerID="84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.882427 5136 scope.go:117] "RemoveContainer" containerID="a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.894984 5136 scope.go:117] "RemoveContainer" containerID="0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.906289 5136 scope.go:117] "RemoveContainer" containerID="09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.923696 5136 scope.go:117] "RemoveContainer" 
containerID="47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.949747 5136 scope.go:117] "RemoveContainer" containerID="0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.984274 5136 scope.go:117] "RemoveContainer" containerID="ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.984619 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8\": container with ID starting with ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8 not found: ID does not exist" containerID="ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.984648 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8"} err="failed to get container status \"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8\": rpc error: code = NotFound desc = could not find container \"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8\": container with ID starting with ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.984669 5136 scope.go:117] "RemoveContainer" containerID="07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.984853 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\": container with ID starting with 
07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c not found: ID does not exist" containerID="07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.984875 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c"} err="failed to get container status \"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\": rpc error: code = NotFound desc = could not find container \"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\": container with ID starting with 07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.984888 5136 scope.go:117] "RemoveContainer" containerID="2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.985043 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\": container with ID starting with 2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383 not found: ID does not exist" containerID="2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.985060 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383"} err="failed to get container status \"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\": rpc error: code = NotFound desc = could not find container \"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\": container with ID starting with 2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383 not found: ID does not 
exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.985073 5136 scope.go:117] "RemoveContainer" containerID="f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.985394 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\": container with ID starting with f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568 not found: ID does not exist" containerID="f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.985429 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568"} err="failed to get container status \"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\": rpc error: code = NotFound desc = could not find container \"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\": container with ID starting with f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.985453 5136 scope.go:117] "RemoveContainer" containerID="84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.985674 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\": container with ID starting with 84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200 not found: ID does not exist" containerID="84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.985693 5136 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200"} err="failed to get container status \"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\": rpc error: code = NotFound desc = could not find container \"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\": container with ID starting with 84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.985706 5136 scope.go:117] "RemoveContainer" containerID="a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.986036 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\": container with ID starting with a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566 not found: ID does not exist" containerID="a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.986115 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566"} err="failed to get container status \"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\": rpc error: code = NotFound desc = could not find container \"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\": container with ID starting with a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.986177 5136 scope.go:117] "RemoveContainer" containerID="0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.986448 5136 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\": container with ID starting with 0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4 not found: ID does not exist" containerID="0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.986469 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4"} err="failed to get container status \"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\": rpc error: code = NotFound desc = could not find container \"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\": container with ID starting with 0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.986482 5136 scope.go:117] "RemoveContainer" containerID="09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.986684 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\": container with ID starting with 09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c not found: ID does not exist" containerID="09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.986706 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c"} err="failed to get container status \"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\": rpc error: code = NotFound desc = could 
not find container \"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\": container with ID starting with 09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.986721 5136 scope.go:117] "RemoveContainer" containerID="47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.986999 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\": container with ID starting with 47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3 not found: ID does not exist" containerID="47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.987020 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3"} err="failed to get container status \"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\": rpc error: code = NotFound desc = could not find container \"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\": container with ID starting with 47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.987034 5136 scope.go:117] "RemoveContainer" containerID="0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.987439 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\": container with ID starting with 0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983 not found: 
ID does not exist" containerID="0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.987500 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983"} err="failed to get container status \"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\": rpc error: code = NotFound desc = could not find container \"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\": container with ID starting with 0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.987557 5136 scope.go:117] "RemoveContainer" containerID="ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.987889 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8"} err="failed to get container status \"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8\": rpc error: code = NotFound desc = could not find container \"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8\": container with ID starting with ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.987930 5136 scope.go:117] "RemoveContainer" containerID="07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.988617 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c"} err="failed to get container status \"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\": rpc error: code = 
NotFound desc = could not find container \"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\": container with ID starting with 07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.988673 5136 scope.go:117] "RemoveContainer" containerID="2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.989011 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383"} err="failed to get container status \"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\": rpc error: code = NotFound desc = could not find container \"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\": container with ID starting with 2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.989029 5136 scope.go:117] "RemoveContainer" containerID="f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.989322 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568"} err="failed to get container status \"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\": rpc error: code = NotFound desc = could not find container \"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\": container with ID starting with f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.989354 5136 scope.go:117] "RemoveContainer" containerID="84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200" Mar 20 07:04:34 crc 
kubenswrapper[5136]: I0320 07:04:34.989698 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200"} err="failed to get container status \"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\": rpc error: code = NotFound desc = could not find container \"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\": container with ID starting with 84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.989718 5136 scope.go:117] "RemoveContainer" containerID="a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.990074 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566"} err="failed to get container status \"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\": rpc error: code = NotFound desc = could not find container \"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\": container with ID starting with a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.990096 5136 scope.go:117] "RemoveContainer" containerID="0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.990469 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4"} err="failed to get container status \"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\": rpc error: code = NotFound desc = could not find container \"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\": container 
with ID starting with 0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.990487 5136 scope.go:117] "RemoveContainer" containerID="09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.990700 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c"} err="failed to get container status \"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\": rpc error: code = NotFound desc = could not find container \"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\": container with ID starting with 09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.990722 5136 scope.go:117] "RemoveContainer" containerID="47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.991052 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3"} err="failed to get container status \"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\": rpc error: code = NotFound desc = could not find container \"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\": container with ID starting with 47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.991113 5136 scope.go:117] "RemoveContainer" containerID="0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.991415 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983"} err="failed to get container status \"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\": rpc error: code = NotFound desc = could not find container \"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\": container with ID starting with 0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.991440 5136 scope.go:117] "RemoveContainer" containerID="ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.991758 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8"} err="failed to get container status \"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8\": rpc error: code = NotFound desc = could not find container \"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8\": container with ID starting with ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.991803 5136 scope.go:117] "RemoveContainer" containerID="07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.992104 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c"} err="failed to get container status \"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\": rpc error: code = NotFound desc = could not find container \"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\": container with ID starting with 07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c not found: ID does not 
exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.992126 5136 scope.go:117] "RemoveContainer" containerID="2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.992361 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383"} err="failed to get container status \"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\": rpc error: code = NotFound desc = could not find container \"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\": container with ID starting with 2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.992378 5136 scope.go:117] "RemoveContainer" containerID="f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.992651 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568"} err="failed to get container status \"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\": rpc error: code = NotFound desc = could not find container \"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\": container with ID starting with f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.992668 5136 scope.go:117] "RemoveContainer" containerID="84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.992927 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200"} err="failed to get container status 
\"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\": rpc error: code = NotFound desc = could not find container \"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\": container with ID starting with 84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.992944 5136 scope.go:117] "RemoveContainer" containerID="a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.993404 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566"} err="failed to get container status \"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\": rpc error: code = NotFound desc = could not find container \"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\": container with ID starting with a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.993446 5136 scope.go:117] "RemoveContainer" containerID="0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.993706 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4"} err="failed to get container status \"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\": rpc error: code = NotFound desc = could not find container \"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\": container with ID starting with 0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.993728 5136 scope.go:117] "RemoveContainer" 
containerID="09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.994087 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c"} err="failed to get container status \"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\": rpc error: code = NotFound desc = could not find container \"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\": container with ID starting with 09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.994141 5136 scope.go:117] "RemoveContainer" containerID="47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.994372 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3"} err="failed to get container status \"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\": rpc error: code = NotFound desc = could not find container \"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\": container with ID starting with 47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.994410 5136 scope.go:117] "RemoveContainer" containerID="0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.994701 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983"} err="failed to get container status \"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\": rpc error: code = NotFound desc = could 
not find container \"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\": container with ID starting with 0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.994756 5136 scope.go:117] "RemoveContainer" containerID="ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.995115 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8"} err="failed to get container status \"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8\": rpc error: code = NotFound desc = could not find container \"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8\": container with ID starting with ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.995184 5136 scope.go:117] "RemoveContainer" containerID="07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.995470 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c"} err="failed to get container status \"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\": rpc error: code = NotFound desc = could not find container \"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\": container with ID starting with 07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.995519 5136 scope.go:117] "RemoveContainer" containerID="2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 
07:04:34.995791 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383"} err="failed to get container status \"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\": rpc error: code = NotFound desc = could not find container \"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\": container with ID starting with 2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.995821 5136 scope.go:117] "RemoveContainer" containerID="f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.996078 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568"} err="failed to get container status \"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\": rpc error: code = NotFound desc = could not find container \"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\": container with ID starting with f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.996115 5136 scope.go:117] "RemoveContainer" containerID="84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.996429 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200"} err="failed to get container status \"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\": rpc error: code = NotFound desc = could not find container \"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\": container with ID starting with 
84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.996449 5136 scope.go:117] "RemoveContainer" containerID="a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.996689 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566"} err="failed to get container status \"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\": rpc error: code = NotFound desc = could not find container \"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\": container with ID starting with a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.996707 5136 scope.go:117] "RemoveContainer" containerID="0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.997049 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4"} err="failed to get container status \"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\": rpc error: code = NotFound desc = could not find container \"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\": container with ID starting with 0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.997086 5136 scope.go:117] "RemoveContainer" containerID="09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.998232 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c"} err="failed to get container status \"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\": rpc error: code = NotFound desc = could not find container \"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\": container with ID starting with 09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.998255 5136 scope.go:117] "RemoveContainer" containerID="47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.998492 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3"} err="failed to get container status \"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\": rpc error: code = NotFound desc = could not find container \"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\": container with ID starting with 47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.998601 5136 scope.go:117] "RemoveContainer" containerID="0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.999020 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983"} err="failed to get container status \"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\": rpc error: code = NotFound desc = could not find container \"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\": container with ID starting with 0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983 not found: ID does not 
exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.999055 5136 scope.go:117] "RemoveContainer" containerID="ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.999486 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8"} err="failed to get container status \"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8\": rpc error: code = NotFound desc = could not find container \"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8\": container with ID starting with ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8 not found: ID does not exist" Mar 20 07:04:35 crc kubenswrapper[5136]: I0320 07:04:35.734203 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tjpps_263c5427-a835-40c6-93cb-4bb66a83ea5b/kube-multus/2.log" Mar 20 07:04:35 crc kubenswrapper[5136]: I0320 07:04:35.734928 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tjpps_263c5427-a835-40c6-93cb-4bb66a83ea5b/kube-multus/1.log" Mar 20 07:04:35 crc kubenswrapper[5136]: I0320 07:04:35.735005 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tjpps" event={"ID":"263c5427-a835-40c6-93cb-4bb66a83ea5b","Type":"ContainerStarted","Data":"a98c4a1acebc55db9d4281f82e81128e4f642fe57246d764a74b3c3bee296982"} Mar 20 07:04:35 crc kubenswrapper[5136]: I0320 07:04:35.738565 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" event={"ID":"535b87fd-9e45-4845-8569-975e6c108579","Type":"ContainerStarted","Data":"bcf75fb70207c68b87c45ca1e4e718f8049241692381a392a32408d2e09d38ee"} Mar 20 07:04:35 crc kubenswrapper[5136]: I0320 07:04:35.738602 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" 
event={"ID":"535b87fd-9e45-4845-8569-975e6c108579","Type":"ContainerStarted","Data":"bff01ad626cc75505ef638c801e2acae3c5169f5760b75211664e8a2dc2b714b"} Mar 20 07:04:35 crc kubenswrapper[5136]: I0320 07:04:35.738616 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" event={"ID":"535b87fd-9e45-4845-8569-975e6c108579","Type":"ContainerStarted","Data":"c773d488f1a80fa882be81f2603f998a7ee455721f6a800ef1da1ba448cc33c0"} Mar 20 07:04:35 crc kubenswrapper[5136]: I0320 07:04:35.738628 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" event={"ID":"535b87fd-9e45-4845-8569-975e6c108579","Type":"ContainerStarted","Data":"d8c7380730bdf91a603abaa0f907b7b002cc000a943e8b0c0b24a23a892c717d"} Mar 20 07:04:35 crc kubenswrapper[5136]: I0320 07:04:35.738638 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" event={"ID":"535b87fd-9e45-4845-8569-975e6c108579","Type":"ContainerStarted","Data":"122192167ce6e33ae405c6e26b212239444d03ced803bc78ecd58b02d2894953"} Mar 20 07:04:35 crc kubenswrapper[5136]: I0320 07:04:35.738646 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" event={"ID":"535b87fd-9e45-4845-8569-975e6c108579","Type":"ContainerStarted","Data":"27f43d04b6ab7cc968bd5f9c09ac0ab7e295f2f4ac9a2f462a8dbdffc3c7cf0f"} Mar 20 07:04:36 crc kubenswrapper[5136]: I0320 07:04:36.404217 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" path="/var/lib/kubelet/pods/963bf1ca-b871-4cad-a1fc-cf829a70a81a/volumes" Mar 20 07:04:38 crc kubenswrapper[5136]: I0320 07:04:38.761702 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" 
event={"ID":"535b87fd-9e45-4845-8569-975e6c108579","Type":"ContainerStarted","Data":"85404a2f32750d0e85f9ee96e92a26a710ef171ee369d492d225748b50b6e11c"} Mar 20 07:04:40 crc kubenswrapper[5136]: I0320 07:04:40.777033 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" event={"ID":"535b87fd-9e45-4845-8569-975e6c108579","Type":"ContainerStarted","Data":"bdaab549146e0ca42d9039984583fe4ce4d2a58cdd7fb324193ab608ff7351d2"} Mar 20 07:04:40 crc kubenswrapper[5136]: I0320 07:04:40.777658 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:40 crc kubenswrapper[5136]: I0320 07:04:40.777684 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:40 crc kubenswrapper[5136]: I0320 07:04:40.777701 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:40 crc kubenswrapper[5136]: I0320 07:04:40.807139 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" podStartSLOduration=6.80711663 podStartE2EDuration="6.80711663s" podCreationTimestamp="2026-03-20 07:04:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:04:40.805162469 +0000 UTC m=+913.064473630" watchObservedRunningTime="2026-03-20 07:04:40.80711663 +0000 UTC m=+913.066427791" Mar 20 07:04:40 crc kubenswrapper[5136]: I0320 07:04:40.809724 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:40 crc kubenswrapper[5136]: I0320 07:04:40.813943 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:43 crc 
kubenswrapper[5136]: I0320 07:04:43.268698 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-jqplr"] Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.269993 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.271942 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.271959 5136 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-65jln" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.272140 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.272141 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.280328 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-jqplr"] Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.353754 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/868b5502-6c3e-4e3b-bc43-c0875e71512f-crc-storage\") pod \"crc-storage-crc-jqplr\" (UID: \"868b5502-6c3e-4e3b-bc43-c0875e71512f\") " pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.353799 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shp6s\" (UniqueName: \"kubernetes.io/projected/868b5502-6c3e-4e3b-bc43-c0875e71512f-kube-api-access-shp6s\") pod \"crc-storage-crc-jqplr\" (UID: \"868b5502-6c3e-4e3b-bc43-c0875e71512f\") " pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 
07:04:43.353847 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/868b5502-6c3e-4e3b-bc43-c0875e71512f-node-mnt\") pod \"crc-storage-crc-jqplr\" (UID: \"868b5502-6c3e-4e3b-bc43-c0875e71512f\") " pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.454840 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/868b5502-6c3e-4e3b-bc43-c0875e71512f-crc-storage\") pod \"crc-storage-crc-jqplr\" (UID: \"868b5502-6c3e-4e3b-bc43-c0875e71512f\") " pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.454910 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shp6s\" (UniqueName: \"kubernetes.io/projected/868b5502-6c3e-4e3b-bc43-c0875e71512f-kube-api-access-shp6s\") pod \"crc-storage-crc-jqplr\" (UID: \"868b5502-6c3e-4e3b-bc43-c0875e71512f\") " pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.454962 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/868b5502-6c3e-4e3b-bc43-c0875e71512f-node-mnt\") pod \"crc-storage-crc-jqplr\" (UID: \"868b5502-6c3e-4e3b-bc43-c0875e71512f\") " pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.455255 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/868b5502-6c3e-4e3b-bc43-c0875e71512f-node-mnt\") pod \"crc-storage-crc-jqplr\" (UID: \"868b5502-6c3e-4e3b-bc43-c0875e71512f\") " pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.456468 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: 
\"kubernetes.io/configmap/868b5502-6c3e-4e3b-bc43-c0875e71512f-crc-storage\") pod \"crc-storage-crc-jqplr\" (UID: \"868b5502-6c3e-4e3b-bc43-c0875e71512f\") " pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.472499 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shp6s\" (UniqueName: \"kubernetes.io/projected/868b5502-6c3e-4e3b-bc43-c0875e71512f-kube-api-access-shp6s\") pod \"crc-storage-crc-jqplr\" (UID: \"868b5502-6c3e-4e3b-bc43-c0875e71512f\") " pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.588589 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: E0320 07:04:43.626256 5136 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jqplr_crc-storage_868b5502-6c3e-4e3b-bc43-c0875e71512f_0(226a66fe9628635433b4f4b55ef256408541fc48c6791a51cfb7623df2a3a600): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 07:04:43 crc kubenswrapper[5136]: E0320 07:04:43.626454 5136 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jqplr_crc-storage_868b5502-6c3e-4e3b-bc43-c0875e71512f_0(226a66fe9628635433b4f4b55ef256408541fc48c6791a51cfb7623df2a3a600): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: E0320 07:04:43.626586 5136 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jqplr_crc-storage_868b5502-6c3e-4e3b-bc43-c0875e71512f_0(226a66fe9628635433b4f4b55ef256408541fc48c6791a51cfb7623df2a3a600): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: E0320 07:04:43.626730 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-jqplr_crc-storage(868b5502-6c3e-4e3b-bc43-c0875e71512f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-jqplr_crc-storage(868b5502-6c3e-4e3b-bc43-c0875e71512f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jqplr_crc-storage_868b5502-6c3e-4e3b-bc43-c0875e71512f_0(226a66fe9628635433b4f4b55ef256408541fc48c6791a51cfb7623df2a3a600): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-jqplr" podUID="868b5502-6c3e-4e3b-bc43-c0875e71512f" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.795165 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.795779 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: E0320 07:04:43.819464 5136 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jqplr_crc-storage_868b5502-6c3e-4e3b-bc43-c0875e71512f_0(e9e46ce0c63674d6f5f1d145700c5ff2dfbf3e102108a3dde3acea23e3ddd701): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 07:04:43 crc kubenswrapper[5136]: E0320 07:04:43.819554 5136 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jqplr_crc-storage_868b5502-6c3e-4e3b-bc43-c0875e71512f_0(e9e46ce0c63674d6f5f1d145700c5ff2dfbf3e102108a3dde3acea23e3ddd701): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: E0320 07:04:43.819625 5136 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jqplr_crc-storage_868b5502-6c3e-4e3b-bc43-c0875e71512f_0(e9e46ce0c63674d6f5f1d145700c5ff2dfbf3e102108a3dde3acea23e3ddd701): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: E0320 07:04:43.819746 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-jqplr_crc-storage(868b5502-6c3e-4e3b-bc43-c0875e71512f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-jqplr_crc-storage(868b5502-6c3e-4e3b-bc43-c0875e71512f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jqplr_crc-storage_868b5502-6c3e-4e3b-bc43-c0875e71512f_0(e9e46ce0c63674d6f5f1d145700c5ff2dfbf3e102108a3dde3acea23e3ddd701): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-jqplr" podUID="868b5502-6c3e-4e3b-bc43-c0875e71512f" Mar 20 07:04:45 crc kubenswrapper[5136]: I0320 07:04:45.822133 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:04:45 crc kubenswrapper[5136]: I0320 07:04:45.822963 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:04:56 crc kubenswrapper[5136]: I0320 07:04:56.396578 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:56 crc kubenswrapper[5136]: I0320 07:04:56.397559 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:56 crc kubenswrapper[5136]: I0320 07:04:56.596158 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-jqplr"] Mar 20 07:04:56 crc kubenswrapper[5136]: W0320 07:04:56.601398 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod868b5502_6c3e_4e3b_bc43_c0875e71512f.slice/crio-f4f12c399e84018d1e9e13b1d5c82efd2a9d048d7d148817160148b821359658 WatchSource:0}: Error finding container f4f12c399e84018d1e9e13b1d5c82efd2a9d048d7d148817160148b821359658: Status 404 returned error can't find the container with id f4f12c399e84018d1e9e13b1d5c82efd2a9d048d7d148817160148b821359658 Mar 20 07:04:56 crc kubenswrapper[5136]: I0320 07:04:56.878954 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-jqplr" event={"ID":"868b5502-6c3e-4e3b-bc43-c0875e71512f","Type":"ContainerStarted","Data":"f4f12c399e84018d1e9e13b1d5c82efd2a9d048d7d148817160148b821359658"} Mar 20 07:04:58 crc kubenswrapper[5136]: I0320 07:04:58.894154 5136 generic.go:334] "Generic (PLEG): container finished" podID="868b5502-6c3e-4e3b-bc43-c0875e71512f" containerID="1634bfed9d3426f391a9ba220363e60d18b7a13e0b5dd7787df7f812b3c4e0ea" exitCode=0 Mar 20 07:04:58 crc kubenswrapper[5136]: I0320 07:04:58.894246 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-jqplr" event={"ID":"868b5502-6c3e-4e3b-bc43-c0875e71512f","Type":"ContainerDied","Data":"1634bfed9d3426f391a9ba220363e60d18b7a13e0b5dd7787df7f812b3c4e0ea"} Mar 20 07:05:00 crc kubenswrapper[5136]: I0320 07:05:00.222158 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:05:00 crc kubenswrapper[5136]: I0320 07:05:00.283389 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shp6s\" (UniqueName: \"kubernetes.io/projected/868b5502-6c3e-4e3b-bc43-c0875e71512f-kube-api-access-shp6s\") pod \"868b5502-6c3e-4e3b-bc43-c0875e71512f\" (UID: \"868b5502-6c3e-4e3b-bc43-c0875e71512f\") " Mar 20 07:05:00 crc kubenswrapper[5136]: I0320 07:05:00.283434 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/868b5502-6c3e-4e3b-bc43-c0875e71512f-node-mnt\") pod \"868b5502-6c3e-4e3b-bc43-c0875e71512f\" (UID: \"868b5502-6c3e-4e3b-bc43-c0875e71512f\") " Mar 20 07:05:00 crc kubenswrapper[5136]: I0320 07:05:00.283464 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/868b5502-6c3e-4e3b-bc43-c0875e71512f-crc-storage\") pod \"868b5502-6c3e-4e3b-bc43-c0875e71512f\" (UID: \"868b5502-6c3e-4e3b-bc43-c0875e71512f\") " Mar 20 07:05:00 crc kubenswrapper[5136]: I0320 07:05:00.283569 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/868b5502-6c3e-4e3b-bc43-c0875e71512f-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "868b5502-6c3e-4e3b-bc43-c0875e71512f" (UID: "868b5502-6c3e-4e3b-bc43-c0875e71512f"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:05:00 crc kubenswrapper[5136]: I0320 07:05:00.289643 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/868b5502-6c3e-4e3b-bc43-c0875e71512f-kube-api-access-shp6s" (OuterVolumeSpecName: "kube-api-access-shp6s") pod "868b5502-6c3e-4e3b-bc43-c0875e71512f" (UID: "868b5502-6c3e-4e3b-bc43-c0875e71512f"). InnerVolumeSpecName "kube-api-access-shp6s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:05:00 crc kubenswrapper[5136]: I0320 07:05:00.296428 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/868b5502-6c3e-4e3b-bc43-c0875e71512f-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "868b5502-6c3e-4e3b-bc43-c0875e71512f" (UID: "868b5502-6c3e-4e3b-bc43-c0875e71512f"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:05:00 crc kubenswrapper[5136]: I0320 07:05:00.385116 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shp6s\" (UniqueName: \"kubernetes.io/projected/868b5502-6c3e-4e3b-bc43-c0875e71512f-kube-api-access-shp6s\") on node \"crc\" DevicePath \"\"" Mar 20 07:05:00 crc kubenswrapper[5136]: I0320 07:05:00.385159 5136 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/868b5502-6c3e-4e3b-bc43-c0875e71512f-node-mnt\") on node \"crc\" DevicePath \"\"" Mar 20 07:05:00 crc kubenswrapper[5136]: I0320 07:05:00.385173 5136 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/868b5502-6c3e-4e3b-bc43-c0875e71512f-crc-storage\") on node \"crc\" DevicePath \"\"" Mar 20 07:05:00 crc kubenswrapper[5136]: I0320 07:05:00.909286 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-jqplr" event={"ID":"868b5502-6c3e-4e3b-bc43-c0875e71512f","Type":"ContainerDied","Data":"f4f12c399e84018d1e9e13b1d5c82efd2a9d048d7d148817160148b821359658"} Mar 20 07:05:00 crc kubenswrapper[5136]: I0320 07:05:00.909331 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4f12c399e84018d1e9e13b1d5c82efd2a9d048d7d148817160148b821359658" Mar 20 07:05:00 crc kubenswrapper[5136]: I0320 07:05:00.910624 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:05:04 crc kubenswrapper[5136]: I0320 07:05:04.431421 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.541935 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj"] Mar 20 07:05:08 crc kubenswrapper[5136]: E0320 07:05:08.542480 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="868b5502-6c3e-4e3b-bc43-c0875e71512f" containerName="storage" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.542495 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="868b5502-6c3e-4e3b-bc43-c0875e71512f" containerName="storage" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.542618 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="868b5502-6c3e-4e3b-bc43-c0875e71512f" containerName="storage" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.543508 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.549456 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.563711 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj"] Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.627966 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj\" (UID: \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.628032 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj\" (UID: \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.628087 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfmtk\" (UniqueName: \"kubernetes.io/projected/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-kube-api-access-mfmtk\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj\" (UID: \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" Mar 20 07:05:08 crc kubenswrapper[5136]: 
I0320 07:05:08.729430 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfmtk\" (UniqueName: \"kubernetes.io/projected/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-kube-api-access-mfmtk\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj\" (UID: \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.729581 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj\" (UID: \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.729629 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj\" (UID: \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.730036 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj\" (UID: \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.730097 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj\" (UID: \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.747519 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfmtk\" (UniqueName: \"kubernetes.io/projected/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-kube-api-access-mfmtk\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj\" (UID: \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.862209 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" Mar 20 07:05:09 crc kubenswrapper[5136]: I0320 07:05:09.053872 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj"] Mar 20 07:05:09 crc kubenswrapper[5136]: I0320 07:05:09.978304 5136 generic.go:334] "Generic (PLEG): container finished" podID="3cef4dfa-acd1-43f2-adaa-3af5f28046f9" containerID="428e31f96cc381a34b7745bdf5482cfd780f7df9c2bdb635cbc9f4da3377165b" exitCode=0 Mar 20 07:05:09 crc kubenswrapper[5136]: I0320 07:05:09.979317 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" event={"ID":"3cef4dfa-acd1-43f2-adaa-3af5f28046f9","Type":"ContainerDied","Data":"428e31f96cc381a34b7745bdf5482cfd780f7df9c2bdb635cbc9f4da3377165b"} Mar 20 07:05:09 crc kubenswrapper[5136]: I0320 07:05:09.979345 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" event={"ID":"3cef4dfa-acd1-43f2-adaa-3af5f28046f9","Type":"ContainerStarted","Data":"89082b0913ce4f22585d5f49a80b816f5fde9425995a9cacb4fd7966f3c48e3b"} Mar 20 07:05:09 crc kubenswrapper[5136]: I0320 07:05:09.980530 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 07:05:11 crc kubenswrapper[5136]: I0320 07:05:11.995008 5136 generic.go:334] "Generic (PLEG): container finished" podID="3cef4dfa-acd1-43f2-adaa-3af5f28046f9" containerID="6e15132358ac7794cb0ef54b2c6349bb1c58f1888329abd0c0e15ab7c771b98a" exitCode=0 Mar 20 07:05:11 crc kubenswrapper[5136]: I0320 07:05:11.995099 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" event={"ID":"3cef4dfa-acd1-43f2-adaa-3af5f28046f9","Type":"ContainerDied","Data":"6e15132358ac7794cb0ef54b2c6349bb1c58f1888329abd0c0e15ab7c771b98a"} Mar 20 07:05:13 crc kubenswrapper[5136]: I0320 07:05:13.005367 5136 generic.go:334] "Generic (PLEG): container finished" podID="3cef4dfa-acd1-43f2-adaa-3af5f28046f9" containerID="b5b6ddf7cefec18aee7e2f69a837e4eb2c5c360df1caf1a8b0df1d45126e5b2d" exitCode=0 Mar 20 07:05:13 crc kubenswrapper[5136]: I0320 07:05:13.005468 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" event={"ID":"3cef4dfa-acd1-43f2-adaa-3af5f28046f9","Type":"ContainerDied","Data":"b5b6ddf7cefec18aee7e2f69a837e4eb2c5c360df1caf1a8b0df1d45126e5b2d"} Mar 20 07:05:14 crc kubenswrapper[5136]: I0320 07:05:14.334175 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" Mar 20 07:05:14 crc kubenswrapper[5136]: I0320 07:05:14.403229 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfmtk\" (UniqueName: \"kubernetes.io/projected/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-kube-api-access-mfmtk\") pod \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\" (UID: \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\") " Mar 20 07:05:14 crc kubenswrapper[5136]: I0320 07:05:14.403582 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-util\") pod \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\" (UID: \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\") " Mar 20 07:05:14 crc kubenswrapper[5136]: I0320 07:05:14.403635 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-bundle\") pod \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\" (UID: \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\") " Mar 20 07:05:14 crc kubenswrapper[5136]: I0320 07:05:14.405663 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-bundle" (OuterVolumeSpecName: "bundle") pod "3cef4dfa-acd1-43f2-adaa-3af5f28046f9" (UID: "3cef4dfa-acd1-43f2-adaa-3af5f28046f9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:05:14 crc kubenswrapper[5136]: I0320 07:05:14.413402 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-kube-api-access-mfmtk" (OuterVolumeSpecName: "kube-api-access-mfmtk") pod "3cef4dfa-acd1-43f2-adaa-3af5f28046f9" (UID: "3cef4dfa-acd1-43f2-adaa-3af5f28046f9"). InnerVolumeSpecName "kube-api-access-mfmtk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:05:14 crc kubenswrapper[5136]: I0320 07:05:14.418659 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-util" (OuterVolumeSpecName: "util") pod "3cef4dfa-acd1-43f2-adaa-3af5f28046f9" (UID: "3cef4dfa-acd1-43f2-adaa-3af5f28046f9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:05:14 crc kubenswrapper[5136]: I0320 07:05:14.505497 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfmtk\" (UniqueName: \"kubernetes.io/projected/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-kube-api-access-mfmtk\") on node \"crc\" DevicePath \"\"" Mar 20 07:05:14 crc kubenswrapper[5136]: I0320 07:05:14.505547 5136 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-util\") on node \"crc\" DevicePath \"\"" Mar 20 07:05:14 crc kubenswrapper[5136]: I0320 07:05:14.505566 5136 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:05:15 crc kubenswrapper[5136]: I0320 07:05:15.018531 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" event={"ID":"3cef4dfa-acd1-43f2-adaa-3af5f28046f9","Type":"ContainerDied","Data":"89082b0913ce4f22585d5f49a80b816f5fde9425995a9cacb4fd7966f3c48e3b"} Mar 20 07:05:15 crc kubenswrapper[5136]: I0320 07:05:15.018569 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89082b0913ce4f22585d5f49a80b816f5fde9425995a9cacb4fd7966f3c48e3b" Mar 20 07:05:15 crc kubenswrapper[5136]: I0320 07:05:15.018666 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" Mar 20 07:05:15 crc kubenswrapper[5136]: I0320 07:05:15.822418 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:05:15 crc kubenswrapper[5136]: I0320 07:05:15.822499 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:05:15 crc kubenswrapper[5136]: I0320 07:05:15.822559 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 07:05:15 crc kubenswrapper[5136]: I0320 07:05:15.823614 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"efc1d8deaa7f1e1d784e8be4e8d258b13fb86298f9c0df94ee4191513f62ba52"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 07:05:15 crc kubenswrapper[5136]: I0320 07:05:15.823713 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://efc1d8deaa7f1e1d784e8be4e8d258b13fb86298f9c0df94ee4191513f62ba52" gracePeriod=600 Mar 20 07:05:16 crc kubenswrapper[5136]: I0320 07:05:16.030506 5136 generic.go:334] "Generic (PLEG): 
container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="efc1d8deaa7f1e1d784e8be4e8d258b13fb86298f9c0df94ee4191513f62ba52" exitCode=0 Mar 20 07:05:16 crc kubenswrapper[5136]: I0320 07:05:16.030585 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"efc1d8deaa7f1e1d784e8be4e8d258b13fb86298f9c0df94ee4191513f62ba52"} Mar 20 07:05:16 crc kubenswrapper[5136]: I0320 07:05:16.031155 5136 scope.go:117] "RemoveContainer" containerID="1f8f243bed8a27d330b6c6e9f13f637dfb5738fe5b93ac9f02957069f4cfce8a" Mar 20 07:05:17 crc kubenswrapper[5136]: I0320 07:05:17.039483 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"f8e515aa640e8c2897bc9d76b24ec080a3948c8f2224026c8645b6359dd2670f"} Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.073535 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-mzffz"] Mar 20 07:05:18 crc kubenswrapper[5136]: E0320 07:05:18.074105 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cef4dfa-acd1-43f2-adaa-3af5f28046f9" containerName="util" Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.074122 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cef4dfa-acd1-43f2-adaa-3af5f28046f9" containerName="util" Mar 20 07:05:18 crc kubenswrapper[5136]: E0320 07:05:18.074132 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cef4dfa-acd1-43f2-adaa-3af5f28046f9" containerName="pull" Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.074142 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cef4dfa-acd1-43f2-adaa-3af5f28046f9" containerName="pull" Mar 20 07:05:18 crc kubenswrapper[5136]: E0320 07:05:18.074169 5136 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cef4dfa-acd1-43f2-adaa-3af5f28046f9" containerName="extract" Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.074178 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cef4dfa-acd1-43f2-adaa-3af5f28046f9" containerName="extract" Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.074300 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cef4dfa-acd1-43f2-adaa-3af5f28046f9" containerName="extract" Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.074742 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-mzffz" Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.076392 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-8wbjw" Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.076776 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.076976 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.117784 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-mzffz"] Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.169575 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzklz\" (UniqueName: \"kubernetes.io/projected/94018849-bf2a-47b4-be05-5e9ff0e0dfbd-kube-api-access-xzklz\") pod \"nmstate-operator-796d4cfff4-mzffz\" (UID: \"94018849-bf2a-47b4-be05-5e9ff0e0dfbd\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-mzffz" Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.271040 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-xzklz\" (UniqueName: \"kubernetes.io/projected/94018849-bf2a-47b4-be05-5e9ff0e0dfbd-kube-api-access-xzklz\") pod \"nmstate-operator-796d4cfff4-mzffz\" (UID: \"94018849-bf2a-47b4-be05-5e9ff0e0dfbd\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-mzffz" Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.288634 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzklz\" (UniqueName: \"kubernetes.io/projected/94018849-bf2a-47b4-be05-5e9ff0e0dfbd-kube-api-access-xzklz\") pod \"nmstate-operator-796d4cfff4-mzffz\" (UID: \"94018849-bf2a-47b4-be05-5e9ff0e0dfbd\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-mzffz" Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.389254 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-mzffz" Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.582651 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-mzffz"] Mar 20 07:05:19 crc kubenswrapper[5136]: I0320 07:05:19.049412 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-mzffz" event={"ID":"94018849-bf2a-47b4-be05-5e9ff0e0dfbd","Type":"ContainerStarted","Data":"76997a707fd2f10632b8dd5bb0ae9f9ccb9785781e7ea26639f8901c3e69b18f"} Mar 20 07:05:21 crc kubenswrapper[5136]: I0320 07:05:21.064330 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-mzffz" event={"ID":"94018849-bf2a-47b4-be05-5e9ff0e0dfbd","Type":"ContainerStarted","Data":"4071e7ab7dc9dc2e18df122edb9d6a3baae7ce2f6b36ad43a35a7aea2e94d2a0"} Mar 20 07:05:21 crc kubenswrapper[5136]: I0320 07:05:21.091379 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-mzffz" podStartSLOduration=1.124671199 podStartE2EDuration="3.091353008s" 
podCreationTimestamp="2026-03-20 07:05:18 +0000 UTC" firstStartedPulling="2026-03-20 07:05:18.587176765 +0000 UTC m=+950.846487906" lastFinishedPulling="2026-03-20 07:05:20.553858554 +0000 UTC m=+952.813169715" observedRunningTime="2026-03-20 07:05:21.084453511 +0000 UTC m=+953.343764662" watchObservedRunningTime="2026-03-20 07:05:21.091353008 +0000 UTC m=+953.350664199" Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.304107 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5rs9j"] Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.305623 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.325303 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rs9j"] Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.375046 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b32ea3-4438-4807-9a81-41026ec34ad8-utilities\") pod \"redhat-marketplace-5rs9j\" (UID: \"69b32ea3-4438-4807-9a81-41026ec34ad8\") " pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.375091 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ldh9\" (UniqueName: \"kubernetes.io/projected/69b32ea3-4438-4807-9a81-41026ec34ad8-kube-api-access-7ldh9\") pod \"redhat-marketplace-5rs9j\" (UID: \"69b32ea3-4438-4807-9a81-41026ec34ad8\") " pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.375126 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/69b32ea3-4438-4807-9a81-41026ec34ad8-catalog-content\") pod \"redhat-marketplace-5rs9j\" (UID: \"69b32ea3-4438-4807-9a81-41026ec34ad8\") " pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.476181 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b32ea3-4438-4807-9a81-41026ec34ad8-catalog-content\") pod \"redhat-marketplace-5rs9j\" (UID: \"69b32ea3-4438-4807-9a81-41026ec34ad8\") " pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.476330 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b32ea3-4438-4807-9a81-41026ec34ad8-utilities\") pod \"redhat-marketplace-5rs9j\" (UID: \"69b32ea3-4438-4807-9a81-41026ec34ad8\") " pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.476349 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ldh9\" (UniqueName: \"kubernetes.io/projected/69b32ea3-4438-4807-9a81-41026ec34ad8-kube-api-access-7ldh9\") pod \"redhat-marketplace-5rs9j\" (UID: \"69b32ea3-4438-4807-9a81-41026ec34ad8\") " pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.476669 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b32ea3-4438-4807-9a81-41026ec34ad8-catalog-content\") pod \"redhat-marketplace-5rs9j\" (UID: \"69b32ea3-4438-4807-9a81-41026ec34ad8\") " pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.477189 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/69b32ea3-4438-4807-9a81-41026ec34ad8-utilities\") pod \"redhat-marketplace-5rs9j\" (UID: \"69b32ea3-4438-4807-9a81-41026ec34ad8\") " pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.497943 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ldh9\" (UniqueName: \"kubernetes.io/projected/69b32ea3-4438-4807-9a81-41026ec34ad8-kube-api-access-7ldh9\") pod \"redhat-marketplace-5rs9j\" (UID: \"69b32ea3-4438-4807-9a81-41026ec34ad8\") " pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.621850 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.885801 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rs9j"] Mar 20 07:05:27 crc kubenswrapper[5136]: I0320 07:05:27.100946 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rs9j" event={"ID":"69b32ea3-4438-4807-9a81-41026ec34ad8","Type":"ContainerStarted","Data":"bc0eebe03fa0a2eff9b96cce2d6b25860cbf759d0d063c2d2b7e4310b06c1aea"} Mar 20 07:05:27 crc kubenswrapper[5136]: I0320 07:05:27.100997 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rs9j" event={"ID":"69b32ea3-4438-4807-9a81-41026ec34ad8","Type":"ContainerStarted","Data":"3ac1ac65683420394fa986da7d4411ae731402826717eacf8969f7dd09435281"} Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.110496 5136 generic.go:334] "Generic (PLEG): container finished" podID="69b32ea3-4438-4807-9a81-41026ec34ad8" containerID="bc0eebe03fa0a2eff9b96cce2d6b25860cbf759d0d063c2d2b7e4310b06c1aea" exitCode=0 Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.110583 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-5rs9j" event={"ID":"69b32ea3-4438-4807-9a81-41026ec34ad8","Type":"ContainerDied","Data":"bc0eebe03fa0a2eff9b96cce2d6b25860cbf759d0d063c2d2b7e4310b06c1aea"} Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.806681 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-dxl94"] Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.807718 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dxl94" Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.817027 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-j7mkp" Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.831975 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-dxl94"] Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.848961 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-k7799"] Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.849770 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-k7799" Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.853295 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.880923 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-k7799"] Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.900240 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-7bqsc"] Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.901136 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.919507 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwdvp\" (UniqueName: \"kubernetes.io/projected/21fd222d-3101-4c49-bbca-611916a57ae8-kube-api-access-fwdvp\") pod \"nmstate-metrics-9b8c8685d-dxl94\" (UID: \"21fd222d-3101-4c49-bbca-611916a57ae8\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dxl94" Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.919587 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8bnn\" (UniqueName: \"kubernetes.io/projected/a6f3f958-ebef-4d11-be1e-1cd2d431006c-kube-api-access-q8bnn\") pod \"nmstate-webhook-5f558f5558-k7799\" (UID: \"a6f3f958-ebef-4d11-be1e-1cd2d431006c\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-k7799" Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.919634 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a6f3f958-ebef-4d11-be1e-1cd2d431006c-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-k7799\" (UID: \"a6f3f958-ebef-4d11-be1e-1cd2d431006c\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-k7799" Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.994389 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf"] Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.999203 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.000773 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf"] Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.004543 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.004705 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.005848 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-nb66b" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.006687 5136 scope.go:117] "RemoveContainer" containerID="1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.021435 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a6f3f958-ebef-4d11-be1e-1cd2d431006c-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-k7799\" (UID: \"a6f3f958-ebef-4d11-be1e-1cd2d431006c\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-k7799" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.021517 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwdvp\" (UniqueName: \"kubernetes.io/projected/21fd222d-3101-4c49-bbca-611916a57ae8-kube-api-access-fwdvp\") pod \"nmstate-metrics-9b8c8685d-dxl94\" (UID: \"21fd222d-3101-4c49-bbca-611916a57ae8\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dxl94" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.021549 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/43a9811e-7a36-4f11-9f02-ac3e4c00c42d-dbus-socket\") pod \"nmstate-handler-7bqsc\" (UID: \"43a9811e-7a36-4f11-9f02-ac3e4c00c42d\") " pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.021575 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/43a9811e-7a36-4f11-9f02-ac3e4c00c42d-ovs-socket\") pod \"nmstate-handler-7bqsc\" (UID: \"43a9811e-7a36-4f11-9f02-ac3e4c00c42d\") " pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.021611 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/43a9811e-7a36-4f11-9f02-ac3e4c00c42d-nmstate-lock\") pod \"nmstate-handler-7bqsc\" (UID: \"43a9811e-7a36-4f11-9f02-ac3e4c00c42d\") " pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.021648 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8bnn\" (UniqueName: \"kubernetes.io/projected/a6f3f958-ebef-4d11-be1e-1cd2d431006c-kube-api-access-q8bnn\") pod \"nmstate-webhook-5f558f5558-k7799\" (UID: \"a6f3f958-ebef-4d11-be1e-1cd2d431006c\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-k7799" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.021677 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22tj2\" (UniqueName: \"kubernetes.io/projected/43a9811e-7a36-4f11-9f02-ac3e4c00c42d-kube-api-access-22tj2\") pod \"nmstate-handler-7bqsc\" (UID: \"43a9811e-7a36-4f11-9f02-ac3e4c00c42d\") " pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.044157 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: 
\"kubernetes.io/secret/a6f3f958-ebef-4d11-be1e-1cd2d431006c-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-k7799\" (UID: \"a6f3f958-ebef-4d11-be1e-1cd2d431006c\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-k7799" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.061726 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwdvp\" (UniqueName: \"kubernetes.io/projected/21fd222d-3101-4c49-bbca-611916a57ae8-kube-api-access-fwdvp\") pod \"nmstate-metrics-9b8c8685d-dxl94\" (UID: \"21fd222d-3101-4c49-bbca-611916a57ae8\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dxl94" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.061939 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8bnn\" (UniqueName: \"kubernetes.io/projected/a6f3f958-ebef-4d11-be1e-1cd2d431006c-kube-api-access-q8bnn\") pod \"nmstate-webhook-5f558f5558-k7799\" (UID: \"a6f3f958-ebef-4d11-be1e-1cd2d431006c\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-k7799" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.122574 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3bdd0e88-cfa4-410a-b619-7918a813120d-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-rsxkf\" (UID: \"3bdd0e88-cfa4-410a-b619-7918a813120d\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.122639 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/43a9811e-7a36-4f11-9f02-ac3e4c00c42d-nmstate-lock\") pod \"nmstate-handler-7bqsc\" (UID: \"43a9811e-7a36-4f11-9f02-ac3e4c00c42d\") " pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.122667 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3bdd0e88-cfa4-410a-b619-7918a813120d-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-rsxkf\" (UID: \"3bdd0e88-cfa4-410a-b619-7918a813120d\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.122726 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22tj2\" (UniqueName: \"kubernetes.io/projected/43a9811e-7a36-4f11-9f02-ac3e4c00c42d-kube-api-access-22tj2\") pod \"nmstate-handler-7bqsc\" (UID: \"43a9811e-7a36-4f11-9f02-ac3e4c00c42d\") " pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.122763 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svx8d\" (UniqueName: \"kubernetes.io/projected/3bdd0e88-cfa4-410a-b619-7918a813120d-kube-api-access-svx8d\") pod \"nmstate-console-plugin-86f58fcf4-rsxkf\" (UID: \"3bdd0e88-cfa4-410a-b619-7918a813120d\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.122860 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tjpps_263c5427-a835-40c6-93cb-4bb66a83ea5b/kube-multus/2.log" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.122798 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/43a9811e-7a36-4f11-9f02-ac3e4c00c42d-nmstate-lock\") pod \"nmstate-handler-7bqsc\" (UID: \"43a9811e-7a36-4f11-9f02-ac3e4c00c42d\") " pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.122996 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/43a9811e-7a36-4f11-9f02-ac3e4c00c42d-dbus-socket\") pod 
\"nmstate-handler-7bqsc\" (UID: \"43a9811e-7a36-4f11-9f02-ac3e4c00c42d\") " pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.123051 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/43a9811e-7a36-4f11-9f02-ac3e4c00c42d-ovs-socket\") pod \"nmstate-handler-7bqsc\" (UID: \"43a9811e-7a36-4f11-9f02-ac3e4c00c42d\") " pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.123137 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/43a9811e-7a36-4f11-9f02-ac3e4c00c42d-ovs-socket\") pod \"nmstate-handler-7bqsc\" (UID: \"43a9811e-7a36-4f11-9f02-ac3e4c00c42d\") " pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.123317 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/43a9811e-7a36-4f11-9f02-ac3e4c00c42d-dbus-socket\") pod \"nmstate-handler-7bqsc\" (UID: \"43a9811e-7a36-4f11-9f02-ac3e4c00c42d\") " pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.125026 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rs9j" event={"ID":"69b32ea3-4438-4807-9a81-41026ec34ad8","Type":"ContainerStarted","Data":"99020cf926b90ef473a75ca9d43847ea5e162fdffd7c1c073e067327673f7a95"} Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.147456 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22tj2\" (UniqueName: \"kubernetes.io/projected/43a9811e-7a36-4f11-9f02-ac3e4c00c42d-kube-api-access-22tj2\") pod \"nmstate-handler-7bqsc\" (UID: \"43a9811e-7a36-4f11-9f02-ac3e4c00c42d\") " pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 
07:05:29.165974 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dxl94" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.172668 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-d7dd7d448-jtlk5"] Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.173464 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.189082 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-d7dd7d448-jtlk5"] Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.195767 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-k7799" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.224186 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3bdd0e88-cfa4-410a-b619-7918a813120d-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-rsxkf\" (UID: \"3bdd0e88-cfa4-410a-b619-7918a813120d\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.224257 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1d701525-fad2-4a68-8594-a7d5020c6883-oauth-serving-cert\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.224286 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1d701525-fad2-4a68-8594-a7d5020c6883-console-oauth-config\") pod 
\"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.224308 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svx8d\" (UniqueName: \"kubernetes.io/projected/3bdd0e88-cfa4-410a-b619-7918a813120d-kube-api-access-svx8d\") pod \"nmstate-console-plugin-86f58fcf4-rsxkf\" (UID: \"3bdd0e88-cfa4-410a-b619-7918a813120d\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.224332 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1d701525-fad2-4a68-8594-a7d5020c6883-service-ca\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.224366 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d701525-fad2-4a68-8594-a7d5020c6883-trusted-ca-bundle\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.224389 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkbzh\" (UniqueName: \"kubernetes.io/projected/1d701525-fad2-4a68-8594-a7d5020c6883-kube-api-access-dkbzh\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.224417 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/3bdd0e88-cfa4-410a-b619-7918a813120d-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-rsxkf\" (UID: \"3bdd0e88-cfa4-410a-b619-7918a813120d\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" Mar 20 07:05:29 crc kubenswrapper[5136]: E0320 07:05:29.224419 5136 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.224471 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.224448 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1d701525-fad2-4a68-8594-a7d5020c6883-console-config\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: E0320 07:05:29.224555 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3bdd0e88-cfa4-410a-b619-7918a813120d-plugin-serving-cert podName:3bdd0e88-cfa4-410a-b619-7918a813120d nodeName:}" failed. No retries permitted until 2026-03-20 07:05:29.724507657 +0000 UTC m=+961.983818808 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/3bdd0e88-cfa4-410a-b619-7918a813120d-plugin-serving-cert") pod "nmstate-console-plugin-86f58fcf4-rsxkf" (UID: "3bdd0e88-cfa4-410a-b619-7918a813120d") : secret "plugin-serving-cert" not found Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.225351 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d701525-fad2-4a68-8594-a7d5020c6883-console-serving-cert\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.225473 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3bdd0e88-cfa4-410a-b619-7918a813120d-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-rsxkf\" (UID: \"3bdd0e88-cfa4-410a-b619-7918a813120d\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.243118 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svx8d\" (UniqueName: \"kubernetes.io/projected/3bdd0e88-cfa4-410a-b619-7918a813120d-kube-api-access-svx8d\") pod \"nmstate-console-plugin-86f58fcf4-rsxkf\" (UID: \"3bdd0e88-cfa4-410a-b619-7918a813120d\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" Mar 20 07:05:29 crc kubenswrapper[5136]: W0320 07:05:29.293026 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43a9811e_7a36_4f11_9f02_ac3e4c00c42d.slice/crio-709dfa8c7706b93d121ad6665b59eb754d82e9491db881215c3bda46ec7c5c70 WatchSource:0}: Error finding container 709dfa8c7706b93d121ad6665b59eb754d82e9491db881215c3bda46ec7c5c70: Status 404 returned error can't find the container 
with id 709dfa8c7706b93d121ad6665b59eb754d82e9491db881215c3bda46ec7c5c70 Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.326656 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1d701525-fad2-4a68-8594-a7d5020c6883-console-config\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.326722 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d701525-fad2-4a68-8594-a7d5020c6883-console-serving-cert\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.326774 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1d701525-fad2-4a68-8594-a7d5020c6883-oauth-serving-cert\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.326797 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1d701525-fad2-4a68-8594-a7d5020c6883-console-oauth-config\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.326854 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1d701525-fad2-4a68-8594-a7d5020c6883-service-ca\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " 
pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.326886 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d701525-fad2-4a68-8594-a7d5020c6883-trusted-ca-bundle\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.326905 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkbzh\" (UniqueName: \"kubernetes.io/projected/1d701525-fad2-4a68-8594-a7d5020c6883-kube-api-access-dkbzh\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.327961 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1d701525-fad2-4a68-8594-a7d5020c6883-console-config\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.328793 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1d701525-fad2-4a68-8594-a7d5020c6883-oauth-serving-cert\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.328912 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1d701525-fad2-4a68-8594-a7d5020c6883-service-ca\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc 
kubenswrapper[5136]: I0320 07:05:29.329576 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d701525-fad2-4a68-8594-a7d5020c6883-trusted-ca-bundle\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.333678 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1d701525-fad2-4a68-8594-a7d5020c6883-console-oauth-config\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.334245 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d701525-fad2-4a68-8594-a7d5020c6883-console-serving-cert\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.342837 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkbzh\" (UniqueName: \"kubernetes.io/projected/1d701525-fad2-4a68-8594-a7d5020c6883-kube-api-access-dkbzh\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.378646 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-dxl94"] Mar 20 07:05:29 crc kubenswrapper[5136]: W0320 07:05:29.385507 5136 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21fd222d_3101_4c49_bbca_611916a57ae8.slice/crio-615f384adc63751c0e3815a5e1dda99c0f2ca809e1c494525e30531c7c0a70d5 WatchSource:0}: Error finding container 615f384adc63751c0e3815a5e1dda99c0f2ca809e1c494525e30531c7c0a70d5: Status 404 returned error can't find the container with id 615f384adc63751c0e3815a5e1dda99c0f2ca809e1c494525e30531c7c0a70d5 Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.452393 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-k7799"] Mar 20 07:05:29 crc kubenswrapper[5136]: W0320 07:05:29.457290 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6f3f958_ebef_4d11_be1e_1cd2d431006c.slice/crio-7fdd004d3e27527e5b18765e50f822e124239301b750c06ebc8d50d713272b52 WatchSource:0}: Error finding container 7fdd004d3e27527e5b18765e50f822e124239301b750c06ebc8d50d713272b52: Status 404 returned error can't find the container with id 7fdd004d3e27527e5b18765e50f822e124239301b750c06ebc8d50d713272b52 Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.497500 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.681968 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-d7dd7d448-jtlk5"] Mar 20 07:05:29 crc kubenswrapper[5136]: W0320 07:05:29.692703 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d701525_fad2_4a68_8594_a7d5020c6883.slice/crio-7873cdb5c88cce32d3e577cf74b78c7f19428662efa1a82d94c373e220f588c9 WatchSource:0}: Error finding container 7873cdb5c88cce32d3e577cf74b78c7f19428662efa1a82d94c373e220f588c9: Status 404 returned error can't find the container with id 7873cdb5c88cce32d3e577cf74b78c7f19428662efa1a82d94c373e220f588c9 Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.733128 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3bdd0e88-cfa4-410a-b619-7918a813120d-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-rsxkf\" (UID: \"3bdd0e88-cfa4-410a-b619-7918a813120d\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.738759 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3bdd0e88-cfa4-410a-b619-7918a813120d-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-rsxkf\" (UID: \"3bdd0e88-cfa4-410a-b619-7918a813120d\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.914859 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" Mar 20 07:05:30 crc kubenswrapper[5136]: I0320 07:05:30.099860 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf"] Mar 20 07:05:30 crc kubenswrapper[5136]: W0320 07:05:30.112026 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bdd0e88_cfa4_410a_b619_7918a813120d.slice/crio-958e00c6e35162a8b8c7eb9336c6f7853a82a87997f76b8dc4a11c8834c3dbd2 WatchSource:0}: Error finding container 958e00c6e35162a8b8c7eb9336c6f7853a82a87997f76b8dc4a11c8834c3dbd2: Status 404 returned error can't find the container with id 958e00c6e35162a8b8c7eb9336c6f7853a82a87997f76b8dc4a11c8834c3dbd2 Mar 20 07:05:30 crc kubenswrapper[5136]: I0320 07:05:30.136288 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" event={"ID":"3bdd0e88-cfa4-410a-b619-7918a813120d","Type":"ContainerStarted","Data":"958e00c6e35162a8b8c7eb9336c6f7853a82a87997f76b8dc4a11c8834c3dbd2"} Mar 20 07:05:30 crc kubenswrapper[5136]: I0320 07:05:30.137625 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dxl94" event={"ID":"21fd222d-3101-4c49-bbca-611916a57ae8","Type":"ContainerStarted","Data":"615f384adc63751c0e3815a5e1dda99c0f2ca809e1c494525e30531c7c0a70d5"} Mar 20 07:05:30 crc kubenswrapper[5136]: I0320 07:05:30.138564 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-k7799" event={"ID":"a6f3f958-ebef-4d11-be1e-1cd2d431006c","Type":"ContainerStarted","Data":"7fdd004d3e27527e5b18765e50f822e124239301b750c06ebc8d50d713272b52"} Mar 20 07:05:30 crc kubenswrapper[5136]: I0320 07:05:30.139758 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-d7dd7d448-jtlk5" 
event={"ID":"1d701525-fad2-4a68-8594-a7d5020c6883","Type":"ContainerStarted","Data":"ada7116ce017937b8f0a9a111686b7c7a64ae8609eb962c8ceca25296b62bd8e"} Mar 20 07:05:30 crc kubenswrapper[5136]: I0320 07:05:30.139784 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-d7dd7d448-jtlk5" event={"ID":"1d701525-fad2-4a68-8594-a7d5020c6883","Type":"ContainerStarted","Data":"7873cdb5c88cce32d3e577cf74b78c7f19428662efa1a82d94c373e220f588c9"} Mar 20 07:05:30 crc kubenswrapper[5136]: I0320 07:05:30.140766 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7bqsc" event={"ID":"43a9811e-7a36-4f11-9f02-ac3e4c00c42d","Type":"ContainerStarted","Data":"709dfa8c7706b93d121ad6665b59eb754d82e9491db881215c3bda46ec7c5c70"} Mar 20 07:05:30 crc kubenswrapper[5136]: I0320 07:05:30.145301 5136 generic.go:334] "Generic (PLEG): container finished" podID="69b32ea3-4438-4807-9a81-41026ec34ad8" containerID="99020cf926b90ef473a75ca9d43847ea5e162fdffd7c1c073e067327673f7a95" exitCode=0 Mar 20 07:05:30 crc kubenswrapper[5136]: I0320 07:05:30.145330 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rs9j" event={"ID":"69b32ea3-4438-4807-9a81-41026ec34ad8","Type":"ContainerDied","Data":"99020cf926b90ef473a75ca9d43847ea5e162fdffd7c1c073e067327673f7a95"} Mar 20 07:05:30 crc kubenswrapper[5136]: I0320 07:05:30.163038 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-d7dd7d448-jtlk5" podStartSLOduration=1.163017706 podStartE2EDuration="1.163017706s" podCreationTimestamp="2026-03-20 07:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:05:30.159068823 +0000 UTC m=+962.418380004" watchObservedRunningTime="2026-03-20 07:05:30.163017706 +0000 UTC m=+962.422328867" Mar 20 07:05:31 crc kubenswrapper[5136]: I0320 07:05:31.157086 
5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rs9j" event={"ID":"69b32ea3-4438-4807-9a81-41026ec34ad8","Type":"ContainerStarted","Data":"b05c4a3c4517ab944ca7d69a16a3e8fedde605ac2f10e67681f516572dc40bb6"} Mar 20 07:05:31 crc kubenswrapper[5136]: I0320 07:05:31.173112 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5rs9j" podStartSLOduration=2.643470388 podStartE2EDuration="5.173089568s" podCreationTimestamp="2026-03-20 07:05:26 +0000 UTC" firstStartedPulling="2026-03-20 07:05:28.112476548 +0000 UTC m=+960.371787709" lastFinishedPulling="2026-03-20 07:05:30.642095728 +0000 UTC m=+962.901406889" observedRunningTime="2026-03-20 07:05:31.172290134 +0000 UTC m=+963.431601295" watchObservedRunningTime="2026-03-20 07:05:31.173089568 +0000 UTC m=+963.432400719" Mar 20 07:05:33 crc kubenswrapper[5136]: I0320 07:05:33.173186 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-k7799" event={"ID":"a6f3f958-ebef-4d11-be1e-1cd2d431006c","Type":"ContainerStarted","Data":"e58a00bc58c6fd5a4a4e721567778597409a6f4dc3ca2e1856e95c2cfd9d8bc9"} Mar 20 07:05:33 crc kubenswrapper[5136]: I0320 07:05:33.173987 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-k7799" Mar 20 07:05:33 crc kubenswrapper[5136]: I0320 07:05:33.176702 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7bqsc" event={"ID":"43a9811e-7a36-4f11-9f02-ac3e4c00c42d","Type":"ContainerStarted","Data":"53317609cf6b412238f8997f579f91bb64920e388729cadbb6fd564056bcf41f"} Mar 20 07:05:33 crc kubenswrapper[5136]: I0320 07:05:33.176868 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:33 crc kubenswrapper[5136]: I0320 07:05:33.178712 5136 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" event={"ID":"3bdd0e88-cfa4-410a-b619-7918a813120d","Type":"ContainerStarted","Data":"38a3dcf7fe14a3caebdc8d3d4f563149c7861e752afc066b5ef4a3d2abd15437"} Mar 20 07:05:33 crc kubenswrapper[5136]: I0320 07:05:33.180572 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dxl94" event={"ID":"21fd222d-3101-4c49-bbca-611916a57ae8","Type":"ContainerStarted","Data":"cfc87f4736255fedf55915c3a92a5749bd2cefc197e5ea1195c401a8324f1128"} Mar 20 07:05:33 crc kubenswrapper[5136]: I0320 07:05:33.197570 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-k7799" podStartSLOduration=1.959258296 podStartE2EDuration="5.197533599s" podCreationTimestamp="2026-03-20 07:05:28 +0000 UTC" firstStartedPulling="2026-03-20 07:05:29.460395174 +0000 UTC m=+961.719706325" lastFinishedPulling="2026-03-20 07:05:32.698670477 +0000 UTC m=+964.957981628" observedRunningTime="2026-03-20 07:05:33.189188297 +0000 UTC m=+965.448499448" watchObservedRunningTime="2026-03-20 07:05:33.197533599 +0000 UTC m=+965.456844760" Mar 20 07:05:33 crc kubenswrapper[5136]: I0320 07:05:33.239284 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-7bqsc" podStartSLOduration=1.834896477 podStartE2EDuration="5.239264287s" podCreationTimestamp="2026-03-20 07:05:28 +0000 UTC" firstStartedPulling="2026-03-20 07:05:29.302975998 +0000 UTC m=+961.562287149" lastFinishedPulling="2026-03-20 07:05:32.707343808 +0000 UTC m=+964.966654959" observedRunningTime="2026-03-20 07:05:33.238596467 +0000 UTC m=+965.497907618" watchObservedRunningTime="2026-03-20 07:05:33.239264287 +0000 UTC m=+965.498575438" Mar 20 07:05:33 crc kubenswrapper[5136]: I0320 07:05:33.260585 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" podStartSLOduration=2.676025173 podStartE2EDuration="5.260567536s" podCreationTimestamp="2026-03-20 07:05:28 +0000 UTC" firstStartedPulling="2026-03-20 07:05:30.113725721 +0000 UTC m=+962.373036872" lastFinishedPulling="2026-03-20 07:05:32.698268084 +0000 UTC m=+964.957579235" observedRunningTime="2026-03-20 07:05:33.258332786 +0000 UTC m=+965.517643937" watchObservedRunningTime="2026-03-20 07:05:33.260567536 +0000 UTC m=+965.519878687" Mar 20 07:05:35 crc kubenswrapper[5136]: I0320 07:05:35.198850 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dxl94" event={"ID":"21fd222d-3101-4c49-bbca-611916a57ae8","Type":"ContainerStarted","Data":"bf8589fddb236ddf7aa044ac3b5d89aef5a7acfd6d11b710bef35338be588442"} Mar 20 07:05:36 crc kubenswrapper[5136]: I0320 07:05:36.622708 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:36 crc kubenswrapper[5136]: I0320 07:05:36.622754 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:36 crc kubenswrapper[5136]: I0320 07:05:36.694158 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:36 crc kubenswrapper[5136]: I0320 07:05:36.720224 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dxl94" podStartSLOduration=3.186077606 podStartE2EDuration="8.720201078s" podCreationTimestamp="2026-03-20 07:05:28 +0000 UTC" firstStartedPulling="2026-03-20 07:05:29.388577262 +0000 UTC m=+961.647888413" lastFinishedPulling="2026-03-20 07:05:34.922700734 +0000 UTC m=+967.182011885" observedRunningTime="2026-03-20 07:05:35.227272675 +0000 UTC m=+967.486583826" watchObservedRunningTime="2026-03-20 
07:05:36.720201078 +0000 UTC m=+968.979512269" Mar 20 07:05:37 crc kubenswrapper[5136]: I0320 07:05:37.276079 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:37 crc kubenswrapper[5136]: I0320 07:05:37.330120 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rs9j"] Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.228649 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5rs9j" podUID="69b32ea3-4438-4807-9a81-41026ec34ad8" containerName="registry-server" containerID="cri-o://b05c4a3c4517ab944ca7d69a16a3e8fedde605ac2f10e67681f516572dc40bb6" gracePeriod=2 Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.249073 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.498323 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.498636 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.507793 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.645335 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.670677 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b32ea3-4438-4807-9a81-41026ec34ad8-catalog-content\") pod \"69b32ea3-4438-4807-9a81-41026ec34ad8\" (UID: \"69b32ea3-4438-4807-9a81-41026ec34ad8\") " Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.670845 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b32ea3-4438-4807-9a81-41026ec34ad8-utilities\") pod \"69b32ea3-4438-4807-9a81-41026ec34ad8\" (UID: \"69b32ea3-4438-4807-9a81-41026ec34ad8\") " Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.670887 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ldh9\" (UniqueName: \"kubernetes.io/projected/69b32ea3-4438-4807-9a81-41026ec34ad8-kube-api-access-7ldh9\") pod \"69b32ea3-4438-4807-9a81-41026ec34ad8\" (UID: \"69b32ea3-4438-4807-9a81-41026ec34ad8\") " Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.671664 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69b32ea3-4438-4807-9a81-41026ec34ad8-utilities" (OuterVolumeSpecName: "utilities") pod "69b32ea3-4438-4807-9a81-41026ec34ad8" (UID: "69b32ea3-4438-4807-9a81-41026ec34ad8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.678587 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69b32ea3-4438-4807-9a81-41026ec34ad8-kube-api-access-7ldh9" (OuterVolumeSpecName: "kube-api-access-7ldh9") pod "69b32ea3-4438-4807-9a81-41026ec34ad8" (UID: "69b32ea3-4438-4807-9a81-41026ec34ad8"). InnerVolumeSpecName "kube-api-access-7ldh9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.693450 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69b32ea3-4438-4807-9a81-41026ec34ad8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69b32ea3-4438-4807-9a81-41026ec34ad8" (UID: "69b32ea3-4438-4807-9a81-41026ec34ad8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.772675 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b32ea3-4438-4807-9a81-41026ec34ad8-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.772727 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b32ea3-4438-4807-9a81-41026ec34ad8-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.772752 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ldh9\" (UniqueName: \"kubernetes.io/projected/69b32ea3-4438-4807-9a81-41026ec34ad8-kube-api-access-7ldh9\") on node \"crc\" DevicePath \"\"" Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.236296 5136 generic.go:334] "Generic (PLEG): container finished" podID="69b32ea3-4438-4807-9a81-41026ec34ad8" containerID="b05c4a3c4517ab944ca7d69a16a3e8fedde605ac2f10e67681f516572dc40bb6" exitCode=0 Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.236377 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.236369 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rs9j" event={"ID":"69b32ea3-4438-4807-9a81-41026ec34ad8","Type":"ContainerDied","Data":"b05c4a3c4517ab944ca7d69a16a3e8fedde605ac2f10e67681f516572dc40bb6"} Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.236904 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rs9j" event={"ID":"69b32ea3-4438-4807-9a81-41026ec34ad8","Type":"ContainerDied","Data":"3ac1ac65683420394fa986da7d4411ae731402826717eacf8969f7dd09435281"} Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.236936 5136 scope.go:117] "RemoveContainer" containerID="b05c4a3c4517ab944ca7d69a16a3e8fedde605ac2f10e67681f516572dc40bb6" Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.243653 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.267503 5136 scope.go:117] "RemoveContainer" containerID="99020cf926b90ef473a75ca9d43847ea5e162fdffd7c1c073e067327673f7a95" Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.287538 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rs9j"] Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.294998 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rs9j"] Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.304094 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-bjqjp"] Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.312190 5136 scope.go:117] "RemoveContainer" containerID="bc0eebe03fa0a2eff9b96cce2d6b25860cbf759d0d063c2d2b7e4310b06c1aea" Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 
07:05:40.344400 5136 scope.go:117] "RemoveContainer" containerID="b05c4a3c4517ab944ca7d69a16a3e8fedde605ac2f10e67681f516572dc40bb6" Mar 20 07:05:40 crc kubenswrapper[5136]: E0320 07:05:40.344913 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b05c4a3c4517ab944ca7d69a16a3e8fedde605ac2f10e67681f516572dc40bb6\": container with ID starting with b05c4a3c4517ab944ca7d69a16a3e8fedde605ac2f10e67681f516572dc40bb6 not found: ID does not exist" containerID="b05c4a3c4517ab944ca7d69a16a3e8fedde605ac2f10e67681f516572dc40bb6" Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.344943 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b05c4a3c4517ab944ca7d69a16a3e8fedde605ac2f10e67681f516572dc40bb6"} err="failed to get container status \"b05c4a3c4517ab944ca7d69a16a3e8fedde605ac2f10e67681f516572dc40bb6\": rpc error: code = NotFound desc = could not find container \"b05c4a3c4517ab944ca7d69a16a3e8fedde605ac2f10e67681f516572dc40bb6\": container with ID starting with b05c4a3c4517ab944ca7d69a16a3e8fedde605ac2f10e67681f516572dc40bb6 not found: ID does not exist" Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.344962 5136 scope.go:117] "RemoveContainer" containerID="99020cf926b90ef473a75ca9d43847ea5e162fdffd7c1c073e067327673f7a95" Mar 20 07:05:40 crc kubenswrapper[5136]: E0320 07:05:40.349011 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99020cf926b90ef473a75ca9d43847ea5e162fdffd7c1c073e067327673f7a95\": container with ID starting with 99020cf926b90ef473a75ca9d43847ea5e162fdffd7c1c073e067327673f7a95 not found: ID does not exist" containerID="99020cf926b90ef473a75ca9d43847ea5e162fdffd7c1c073e067327673f7a95" Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.349079 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"99020cf926b90ef473a75ca9d43847ea5e162fdffd7c1c073e067327673f7a95"} err="failed to get container status \"99020cf926b90ef473a75ca9d43847ea5e162fdffd7c1c073e067327673f7a95\": rpc error: code = NotFound desc = could not find container \"99020cf926b90ef473a75ca9d43847ea5e162fdffd7c1c073e067327673f7a95\": container with ID starting with 99020cf926b90ef473a75ca9d43847ea5e162fdffd7c1c073e067327673f7a95 not found: ID does not exist" Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.349120 5136 scope.go:117] "RemoveContainer" containerID="bc0eebe03fa0a2eff9b96cce2d6b25860cbf759d0d063c2d2b7e4310b06c1aea" Mar 20 07:05:40 crc kubenswrapper[5136]: E0320 07:05:40.349465 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc0eebe03fa0a2eff9b96cce2d6b25860cbf759d0d063c2d2b7e4310b06c1aea\": container with ID starting with bc0eebe03fa0a2eff9b96cce2d6b25860cbf759d0d063c2d2b7e4310b06c1aea not found: ID does not exist" containerID="bc0eebe03fa0a2eff9b96cce2d6b25860cbf759d0d063c2d2b7e4310b06c1aea" Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.349491 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc0eebe03fa0a2eff9b96cce2d6b25860cbf759d0d063c2d2b7e4310b06c1aea"} err="failed to get container status \"bc0eebe03fa0a2eff9b96cce2d6b25860cbf759d0d063c2d2b7e4310b06c1aea\": rpc error: code = NotFound desc = could not find container \"bc0eebe03fa0a2eff9b96cce2d6b25860cbf759d0d063c2d2b7e4310b06c1aea\": container with ID starting with bc0eebe03fa0a2eff9b96cce2d6b25860cbf759d0d063c2d2b7e4310b06c1aea not found: ID does not exist" Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.403222 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69b32ea3-4438-4807-9a81-41026ec34ad8" path="/var/lib/kubelet/pods/69b32ea3-4438-4807-9a81-41026ec34ad8/volumes" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 
07:05:42.503658 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bv4vd"] Mar 20 07:05:42 crc kubenswrapper[5136]: E0320 07:05:42.504659 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b32ea3-4438-4807-9a81-41026ec34ad8" containerName="extract-utilities" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.504691 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b32ea3-4438-4807-9a81-41026ec34ad8" containerName="extract-utilities" Mar 20 07:05:42 crc kubenswrapper[5136]: E0320 07:05:42.504720 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b32ea3-4438-4807-9a81-41026ec34ad8" containerName="extract-content" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.504737 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b32ea3-4438-4807-9a81-41026ec34ad8" containerName="extract-content" Mar 20 07:05:42 crc kubenswrapper[5136]: E0320 07:05:42.504775 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b32ea3-4438-4807-9a81-41026ec34ad8" containerName="registry-server" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.504791 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b32ea3-4438-4807-9a81-41026ec34ad8" containerName="registry-server" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.505083 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="69b32ea3-4438-4807-9a81-41026ec34ad8" containerName="registry-server" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.506854 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.511621 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bv4vd"] Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.609352 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-utilities\") pod \"certified-operators-bv4vd\" (UID: \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\") " pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.609399 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-catalog-content\") pod \"certified-operators-bv4vd\" (UID: \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\") " pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.609425 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh64q\" (UniqueName: \"kubernetes.io/projected/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-kube-api-access-mh64q\") pod \"certified-operators-bv4vd\" (UID: \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\") " pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.710852 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-utilities\") pod \"certified-operators-bv4vd\" (UID: \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\") " pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.711073 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-catalog-content\") pod \"certified-operators-bv4vd\" (UID: \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\") " pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.711143 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh64q\" (UniqueName: \"kubernetes.io/projected/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-kube-api-access-mh64q\") pod \"certified-operators-bv4vd\" (UID: \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\") " pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.711435 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-utilities\") pod \"certified-operators-bv4vd\" (UID: \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\") " pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.711798 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-catalog-content\") pod \"certified-operators-bv4vd\" (UID: \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\") " pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.732007 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh64q\" (UniqueName: \"kubernetes.io/projected/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-kube-api-access-mh64q\") pod \"certified-operators-bv4vd\" (UID: \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\") " pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.837896 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:43 crc kubenswrapper[5136]: I0320 07:05:43.111984 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bv4vd"] Mar 20 07:05:43 crc kubenswrapper[5136]: W0320 07:05:43.118445 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65f83417_5dc9_4526_bcbe_5927b6ccdd8a.slice/crio-c993b594e4ab541537e13f505766f2f214c3f366d637f3c7b4612000f5d64348 WatchSource:0}: Error finding container c993b594e4ab541537e13f505766f2f214c3f366d637f3c7b4612000f5d64348: Status 404 returned error can't find the container with id c993b594e4ab541537e13f505766f2f214c3f366d637f3c7b4612000f5d64348 Mar 20 07:05:43 crc kubenswrapper[5136]: I0320 07:05:43.255862 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bv4vd" event={"ID":"65f83417-5dc9-4526-bcbe-5927b6ccdd8a","Type":"ContainerStarted","Data":"560a5a98ddeb1e1cdbc90bc720c33ffbb91993891e1882143cd0afaf1df3f5da"} Mar 20 07:05:43 crc kubenswrapper[5136]: I0320 07:05:43.255913 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bv4vd" event={"ID":"65f83417-5dc9-4526-bcbe-5927b6ccdd8a","Type":"ContainerStarted","Data":"c993b594e4ab541537e13f505766f2f214c3f366d637f3c7b4612000f5d64348"} Mar 20 07:05:44 crc kubenswrapper[5136]: I0320 07:05:44.264953 5136 generic.go:334] "Generic (PLEG): container finished" podID="65f83417-5dc9-4526-bcbe-5927b6ccdd8a" containerID="560a5a98ddeb1e1cdbc90bc720c33ffbb91993891e1882143cd0afaf1df3f5da" exitCode=0 Mar 20 07:05:44 crc kubenswrapper[5136]: I0320 07:05:44.265023 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bv4vd" 
event={"ID":"65f83417-5dc9-4526-bcbe-5927b6ccdd8a","Type":"ContainerDied","Data":"560a5a98ddeb1e1cdbc90bc720c33ffbb91993891e1882143cd0afaf1df3f5da"} Mar 20 07:05:46 crc kubenswrapper[5136]: I0320 07:05:46.279149 5136 generic.go:334] "Generic (PLEG): container finished" podID="65f83417-5dc9-4526-bcbe-5927b6ccdd8a" containerID="53f2dbe658afb6223277611ae804aead0552b80729cde29b5a8f3be839f9aea0" exitCode=0 Mar 20 07:05:46 crc kubenswrapper[5136]: I0320 07:05:46.279211 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bv4vd" event={"ID":"65f83417-5dc9-4526-bcbe-5927b6ccdd8a","Type":"ContainerDied","Data":"53f2dbe658afb6223277611ae804aead0552b80729cde29b5a8f3be839f9aea0"} Mar 20 07:05:47 crc kubenswrapper[5136]: I0320 07:05:47.287096 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bv4vd" event={"ID":"65f83417-5dc9-4526-bcbe-5927b6ccdd8a","Type":"ContainerStarted","Data":"54e5fcbcef904af60d1e88e1436565d422350e64f921c705da2a9e462c2e8836"} Mar 20 07:05:47 crc kubenswrapper[5136]: I0320 07:05:47.314336 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bv4vd" podStartSLOduration=2.913591146 podStartE2EDuration="5.314314506s" podCreationTimestamp="2026-03-20 07:05:42 +0000 UTC" firstStartedPulling="2026-03-20 07:05:44.267055233 +0000 UTC m=+976.526366424" lastFinishedPulling="2026-03-20 07:05:46.667778613 +0000 UTC m=+978.927089784" observedRunningTime="2026-03-20 07:05:47.30713048 +0000 UTC m=+979.566441671" watchObservedRunningTime="2026-03-20 07:05:47.314314506 +0000 UTC m=+979.573625677" Mar 20 07:05:49 crc kubenswrapper[5136]: I0320 07:05:49.201703 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-k7799" Mar 20 07:05:52 crc kubenswrapper[5136]: I0320 07:05:52.838113 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:52 crc kubenswrapper[5136]: I0320 07:05:52.838900 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:52 crc kubenswrapper[5136]: I0320 07:05:52.882387 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:53 crc kubenswrapper[5136]: I0320 07:05:53.361443 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:53 crc kubenswrapper[5136]: I0320 07:05:53.411611 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bv4vd"] Mar 20 07:05:55 crc kubenswrapper[5136]: I0320 07:05:55.333586 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bv4vd" podUID="65f83417-5dc9-4526-bcbe-5927b6ccdd8a" containerName="registry-server" containerID="cri-o://54e5fcbcef904af60d1e88e1436565d422350e64f921c705da2a9e462c2e8836" gracePeriod=2 Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.007909 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.183063 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-utilities\") pod \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\" (UID: \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\") " Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.183101 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh64q\" (UniqueName: \"kubernetes.io/projected/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-kube-api-access-mh64q\") pod \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\" (UID: \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\") " Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.183131 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-catalog-content\") pod \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\" (UID: \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\") " Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.185673 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-utilities" (OuterVolumeSpecName: "utilities") pod "65f83417-5dc9-4526-bcbe-5927b6ccdd8a" (UID: "65f83417-5dc9-4526-bcbe-5927b6ccdd8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.190411 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-kube-api-access-mh64q" (OuterVolumeSpecName: "kube-api-access-mh64q") pod "65f83417-5dc9-4526-bcbe-5927b6ccdd8a" (UID: "65f83417-5dc9-4526-bcbe-5927b6ccdd8a"). InnerVolumeSpecName "kube-api-access-mh64q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.248581 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65f83417-5dc9-4526-bcbe-5927b6ccdd8a" (UID: "65f83417-5dc9-4526-bcbe-5927b6ccdd8a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.284406 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.284449 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh64q\" (UniqueName: \"kubernetes.io/projected/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-kube-api-access-mh64q\") on node \"crc\" DevicePath \"\"" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.284463 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.340339 5136 generic.go:334] "Generic (PLEG): container finished" podID="65f83417-5dc9-4526-bcbe-5927b6ccdd8a" containerID="54e5fcbcef904af60d1e88e1436565d422350e64f921c705da2a9e462c2e8836" exitCode=0 Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.340381 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bv4vd" event={"ID":"65f83417-5dc9-4526-bcbe-5927b6ccdd8a","Type":"ContainerDied","Data":"54e5fcbcef904af60d1e88e1436565d422350e64f921c705da2a9e462c2e8836"} Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.340406 5136 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-bv4vd" event={"ID":"65f83417-5dc9-4526-bcbe-5927b6ccdd8a","Type":"ContainerDied","Data":"c993b594e4ab541537e13f505766f2f214c3f366d637f3c7b4612000f5d64348"} Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.340422 5136 scope.go:117] "RemoveContainer" containerID="54e5fcbcef904af60d1e88e1436565d422350e64f921c705da2a9e462c2e8836" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.340512 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.356507 5136 scope.go:117] "RemoveContainer" containerID="53f2dbe658afb6223277611ae804aead0552b80729cde29b5a8f3be839f9aea0" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.365627 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bv4vd"] Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.370350 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bv4vd"] Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.389977 5136 scope.go:117] "RemoveContainer" containerID="560a5a98ddeb1e1cdbc90bc720c33ffbb91993891e1882143cd0afaf1df3f5da" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.402616 5136 scope.go:117] "RemoveContainer" containerID="54e5fcbcef904af60d1e88e1436565d422350e64f921c705da2a9e462c2e8836" Mar 20 07:05:56 crc kubenswrapper[5136]: E0320 07:05:56.403048 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54e5fcbcef904af60d1e88e1436565d422350e64f921c705da2a9e462c2e8836\": container with ID starting with 54e5fcbcef904af60d1e88e1436565d422350e64f921c705da2a9e462c2e8836 not found: ID does not exist" containerID="54e5fcbcef904af60d1e88e1436565d422350e64f921c705da2a9e462c2e8836" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 
07:05:56.403082 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54e5fcbcef904af60d1e88e1436565d422350e64f921c705da2a9e462c2e8836"} err="failed to get container status \"54e5fcbcef904af60d1e88e1436565d422350e64f921c705da2a9e462c2e8836\": rpc error: code = NotFound desc = could not find container \"54e5fcbcef904af60d1e88e1436565d422350e64f921c705da2a9e462c2e8836\": container with ID starting with 54e5fcbcef904af60d1e88e1436565d422350e64f921c705da2a9e462c2e8836 not found: ID does not exist" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.403107 5136 scope.go:117] "RemoveContainer" containerID="53f2dbe658afb6223277611ae804aead0552b80729cde29b5a8f3be839f9aea0" Mar 20 07:05:56 crc kubenswrapper[5136]: E0320 07:05:56.403726 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53f2dbe658afb6223277611ae804aead0552b80729cde29b5a8f3be839f9aea0\": container with ID starting with 53f2dbe658afb6223277611ae804aead0552b80729cde29b5a8f3be839f9aea0 not found: ID does not exist" containerID="53f2dbe658afb6223277611ae804aead0552b80729cde29b5a8f3be839f9aea0" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.403756 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53f2dbe658afb6223277611ae804aead0552b80729cde29b5a8f3be839f9aea0"} err="failed to get container status \"53f2dbe658afb6223277611ae804aead0552b80729cde29b5a8f3be839f9aea0\": rpc error: code = NotFound desc = could not find container \"53f2dbe658afb6223277611ae804aead0552b80729cde29b5a8f3be839f9aea0\": container with ID starting with 53f2dbe658afb6223277611ae804aead0552b80729cde29b5a8f3be839f9aea0 not found: ID does not exist" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.403775 5136 scope.go:117] "RemoveContainer" containerID="560a5a98ddeb1e1cdbc90bc720c33ffbb91993891e1882143cd0afaf1df3f5da" Mar 20 07:05:56 crc 
kubenswrapper[5136]: I0320 07:05:56.404036 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65f83417-5dc9-4526-bcbe-5927b6ccdd8a" path="/var/lib/kubelet/pods/65f83417-5dc9-4526-bcbe-5927b6ccdd8a/volumes" Mar 20 07:05:56 crc kubenswrapper[5136]: E0320 07:05:56.404187 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"560a5a98ddeb1e1cdbc90bc720c33ffbb91993891e1882143cd0afaf1df3f5da\": container with ID starting with 560a5a98ddeb1e1cdbc90bc720c33ffbb91993891e1882143cd0afaf1df3f5da not found: ID does not exist" containerID="560a5a98ddeb1e1cdbc90bc720c33ffbb91993891e1882143cd0afaf1df3f5da" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.404312 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"560a5a98ddeb1e1cdbc90bc720c33ffbb91993891e1882143cd0afaf1df3f5da"} err="failed to get container status \"560a5a98ddeb1e1cdbc90bc720c33ffbb91993891e1882143cd0afaf1df3f5da\": rpc error: code = NotFound desc = could not find container \"560a5a98ddeb1e1cdbc90bc720c33ffbb91993891e1882143cd0afaf1df3f5da\": container with ID starting with 560a5a98ddeb1e1cdbc90bc720c33ffbb91993891e1882143cd0afaf1df3f5da not found: ID does not exist" Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.138312 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566506-bbg6r"] Mar 20 07:06:00 crc kubenswrapper[5136]: E0320 07:06:00.139217 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f83417-5dc9-4526-bcbe-5927b6ccdd8a" containerName="extract-utilities" Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.139234 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f83417-5dc9-4526-bcbe-5927b6ccdd8a" containerName="extract-utilities" Mar 20 07:06:00 crc kubenswrapper[5136]: E0320 07:06:00.139249 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="65f83417-5dc9-4526-bcbe-5927b6ccdd8a" containerName="registry-server" Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.139258 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f83417-5dc9-4526-bcbe-5927b6ccdd8a" containerName="registry-server" Mar 20 07:06:00 crc kubenswrapper[5136]: E0320 07:06:00.139275 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f83417-5dc9-4526-bcbe-5927b6ccdd8a" containerName="extract-content" Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.139284 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f83417-5dc9-4526-bcbe-5927b6ccdd8a" containerName="extract-content" Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.139413 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="65f83417-5dc9-4526-bcbe-5927b6ccdd8a" containerName="registry-server" Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.139880 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566506-bbg6r" Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.145947 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566506-bbg6r"] Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.148031 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.148304 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.151271 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.213053 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-22q85"] Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 
07:06:00.214679 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-22q85" Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.228507 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-22q85"] Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.232698 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmm5s\" (UniqueName: \"kubernetes.io/projected/ca3533ad-761e-45d8-8a1a-0e679b602e08-kube-api-access-zmm5s\") pod \"auto-csr-approver-29566506-bbg6r\" (UID: \"ca3533ad-761e-45d8-8a1a-0e679b602e08\") " pod="openshift-infra/auto-csr-approver-29566506-bbg6r" Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.334354 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33b79f09-8ecc-4d05-87a7-94aa63e461a1-catalog-content\") pod \"community-operators-22q85\" (UID: \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\") " pod="openshift-marketplace/community-operators-22q85" Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.334396 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmm5s\" (UniqueName: \"kubernetes.io/projected/ca3533ad-761e-45d8-8a1a-0e679b602e08-kube-api-access-zmm5s\") pod \"auto-csr-approver-29566506-bbg6r\" (UID: \"ca3533ad-761e-45d8-8a1a-0e679b602e08\") " pod="openshift-infra/auto-csr-approver-29566506-bbg6r" Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.334420 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33b79f09-8ecc-4d05-87a7-94aa63e461a1-utilities\") pod \"community-operators-22q85\" (UID: \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\") " pod="openshift-marketplace/community-operators-22q85" Mar 20 
07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.334440 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwfl2\" (UniqueName: \"kubernetes.io/projected/33b79f09-8ecc-4d05-87a7-94aa63e461a1-kube-api-access-pwfl2\") pod \"community-operators-22q85\" (UID: \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\") " pod="openshift-marketplace/community-operators-22q85" Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.352959 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmm5s\" (UniqueName: \"kubernetes.io/projected/ca3533ad-761e-45d8-8a1a-0e679b602e08-kube-api-access-zmm5s\") pod \"auto-csr-approver-29566506-bbg6r\" (UID: \"ca3533ad-761e-45d8-8a1a-0e679b602e08\") " pod="openshift-infra/auto-csr-approver-29566506-bbg6r" Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.436316 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33b79f09-8ecc-4d05-87a7-94aa63e461a1-catalog-content\") pod \"community-operators-22q85\" (UID: \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\") " pod="openshift-marketplace/community-operators-22q85" Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.436380 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33b79f09-8ecc-4d05-87a7-94aa63e461a1-utilities\") pod \"community-operators-22q85\" (UID: \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\") " pod="openshift-marketplace/community-operators-22q85" Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.436409 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwfl2\" (UniqueName: \"kubernetes.io/projected/33b79f09-8ecc-4d05-87a7-94aa63e461a1-kube-api-access-pwfl2\") pod \"community-operators-22q85\" (UID: \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\") " 
pod="openshift-marketplace/community-operators-22q85" Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.436871 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33b79f09-8ecc-4d05-87a7-94aa63e461a1-catalog-content\") pod \"community-operators-22q85\" (UID: \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\") " pod="openshift-marketplace/community-operators-22q85" Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.436886 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33b79f09-8ecc-4d05-87a7-94aa63e461a1-utilities\") pod \"community-operators-22q85\" (UID: \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\") " pod="openshift-marketplace/community-operators-22q85" Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.454924 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwfl2\" (UniqueName: \"kubernetes.io/projected/33b79f09-8ecc-4d05-87a7-94aa63e461a1-kube-api-access-pwfl2\") pod \"community-operators-22q85\" (UID: \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\") " pod="openshift-marketplace/community-operators-22q85" Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.467567 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566506-bbg6r" Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.531152 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-22q85" Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.829608 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-22q85"] Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.981239 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566506-bbg6r"] Mar 20 07:06:01 crc kubenswrapper[5136]: W0320 07:06:01.013468 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca3533ad_761e_45d8_8a1a_0e679b602e08.slice/crio-5e639bd50b2bc055997e3634a5d52f595d691138c070a81a04011153b4f54dd8 WatchSource:0}: Error finding container 5e639bd50b2bc055997e3634a5d52f595d691138c070a81a04011153b4f54dd8: Status 404 returned error can't find the container with id 5e639bd50b2bc055997e3634a5d52f595d691138c070a81a04011153b4f54dd8 Mar 20 07:06:01 crc kubenswrapper[5136]: I0320 07:06:01.374723 5136 generic.go:334] "Generic (PLEG): container finished" podID="33b79f09-8ecc-4d05-87a7-94aa63e461a1" containerID="4f2549609d32b58f2e83a49791816d97fc18391c6b2c6a432afff935aefafdff" exitCode=0 Mar 20 07:06:01 crc kubenswrapper[5136]: I0320 07:06:01.374765 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22q85" event={"ID":"33b79f09-8ecc-4d05-87a7-94aa63e461a1","Type":"ContainerDied","Data":"4f2549609d32b58f2e83a49791816d97fc18391c6b2c6a432afff935aefafdff"} Mar 20 07:06:01 crc kubenswrapper[5136]: I0320 07:06:01.374823 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22q85" event={"ID":"33b79f09-8ecc-4d05-87a7-94aa63e461a1","Type":"ContainerStarted","Data":"b53b093dff156948f43b9de07506ee639a75c958288136288faaca5060f1f42b"} Mar 20 07:06:01 crc kubenswrapper[5136]: I0320 07:06:01.376789 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29566506-bbg6r" event={"ID":"ca3533ad-761e-45d8-8a1a-0e679b602e08","Type":"ContainerStarted","Data":"5e639bd50b2bc055997e3634a5d52f595d691138c070a81a04011153b4f54dd8"} Mar 20 07:06:02 crc kubenswrapper[5136]: I0320 07:06:02.383379 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566506-bbg6r" event={"ID":"ca3533ad-761e-45d8-8a1a-0e679b602e08","Type":"ContainerStarted","Data":"6bbf8fa191e070a1c91f2d1ea94b4f26f7559168925696c34903d08a5a0065c5"} Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.175233 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566506-bbg6r" podStartSLOduration=2.115247376 podStartE2EDuration="3.175214973s" podCreationTimestamp="2026-03-20 07:06:00 +0000 UTC" firstStartedPulling="2026-03-20 07:06:01.015612775 +0000 UTC m=+993.274923926" lastFinishedPulling="2026-03-20 07:06:02.075580372 +0000 UTC m=+994.334891523" observedRunningTime="2026-03-20 07:06:02.406287282 +0000 UTC m=+994.665598433" watchObservedRunningTime="2026-03-20 07:06:03.175214973 +0000 UTC m=+995.434526124" Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.177659 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn"] Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.178623 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn" Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.180203 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.188892 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn"] Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.374209 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/900e35e2-638e-47f2-8943-1642ed3ccc59-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn\" (UID: \"900e35e2-638e-47f2-8943-1642ed3ccc59\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn" Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.374261 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7wvd\" (UniqueName: \"kubernetes.io/projected/900e35e2-638e-47f2-8943-1642ed3ccc59-kube-api-access-r7wvd\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn\" (UID: \"900e35e2-638e-47f2-8943-1642ed3ccc59\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn" Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.374383 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/900e35e2-638e-47f2-8943-1642ed3ccc59-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn\" (UID: \"900e35e2-638e-47f2-8943-1642ed3ccc59\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn" Mar 20 07:06:03 crc kubenswrapper[5136]: 
I0320 07:06:03.392557 5136 generic.go:334] "Generic (PLEG): container finished" podID="33b79f09-8ecc-4d05-87a7-94aa63e461a1" containerID="eadefc14d91f33ea2874a958959e71a9f09858e989134e7b3b5409ff7dad78a7" exitCode=0 Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.392652 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22q85" event={"ID":"33b79f09-8ecc-4d05-87a7-94aa63e461a1","Type":"ContainerDied","Data":"eadefc14d91f33ea2874a958959e71a9f09858e989134e7b3b5409ff7dad78a7"} Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.396071 5136 generic.go:334] "Generic (PLEG): container finished" podID="ca3533ad-761e-45d8-8a1a-0e679b602e08" containerID="6bbf8fa191e070a1c91f2d1ea94b4f26f7559168925696c34903d08a5a0065c5" exitCode=0 Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.396134 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566506-bbg6r" event={"ID":"ca3533ad-761e-45d8-8a1a-0e679b602e08","Type":"ContainerDied","Data":"6bbf8fa191e070a1c91f2d1ea94b4f26f7559168925696c34903d08a5a0065c5"} Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.474996 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/900e35e2-638e-47f2-8943-1642ed3ccc59-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn\" (UID: \"900e35e2-638e-47f2-8943-1642ed3ccc59\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn" Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.475053 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7wvd\" (UniqueName: \"kubernetes.io/projected/900e35e2-638e-47f2-8943-1642ed3ccc59-kube-api-access-r7wvd\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn\" (UID: \"900e35e2-638e-47f2-8943-1642ed3ccc59\") " 
pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn" Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.475168 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/900e35e2-638e-47f2-8943-1642ed3ccc59-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn\" (UID: \"900e35e2-638e-47f2-8943-1642ed3ccc59\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn" Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.475735 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/900e35e2-638e-47f2-8943-1642ed3ccc59-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn\" (UID: \"900e35e2-638e-47f2-8943-1642ed3ccc59\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn" Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.475800 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/900e35e2-638e-47f2-8943-1642ed3ccc59-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn\" (UID: \"900e35e2-638e-47f2-8943-1642ed3ccc59\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn" Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.492795 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7wvd\" (UniqueName: \"kubernetes.io/projected/900e35e2-638e-47f2-8943-1642ed3ccc59-kube-api-access-r7wvd\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn\" (UID: \"900e35e2-638e-47f2-8943-1642ed3ccc59\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn" Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.536436 5136 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn" Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.931845 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn"] Mar 20 07:06:03 crc kubenswrapper[5136]: W0320 07:06:03.940295 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod900e35e2_638e_47f2_8943_1642ed3ccc59.slice/crio-6dd2228c2dd91ace53fd2f81a50c849531d9e841ff2e25ba7052f7cee86d67af WatchSource:0}: Error finding container 6dd2228c2dd91ace53fd2f81a50c849531d9e841ff2e25ba7052f7cee86d67af: Status 404 returned error can't find the container with id 6dd2228c2dd91ace53fd2f81a50c849531d9e841ff2e25ba7052f7cee86d67af Mar 20 07:06:04 crc kubenswrapper[5136]: I0320 07:06:04.404976 5136 generic.go:334] "Generic (PLEG): container finished" podID="900e35e2-638e-47f2-8943-1642ed3ccc59" containerID="b5b3a6ca6f0030d6bf6693d9a423335e56eaa24afaf72167c11ac747d010daf3" exitCode=0 Mar 20 07:06:04 crc kubenswrapper[5136]: I0320 07:06:04.414241 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22q85" event={"ID":"33b79f09-8ecc-4d05-87a7-94aa63e461a1","Type":"ContainerStarted","Data":"aaa06b5cf3bc8b412bf99f9b9e394a66bdaa643129ad3fa5d3faae95a20fe556"} Mar 20 07:06:04 crc kubenswrapper[5136]: I0320 07:06:04.414282 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn" event={"ID":"900e35e2-638e-47f2-8943-1642ed3ccc59","Type":"ContainerDied","Data":"b5b3a6ca6f0030d6bf6693d9a423335e56eaa24afaf72167c11ac747d010daf3"} Mar 20 07:06:04 crc kubenswrapper[5136]: I0320 07:06:04.414299 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn" event={"ID":"900e35e2-638e-47f2-8943-1642ed3ccc59","Type":"ContainerStarted","Data":"6dd2228c2dd91ace53fd2f81a50c849531d9e841ff2e25ba7052f7cee86d67af"} Mar 20 07:06:04 crc kubenswrapper[5136]: I0320 07:06:04.429530 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-22q85" podStartSLOduration=2.021861287 podStartE2EDuration="4.429510543s" podCreationTimestamp="2026-03-20 07:06:00 +0000 UTC" firstStartedPulling="2026-03-20 07:06:01.377021178 +0000 UTC m=+993.636332339" lastFinishedPulling="2026-03-20 07:06:03.784670444 +0000 UTC m=+996.043981595" observedRunningTime="2026-03-20 07:06:04.425891821 +0000 UTC m=+996.685202972" watchObservedRunningTime="2026-03-20 07:06:04.429510543 +0000 UTC m=+996.688821694" Mar 20 07:06:04 crc kubenswrapper[5136]: I0320 07:06:04.666400 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566506-bbg6r" Mar 20 07:06:04 crc kubenswrapper[5136]: I0320 07:06:04.790617 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmm5s\" (UniqueName: \"kubernetes.io/projected/ca3533ad-761e-45d8-8a1a-0e679b602e08-kube-api-access-zmm5s\") pod \"ca3533ad-761e-45d8-8a1a-0e679b602e08\" (UID: \"ca3533ad-761e-45d8-8a1a-0e679b602e08\") " Mar 20 07:06:04 crc kubenswrapper[5136]: I0320 07:06:04.797116 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca3533ad-761e-45d8-8a1a-0e679b602e08-kube-api-access-zmm5s" (OuterVolumeSpecName: "kube-api-access-zmm5s") pod "ca3533ad-761e-45d8-8a1a-0e679b602e08" (UID: "ca3533ad-761e-45d8-8a1a-0e679b602e08"). InnerVolumeSpecName "kube-api-access-zmm5s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:06:04 crc kubenswrapper[5136]: I0320 07:06:04.892360 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmm5s\" (UniqueName: \"kubernetes.io/projected/ca3533ad-761e-45d8-8a1a-0e679b602e08-kube-api-access-zmm5s\") on node \"crc\" DevicePath \"\"" Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.364336 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-bjqjp" podUID="83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" containerName="console" containerID="cri-o://e11b1384937ac56c4ccd8423745ee02ffbb40cd7a1ae4ab6f48a3d94535c216a" gracePeriod=15 Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.411294 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566506-bbg6r" event={"ID":"ca3533ad-761e-45d8-8a1a-0e679b602e08","Type":"ContainerDied","Data":"5e639bd50b2bc055997e3634a5d52f595d691138c070a81a04011153b4f54dd8"} Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.411335 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e639bd50b2bc055997e3634a5d52f595d691138c070a81a04011153b4f54dd8" Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.411448 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566506-bbg6r" Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.451992 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566500-wd9ph"] Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.457591 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566500-wd9ph"] Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.744606 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-bjqjp_83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448/console/0.log" Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.744920 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.903519 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-serving-cert\") pod \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.903579 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcvzw\" (UniqueName: \"kubernetes.io/projected/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-kube-api-access-pcvzw\") pod \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.903623 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-service-ca\") pod \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.903655 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-trusted-ca-bundle\") pod \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.903674 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-oauth-config\") pod \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.903692 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-config\") pod \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.903737 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-oauth-serving-cert\") pod \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.904598 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" (UID: "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.905506 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-config" (OuterVolumeSpecName: "console-config") pod "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" (UID: "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.905548 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-service-ca" (OuterVolumeSpecName: "service-ca") pod "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" (UID: "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.905713 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" (UID: "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.911786 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" (UID: "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.915553 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-kube-api-access-pcvzw" (OuterVolumeSpecName: "kube-api-access-pcvzw") pod "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" (UID: "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448"). InnerVolumeSpecName "kube-api-access-pcvzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.917979 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" (UID: "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.004776 5136 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.004807 5136 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.004833 5136 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.004841 5136 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.004850 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcvzw\" (UniqueName: \"kubernetes.io/projected/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-kube-api-access-pcvzw\") on node \"crc\" DevicePath \"\"" Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.004859 5136 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-service-ca\") on node \"crc\" DevicePath \"\"" Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.004867 5136 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.409527 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdbefeb1-6fcf-4868-a30e-9fc5a016daf9" path="/var/lib/kubelet/pods/bdbefeb1-6fcf-4868-a30e-9fc5a016daf9/volumes" Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.421207 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-bjqjp_83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448/console/0.log" Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.421288 5136 generic.go:334] "Generic (PLEG): container finished" podID="83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" containerID="e11b1384937ac56c4ccd8423745ee02ffbb40cd7a1ae4ab6f48a3d94535c216a" exitCode=2 Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.421374 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bjqjp" event={"ID":"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448","Type":"ContainerDied","Data":"e11b1384937ac56c4ccd8423745ee02ffbb40cd7a1ae4ab6f48a3d94535c216a"} Mar 20 07:06:06 crc 
kubenswrapper[5136]: I0320 07:06:06.421417 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bjqjp" event={"ID":"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448","Type":"ContainerDied","Data":"1705fa19bd8f5aa96bc704e7afa6e708e4641f98ce0af56ebad4536addf3960e"} Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.421446 5136 scope.go:117] "RemoveContainer" containerID="e11b1384937ac56c4ccd8423745ee02ffbb40cd7a1ae4ab6f48a3d94535c216a" Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.421628 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.426294 5136 generic.go:334] "Generic (PLEG): container finished" podID="900e35e2-638e-47f2-8943-1642ed3ccc59" containerID="ec34ad1ccc36a6e8304ae8f562dd09103af7e148747d2c2f87086dc152342de9" exitCode=0 Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.426333 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn" event={"ID":"900e35e2-638e-47f2-8943-1642ed3ccc59","Type":"ContainerDied","Data":"ec34ad1ccc36a6e8304ae8f562dd09103af7e148747d2c2f87086dc152342de9"} Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.476993 5136 scope.go:117] "RemoveContainer" containerID="e11b1384937ac56c4ccd8423745ee02ffbb40cd7a1ae4ab6f48a3d94535c216a" Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.479066 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-bjqjp"] Mar 20 07:06:06 crc kubenswrapper[5136]: E0320 07:06:06.479371 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e11b1384937ac56c4ccd8423745ee02ffbb40cd7a1ae4ab6f48a3d94535c216a\": container with ID starting with e11b1384937ac56c4ccd8423745ee02ffbb40cd7a1ae4ab6f48a3d94535c216a not found: ID 
does not exist" containerID="e11b1384937ac56c4ccd8423745ee02ffbb40cd7a1ae4ab6f48a3d94535c216a" Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.479515 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e11b1384937ac56c4ccd8423745ee02ffbb40cd7a1ae4ab6f48a3d94535c216a"} err="failed to get container status \"e11b1384937ac56c4ccd8423745ee02ffbb40cd7a1ae4ab6f48a3d94535c216a\": rpc error: code = NotFound desc = could not find container \"e11b1384937ac56c4ccd8423745ee02ffbb40cd7a1ae4ab6f48a3d94535c216a\": container with ID starting with e11b1384937ac56c4ccd8423745ee02ffbb40cd7a1ae4ab6f48a3d94535c216a not found: ID does not exist" Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.480967 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-bjqjp"] Mar 20 07:06:07 crc kubenswrapper[5136]: I0320 07:06:07.438330 5136 generic.go:334] "Generic (PLEG): container finished" podID="900e35e2-638e-47f2-8943-1642ed3ccc59" containerID="dbb919c0995d72952254d8a8d763e19195320f3bc7f5006cdff5290948950f74" exitCode=0 Mar 20 07:06:07 crc kubenswrapper[5136]: I0320 07:06:07.438479 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn" event={"ID":"900e35e2-638e-47f2-8943-1642ed3ccc59","Type":"ContainerDied","Data":"dbb919c0995d72952254d8a8d763e19195320f3bc7f5006cdff5290948950f74"} Mar 20 07:06:08 crc kubenswrapper[5136]: I0320 07:06:08.410765 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" path="/var/lib/kubelet/pods/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448/volumes" Mar 20 07:06:08 crc kubenswrapper[5136]: I0320 07:06:08.743900 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn" Mar 20 07:06:08 crc kubenswrapper[5136]: I0320 07:06:08.844285 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/900e35e2-638e-47f2-8943-1642ed3ccc59-bundle\") pod \"900e35e2-638e-47f2-8943-1642ed3ccc59\" (UID: \"900e35e2-638e-47f2-8943-1642ed3ccc59\") " Mar 20 07:06:08 crc kubenswrapper[5136]: I0320 07:06:08.844326 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/900e35e2-638e-47f2-8943-1642ed3ccc59-util\") pod \"900e35e2-638e-47f2-8943-1642ed3ccc59\" (UID: \"900e35e2-638e-47f2-8943-1642ed3ccc59\") " Mar 20 07:06:08 crc kubenswrapper[5136]: I0320 07:06:08.844483 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7wvd\" (UniqueName: \"kubernetes.io/projected/900e35e2-638e-47f2-8943-1642ed3ccc59-kube-api-access-r7wvd\") pod \"900e35e2-638e-47f2-8943-1642ed3ccc59\" (UID: \"900e35e2-638e-47f2-8943-1642ed3ccc59\") " Mar 20 07:06:08 crc kubenswrapper[5136]: I0320 07:06:08.845669 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/900e35e2-638e-47f2-8943-1642ed3ccc59-bundle" (OuterVolumeSpecName: "bundle") pod "900e35e2-638e-47f2-8943-1642ed3ccc59" (UID: "900e35e2-638e-47f2-8943-1642ed3ccc59"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:06:08 crc kubenswrapper[5136]: I0320 07:06:08.851976 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/900e35e2-638e-47f2-8943-1642ed3ccc59-kube-api-access-r7wvd" (OuterVolumeSpecName: "kube-api-access-r7wvd") pod "900e35e2-638e-47f2-8943-1642ed3ccc59" (UID: "900e35e2-638e-47f2-8943-1642ed3ccc59"). InnerVolumeSpecName "kube-api-access-r7wvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:06:08 crc kubenswrapper[5136]: I0320 07:06:08.865788 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/900e35e2-638e-47f2-8943-1642ed3ccc59-util" (OuterVolumeSpecName: "util") pod "900e35e2-638e-47f2-8943-1642ed3ccc59" (UID: "900e35e2-638e-47f2-8943-1642ed3ccc59"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:06:08 crc kubenswrapper[5136]: I0320 07:06:08.947590 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7wvd\" (UniqueName: \"kubernetes.io/projected/900e35e2-638e-47f2-8943-1642ed3ccc59-kube-api-access-r7wvd\") on node \"crc\" DevicePath \"\"" Mar 20 07:06:08 crc kubenswrapper[5136]: I0320 07:06:08.947620 5136 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/900e35e2-638e-47f2-8943-1642ed3ccc59-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:06:08 crc kubenswrapper[5136]: I0320 07:06:08.947633 5136 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/900e35e2-638e-47f2-8943-1642ed3ccc59-util\") on node \"crc\" DevicePath \"\"" Mar 20 07:06:09 crc kubenswrapper[5136]: I0320 07:06:09.474600 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn" event={"ID":"900e35e2-638e-47f2-8943-1642ed3ccc59","Type":"ContainerDied","Data":"6dd2228c2dd91ace53fd2f81a50c849531d9e841ff2e25ba7052f7cee86d67af"} Mar 20 07:06:09 crc kubenswrapper[5136]: I0320 07:06:09.474650 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dd2228c2dd91ace53fd2f81a50c849531d9e841ff2e25ba7052f7cee86d67af" Mar 20 07:06:09 crc kubenswrapper[5136]: I0320 07:06:09.474744 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn" Mar 20 07:06:09 crc kubenswrapper[5136]: E0320 07:06:09.519980 5136 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod900e35e2_638e_47f2_8943_1642ed3ccc59.slice\": RecentStats: unable to find data in memory cache]" Mar 20 07:06:10 crc kubenswrapper[5136]: I0320 07:06:10.532073 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-22q85" Mar 20 07:06:10 crc kubenswrapper[5136]: I0320 07:06:10.532390 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-22q85" Mar 20 07:06:10 crc kubenswrapper[5136]: I0320 07:06:10.577052 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-22q85" Mar 20 07:06:11 crc kubenswrapper[5136]: I0320 07:06:11.543296 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-22q85" Mar 20 07:06:12 crc kubenswrapper[5136]: I0320 07:06:12.730105 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-22q85"] Mar 20 07:06:13 crc kubenswrapper[5136]: I0320 07:06:13.495149 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-22q85" podUID="33b79f09-8ecc-4d05-87a7-94aa63e461a1" containerName="registry-server" containerID="cri-o://aaa06b5cf3bc8b412bf99f9b9e394a66bdaa643129ad3fa5d3faae95a20fe556" gracePeriod=2 Mar 20 07:06:13 crc kubenswrapper[5136]: I0320 07:06:13.855785 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-22q85" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.010540 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33b79f09-8ecc-4d05-87a7-94aa63e461a1-catalog-content\") pod \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\" (UID: \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\") " Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.010629 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33b79f09-8ecc-4d05-87a7-94aa63e461a1-utilities\") pod \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\" (UID: \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\") " Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.010659 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwfl2\" (UniqueName: \"kubernetes.io/projected/33b79f09-8ecc-4d05-87a7-94aa63e461a1-kube-api-access-pwfl2\") pod \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\" (UID: \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\") " Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.011669 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33b79f09-8ecc-4d05-87a7-94aa63e461a1-utilities" (OuterVolumeSpecName: "utilities") pod "33b79f09-8ecc-4d05-87a7-94aa63e461a1" (UID: "33b79f09-8ecc-4d05-87a7-94aa63e461a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.017712 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33b79f09-8ecc-4d05-87a7-94aa63e461a1-kube-api-access-pwfl2" (OuterVolumeSpecName: "kube-api-access-pwfl2") pod "33b79f09-8ecc-4d05-87a7-94aa63e461a1" (UID: "33b79f09-8ecc-4d05-87a7-94aa63e461a1"). InnerVolumeSpecName "kube-api-access-pwfl2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.086869 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33b79f09-8ecc-4d05-87a7-94aa63e461a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33b79f09-8ecc-4d05-87a7-94aa63e461a1" (UID: "33b79f09-8ecc-4d05-87a7-94aa63e461a1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.111678 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33b79f09-8ecc-4d05-87a7-94aa63e461a1-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.111714 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwfl2\" (UniqueName: \"kubernetes.io/projected/33b79f09-8ecc-4d05-87a7-94aa63e461a1-kube-api-access-pwfl2\") on node \"crc\" DevicePath \"\"" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.111729 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33b79f09-8ecc-4d05-87a7-94aa63e461a1-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.503223 5136 generic.go:334] "Generic (PLEG): container finished" podID="33b79f09-8ecc-4d05-87a7-94aa63e461a1" containerID="aaa06b5cf3bc8b412bf99f9b9e394a66bdaa643129ad3fa5d3faae95a20fe556" exitCode=0 Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.503314 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-22q85" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.503319 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22q85" event={"ID":"33b79f09-8ecc-4d05-87a7-94aa63e461a1","Type":"ContainerDied","Data":"aaa06b5cf3bc8b412bf99f9b9e394a66bdaa643129ad3fa5d3faae95a20fe556"} Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.504049 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22q85" event={"ID":"33b79f09-8ecc-4d05-87a7-94aa63e461a1","Type":"ContainerDied","Data":"b53b093dff156948f43b9de07506ee639a75c958288136288faaca5060f1f42b"} Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.504082 5136 scope.go:117] "RemoveContainer" containerID="aaa06b5cf3bc8b412bf99f9b9e394a66bdaa643129ad3fa5d3faae95a20fe556" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.523862 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-22q85"] Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.525348 5136 scope.go:117] "RemoveContainer" containerID="eadefc14d91f33ea2874a958959e71a9f09858e989134e7b3b5409ff7dad78a7" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.529716 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-22q85"] Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.546839 5136 scope.go:117] "RemoveContainer" containerID="4f2549609d32b58f2e83a49791816d97fc18391c6b2c6a432afff935aefafdff" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.575723 5136 scope.go:117] "RemoveContainer" containerID="aaa06b5cf3bc8b412bf99f9b9e394a66bdaa643129ad3fa5d3faae95a20fe556" Mar 20 07:06:14 crc kubenswrapper[5136]: E0320 07:06:14.577237 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"aaa06b5cf3bc8b412bf99f9b9e394a66bdaa643129ad3fa5d3faae95a20fe556\": container with ID starting with aaa06b5cf3bc8b412bf99f9b9e394a66bdaa643129ad3fa5d3faae95a20fe556 not found: ID does not exist" containerID="aaa06b5cf3bc8b412bf99f9b9e394a66bdaa643129ad3fa5d3faae95a20fe556" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.577286 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaa06b5cf3bc8b412bf99f9b9e394a66bdaa643129ad3fa5d3faae95a20fe556"} err="failed to get container status \"aaa06b5cf3bc8b412bf99f9b9e394a66bdaa643129ad3fa5d3faae95a20fe556\": rpc error: code = NotFound desc = could not find container \"aaa06b5cf3bc8b412bf99f9b9e394a66bdaa643129ad3fa5d3faae95a20fe556\": container with ID starting with aaa06b5cf3bc8b412bf99f9b9e394a66bdaa643129ad3fa5d3faae95a20fe556 not found: ID does not exist" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.577309 5136 scope.go:117] "RemoveContainer" containerID="eadefc14d91f33ea2874a958959e71a9f09858e989134e7b3b5409ff7dad78a7" Mar 20 07:06:14 crc kubenswrapper[5136]: E0320 07:06:14.577621 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eadefc14d91f33ea2874a958959e71a9f09858e989134e7b3b5409ff7dad78a7\": container with ID starting with eadefc14d91f33ea2874a958959e71a9f09858e989134e7b3b5409ff7dad78a7 not found: ID does not exist" containerID="eadefc14d91f33ea2874a958959e71a9f09858e989134e7b3b5409ff7dad78a7" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.577658 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eadefc14d91f33ea2874a958959e71a9f09858e989134e7b3b5409ff7dad78a7"} err="failed to get container status \"eadefc14d91f33ea2874a958959e71a9f09858e989134e7b3b5409ff7dad78a7\": rpc error: code = NotFound desc = could not find container \"eadefc14d91f33ea2874a958959e71a9f09858e989134e7b3b5409ff7dad78a7\": container with ID 
starting with eadefc14d91f33ea2874a958959e71a9f09858e989134e7b3b5409ff7dad78a7 not found: ID does not exist" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.577675 5136 scope.go:117] "RemoveContainer" containerID="4f2549609d32b58f2e83a49791816d97fc18391c6b2c6a432afff935aefafdff" Mar 20 07:06:14 crc kubenswrapper[5136]: E0320 07:06:14.578117 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f2549609d32b58f2e83a49791816d97fc18391c6b2c6a432afff935aefafdff\": container with ID starting with 4f2549609d32b58f2e83a49791816d97fc18391c6b2c6a432afff935aefafdff not found: ID does not exist" containerID="4f2549609d32b58f2e83a49791816d97fc18391c6b2c6a432afff935aefafdff" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.578187 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f2549609d32b58f2e83a49791816d97fc18391c6b2c6a432afff935aefafdff"} err="failed to get container status \"4f2549609d32b58f2e83a49791816d97fc18391c6b2c6a432afff935aefafdff\": rpc error: code = NotFound desc = could not find container \"4f2549609d32b58f2e83a49791816d97fc18391c6b2c6a432afff935aefafdff\": container with ID starting with 4f2549609d32b58f2e83a49791816d97fc18391c6b2c6a432afff935aefafdff not found: ID does not exist" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.239026 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn"] Mar 20 07:06:16 crc kubenswrapper[5136]: E0320 07:06:16.239476 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33b79f09-8ecc-4d05-87a7-94aa63e461a1" containerName="extract-content" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.239486 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="33b79f09-8ecc-4d05-87a7-94aa63e461a1" containerName="extract-content" Mar 20 07:06:16 crc kubenswrapper[5136]: E0320 07:06:16.239500 
5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" containerName="console" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.239505 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" containerName="console" Mar 20 07:06:16 crc kubenswrapper[5136]: E0320 07:06:16.239514 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900e35e2-638e-47f2-8943-1642ed3ccc59" containerName="util" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.239520 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="900e35e2-638e-47f2-8943-1642ed3ccc59" containerName="util" Mar 20 07:06:16 crc kubenswrapper[5136]: E0320 07:06:16.239531 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca3533ad-761e-45d8-8a1a-0e679b602e08" containerName="oc" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.239539 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca3533ad-761e-45d8-8a1a-0e679b602e08" containerName="oc" Mar 20 07:06:16 crc kubenswrapper[5136]: E0320 07:06:16.239545 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900e35e2-638e-47f2-8943-1642ed3ccc59" containerName="extract" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.239550 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="900e35e2-638e-47f2-8943-1642ed3ccc59" containerName="extract" Mar 20 07:06:16 crc kubenswrapper[5136]: E0320 07:06:16.239559 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33b79f09-8ecc-4d05-87a7-94aa63e461a1" containerName="extract-utilities" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.239566 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="33b79f09-8ecc-4d05-87a7-94aa63e461a1" containerName="extract-utilities" Mar 20 07:06:16 crc kubenswrapper[5136]: E0320 07:06:16.239574 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="33b79f09-8ecc-4d05-87a7-94aa63e461a1" containerName="registry-server" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.239580 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="33b79f09-8ecc-4d05-87a7-94aa63e461a1" containerName="registry-server" Mar 20 07:06:16 crc kubenswrapper[5136]: E0320 07:06:16.239589 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900e35e2-638e-47f2-8943-1642ed3ccc59" containerName="pull" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.239594 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="900e35e2-638e-47f2-8943-1642ed3ccc59" containerName="pull" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.239684 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="33b79f09-8ecc-4d05-87a7-94aa63e461a1" containerName="registry-server" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.239692 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca3533ad-761e-45d8-8a1a-0e679b602e08" containerName="oc" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.239701 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" containerName="console" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.239714 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="900e35e2-638e-47f2-8943-1642ed3ccc59" containerName="extract" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.240071 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.244418 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.244595 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.245074 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.247926 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.252574 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-4kvhj" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.256256 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn"] Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.340077 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8738cb21-39f9-4eeb-90fc-f512d95642f3-webhook-cert\") pod \"metallb-operator-controller-manager-76dc698dd8-wkrqn\" (UID: \"8738cb21-39f9-4eeb-90fc-f512d95642f3\") " pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.340174 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8738cb21-39f9-4eeb-90fc-f512d95642f3-apiservice-cert\") pod \"metallb-operator-controller-manager-76dc698dd8-wkrqn\" (UID: 
\"8738cb21-39f9-4eeb-90fc-f512d95642f3\") " pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.340467 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82qlz\" (UniqueName: \"kubernetes.io/projected/8738cb21-39f9-4eeb-90fc-f512d95642f3-kube-api-access-82qlz\") pod \"metallb-operator-controller-manager-76dc698dd8-wkrqn\" (UID: \"8738cb21-39f9-4eeb-90fc-f512d95642f3\") " pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.403339 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33b79f09-8ecc-4d05-87a7-94aa63e461a1" path="/var/lib/kubelet/pods/33b79f09-8ecc-4d05-87a7-94aa63e461a1/volumes" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.441502 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82qlz\" (UniqueName: \"kubernetes.io/projected/8738cb21-39f9-4eeb-90fc-f512d95642f3-kube-api-access-82qlz\") pod \"metallb-operator-controller-manager-76dc698dd8-wkrqn\" (UID: \"8738cb21-39f9-4eeb-90fc-f512d95642f3\") " pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.441548 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8738cb21-39f9-4eeb-90fc-f512d95642f3-webhook-cert\") pod \"metallb-operator-controller-manager-76dc698dd8-wkrqn\" (UID: \"8738cb21-39f9-4eeb-90fc-f512d95642f3\") " pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.441584 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8738cb21-39f9-4eeb-90fc-f512d95642f3-apiservice-cert\") pod 
\"metallb-operator-controller-manager-76dc698dd8-wkrqn\" (UID: \"8738cb21-39f9-4eeb-90fc-f512d95642f3\") " pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.452851 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8738cb21-39f9-4eeb-90fc-f512d95642f3-apiservice-cert\") pod \"metallb-operator-controller-manager-76dc698dd8-wkrqn\" (UID: \"8738cb21-39f9-4eeb-90fc-f512d95642f3\") " pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.456456 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82qlz\" (UniqueName: \"kubernetes.io/projected/8738cb21-39f9-4eeb-90fc-f512d95642f3-kube-api-access-82qlz\") pod \"metallb-operator-controller-manager-76dc698dd8-wkrqn\" (UID: \"8738cb21-39f9-4eeb-90fc-f512d95642f3\") " pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.464695 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8738cb21-39f9-4eeb-90fc-f512d95642f3-webhook-cert\") pod \"metallb-operator-controller-manager-76dc698dd8-wkrqn\" (UID: \"8738cb21-39f9-4eeb-90fc-f512d95642f3\") " pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.555718 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.580276 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-787f65f959-lkczj"] Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.581121 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.582948 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.583384 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.583683 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-bsmt8" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.645645 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfdst\" (UniqueName: \"kubernetes.io/projected/f9ad7722-3864-444d-92a1-235de7707fe4-kube-api-access-vfdst\") pod \"metallb-operator-webhook-server-787f65f959-lkczj\" (UID: \"f9ad7722-3864-444d-92a1-235de7707fe4\") " pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.645690 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f9ad7722-3864-444d-92a1-235de7707fe4-webhook-cert\") pod \"metallb-operator-webhook-server-787f65f959-lkczj\" (UID: \"f9ad7722-3864-444d-92a1-235de7707fe4\") " pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 
07:06:16.645763 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f9ad7722-3864-444d-92a1-235de7707fe4-apiservice-cert\") pod \"metallb-operator-webhook-server-787f65f959-lkczj\" (UID: \"f9ad7722-3864-444d-92a1-235de7707fe4\") " pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.656389 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-787f65f959-lkczj"] Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.747253 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f9ad7722-3864-444d-92a1-235de7707fe4-apiservice-cert\") pod \"metallb-operator-webhook-server-787f65f959-lkczj\" (UID: \"f9ad7722-3864-444d-92a1-235de7707fe4\") " pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.747647 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfdst\" (UniqueName: \"kubernetes.io/projected/f9ad7722-3864-444d-92a1-235de7707fe4-kube-api-access-vfdst\") pod \"metallb-operator-webhook-server-787f65f959-lkczj\" (UID: \"f9ad7722-3864-444d-92a1-235de7707fe4\") " pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.747676 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f9ad7722-3864-444d-92a1-235de7707fe4-webhook-cert\") pod \"metallb-operator-webhook-server-787f65f959-lkczj\" (UID: \"f9ad7722-3864-444d-92a1-235de7707fe4\") " pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.753047 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f9ad7722-3864-444d-92a1-235de7707fe4-apiservice-cert\") pod \"metallb-operator-webhook-server-787f65f959-lkczj\" (UID: \"f9ad7722-3864-444d-92a1-235de7707fe4\") " pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.753143 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f9ad7722-3864-444d-92a1-235de7707fe4-webhook-cert\") pod \"metallb-operator-webhook-server-787f65f959-lkczj\" (UID: \"f9ad7722-3864-444d-92a1-235de7707fe4\") " pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.773962 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfdst\" (UniqueName: \"kubernetes.io/projected/f9ad7722-3864-444d-92a1-235de7707fe4-kube-api-access-vfdst\") pod \"metallb-operator-webhook-server-787f65f959-lkczj\" (UID: \"f9ad7722-3864-444d-92a1-235de7707fe4\") " pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.801853 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn"] Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.924916 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" Mar 20 07:06:17 crc kubenswrapper[5136]: I0320 07:06:17.175741 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-787f65f959-lkczj"] Mar 20 07:06:17 crc kubenswrapper[5136]: W0320 07:06:17.178718 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9ad7722_3864_444d_92a1_235de7707fe4.slice/crio-d0f3e032e3a10434097984e90c2cb1eddd6117692de0b2915f701fe2544b7331 WatchSource:0}: Error finding container d0f3e032e3a10434097984e90c2cb1eddd6117692de0b2915f701fe2544b7331: Status 404 returned error can't find the container with id d0f3e032e3a10434097984e90c2cb1eddd6117692de0b2915f701fe2544b7331 Mar 20 07:06:17 crc kubenswrapper[5136]: I0320 07:06:17.531798 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" event={"ID":"f9ad7722-3864-444d-92a1-235de7707fe4","Type":"ContainerStarted","Data":"d0f3e032e3a10434097984e90c2cb1eddd6117692de0b2915f701fe2544b7331"} Mar 20 07:06:17 crc kubenswrapper[5136]: I0320 07:06:17.532740 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" event={"ID":"8738cb21-39f9-4eeb-90fc-f512d95642f3","Type":"ContainerStarted","Data":"759aed0e0b086233ee262f294980df967caf8298aca7b9bf5d7cf8926c36d601"} Mar 20 07:06:21 crc kubenswrapper[5136]: I0320 07:06:21.565659 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" event={"ID":"8738cb21-39f9-4eeb-90fc-f512d95642f3","Type":"ContainerStarted","Data":"a2fdc59d0327c0c66c3ba10c85c0cf40992d46cf4893f53cf3095f2ef3e91d72"} Mar 20 07:06:21 crc kubenswrapper[5136]: I0320 07:06:21.566178 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" Mar 20 07:06:21 crc kubenswrapper[5136]: I0320 07:06:21.597796 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" podStartSLOduration=1.911725082 podStartE2EDuration="5.597771596s" podCreationTimestamp="2026-03-20 07:06:16 +0000 UTC" firstStartedPulling="2026-03-20 07:06:16.815018973 +0000 UTC m=+1009.074330124" lastFinishedPulling="2026-03-20 07:06:20.501065487 +0000 UTC m=+1012.760376638" observedRunningTime="2026-03-20 07:06:21.586598436 +0000 UTC m=+1013.845909607" watchObservedRunningTime="2026-03-20 07:06:21.597771596 +0000 UTC m=+1013.857082757" Mar 20 07:06:22 crc kubenswrapper[5136]: I0320 07:06:22.571761 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" event={"ID":"f9ad7722-3864-444d-92a1-235de7707fe4","Type":"ContainerStarted","Data":"38d6b2c2a1fc1a785e809cf5e6cb53cc76a1dd5bbc8e37365b9df844dc1ea77d"} Mar 20 07:06:22 crc kubenswrapper[5136]: I0320 07:06:22.592049 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" podStartSLOduration=1.690092353 podStartE2EDuration="6.592024443s" podCreationTimestamp="2026-03-20 07:06:16 +0000 UTC" firstStartedPulling="2026-03-20 07:06:17.182139075 +0000 UTC m=+1009.441450226" lastFinishedPulling="2026-03-20 07:06:22.084071165 +0000 UTC m=+1014.343382316" observedRunningTime="2026-03-20 07:06:22.586236541 +0000 UTC m=+1014.845547692" watchObservedRunningTime="2026-03-20 07:06:22.592024443 +0000 UTC m=+1014.851335594" Mar 20 07:06:23 crc kubenswrapper[5136]: I0320 07:06:23.578067 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" Mar 20 07:06:29 crc kubenswrapper[5136]: I0320 07:06:29.120251 5136 
scope.go:117] "RemoveContainer" containerID="c700468779627f6961723a07d9133659d892564be897053e621e205bd14c1cbb" Mar 20 07:06:36 crc kubenswrapper[5136]: I0320 07:06:36.937234 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" Mar 20 07:06:56 crc kubenswrapper[5136]: I0320 07:06:56.558492 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.305888 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-bjq5z"] Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.308195 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.309904 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.310098 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-vlhfz" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.310253 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.316208 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm"] Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.317070 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.318481 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.332782 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm"] Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.381876 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-nrftr"] Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.382999 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-nrftr" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.385076 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.385121 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.385428 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.385621 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-ngtp6" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.390917 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-dzzhq"] Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.391684 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-dzzhq" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.393058 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.404066 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-dzzhq"] Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495276 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/11c03832-f8fc-4790-98f6-43290c528ce9-frr-sockets\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495316 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mt6q\" (UniqueName: \"kubernetes.io/projected/4c981a48-1ae6-4c06-90ed-4333de6a14d2-kube-api-access-9mt6q\") pod \"controller-7bb4cc7c98-dzzhq\" (UID: \"4c981a48-1ae6-4c06-90ed-4333de6a14d2\") " pod="metallb-system/controller-7bb4cc7c98-dzzhq" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495340 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/037785f1-4827-4473-8997-20cdc8fec776-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-b8fzm\" (UID: \"037785f1-4827-4473-8997-20cdc8fec776\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495378 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d54436ca-ad6f-41c2-ae88-703f150229fc-memberlist\") pod \"speaker-nrftr\" (UID: \"d54436ca-ad6f-41c2-ae88-703f150229fc\") " pod="metallb-system/speaker-nrftr" 
Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495399 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/11c03832-f8fc-4790-98f6-43290c528ce9-frr-conf\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495416 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d54436ca-ad6f-41c2-ae88-703f150229fc-metrics-certs\") pod \"speaker-nrftr\" (UID: \"d54436ca-ad6f-41c2-ae88-703f150229fc\") " pod="metallb-system/speaker-nrftr" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495433 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c981a48-1ae6-4c06-90ed-4333de6a14d2-cert\") pod \"controller-7bb4cc7c98-dzzhq\" (UID: \"4c981a48-1ae6-4c06-90ed-4333de6a14d2\") " pod="metallb-system/controller-7bb4cc7c98-dzzhq" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495450 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d54436ca-ad6f-41c2-ae88-703f150229fc-metallb-excludel2\") pod \"speaker-nrftr\" (UID: \"d54436ca-ad6f-41c2-ae88-703f150229fc\") " pod="metallb-system/speaker-nrftr" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495465 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk5jk\" (UniqueName: \"kubernetes.io/projected/11c03832-f8fc-4790-98f6-43290c528ce9-kube-api-access-bk5jk\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495487 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6592\" (UniqueName: \"kubernetes.io/projected/d54436ca-ad6f-41c2-ae88-703f150229fc-kube-api-access-n6592\") pod \"speaker-nrftr\" (UID: \"d54436ca-ad6f-41c2-ae88-703f150229fc\") " pod="metallb-system/speaker-nrftr" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495504 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c981a48-1ae6-4c06-90ed-4333de6a14d2-metrics-certs\") pod \"controller-7bb4cc7c98-dzzhq\" (UID: \"4c981a48-1ae6-4c06-90ed-4333de6a14d2\") " pod="metallb-system/controller-7bb4cc7c98-dzzhq" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495518 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/11c03832-f8fc-4790-98f6-43290c528ce9-metrics\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495549 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wljk9\" (UniqueName: \"kubernetes.io/projected/037785f1-4827-4473-8997-20cdc8fec776-kube-api-access-wljk9\") pod \"frr-k8s-webhook-server-bcc4b6f68-b8fzm\" (UID: \"037785f1-4827-4473-8997-20cdc8fec776\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495565 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/11c03832-f8fc-4790-98f6-43290c528ce9-reloader\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495582 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/11c03832-f8fc-4790-98f6-43290c528ce9-frr-startup\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495599 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11c03832-f8fc-4790-98f6-43290c528ce9-metrics-certs\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597068 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wljk9\" (UniqueName: \"kubernetes.io/projected/037785f1-4827-4473-8997-20cdc8fec776-kube-api-access-wljk9\") pod \"frr-k8s-webhook-server-bcc4b6f68-b8fzm\" (UID: \"037785f1-4827-4473-8997-20cdc8fec776\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597116 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/11c03832-f8fc-4790-98f6-43290c528ce9-reloader\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597155 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/11c03832-f8fc-4790-98f6-43290c528ce9-frr-startup\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597200 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/11c03832-f8fc-4790-98f6-43290c528ce9-metrics-certs\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597256 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/11c03832-f8fc-4790-98f6-43290c528ce9-frr-sockets\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597280 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mt6q\" (UniqueName: \"kubernetes.io/projected/4c981a48-1ae6-4c06-90ed-4333de6a14d2-kube-api-access-9mt6q\") pod \"controller-7bb4cc7c98-dzzhq\" (UID: \"4c981a48-1ae6-4c06-90ed-4333de6a14d2\") " pod="metallb-system/controller-7bb4cc7c98-dzzhq" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597308 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/037785f1-4827-4473-8997-20cdc8fec776-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-b8fzm\" (UID: \"037785f1-4827-4473-8997-20cdc8fec776\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597344 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d54436ca-ad6f-41c2-ae88-703f150229fc-memberlist\") pod \"speaker-nrftr\" (UID: \"d54436ca-ad6f-41c2-ae88-703f150229fc\") " pod="metallb-system/speaker-nrftr" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597390 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/11c03832-f8fc-4790-98f6-43290c528ce9-frr-conf\") pod \"frr-k8s-bjq5z\" (UID: 
\"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597411 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d54436ca-ad6f-41c2-ae88-703f150229fc-metrics-certs\") pod \"speaker-nrftr\" (UID: \"d54436ca-ad6f-41c2-ae88-703f150229fc\") " pod="metallb-system/speaker-nrftr" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597441 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c981a48-1ae6-4c06-90ed-4333de6a14d2-cert\") pod \"controller-7bb4cc7c98-dzzhq\" (UID: \"4c981a48-1ae6-4c06-90ed-4333de6a14d2\") " pod="metallb-system/controller-7bb4cc7c98-dzzhq" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597485 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d54436ca-ad6f-41c2-ae88-703f150229fc-metallb-excludel2\") pod \"speaker-nrftr\" (UID: \"d54436ca-ad6f-41c2-ae88-703f150229fc\") " pod="metallb-system/speaker-nrftr" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597505 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk5jk\" (UniqueName: \"kubernetes.io/projected/11c03832-f8fc-4790-98f6-43290c528ce9-kube-api-access-bk5jk\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597533 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6592\" (UniqueName: \"kubernetes.io/projected/d54436ca-ad6f-41c2-ae88-703f150229fc-kube-api-access-n6592\") pod \"speaker-nrftr\" (UID: \"d54436ca-ad6f-41c2-ae88-703f150229fc\") " pod="metallb-system/speaker-nrftr" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597555 
5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c981a48-1ae6-4c06-90ed-4333de6a14d2-metrics-certs\") pod \"controller-7bb4cc7c98-dzzhq\" (UID: \"4c981a48-1ae6-4c06-90ed-4333de6a14d2\") " pod="metallb-system/controller-7bb4cc7c98-dzzhq" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597577 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/11c03832-f8fc-4790-98f6-43290c528ce9-metrics\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.598013 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/11c03832-f8fc-4790-98f6-43290c528ce9-metrics\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.599479 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/11c03832-f8fc-4790-98f6-43290c528ce9-frr-conf\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.599487 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/11c03832-f8fc-4790-98f6-43290c528ce9-reloader\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: E0320 07:06:57.599573 5136 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 20 07:06:57 crc kubenswrapper[5136]: E0320 07:06:57.599631 5136 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/d54436ca-ad6f-41c2-ae88-703f150229fc-memberlist podName:d54436ca-ad6f-41c2-ae88-703f150229fc nodeName:}" failed. No retries permitted until 2026-03-20 07:06:58.099613481 +0000 UTC m=+1050.358924632 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d54436ca-ad6f-41c2-ae88-703f150229fc-memberlist") pod "speaker-nrftr" (UID: "d54436ca-ad6f-41c2-ae88-703f150229fc") : secret "metallb-memberlist" not found Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.599992 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/11c03832-f8fc-4790-98f6-43290c528ce9-frr-sockets\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.600069 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d54436ca-ad6f-41c2-ae88-703f150229fc-metallb-excludel2\") pod \"speaker-nrftr\" (UID: \"d54436ca-ad6f-41c2-ae88-703f150229fc\") " pod="metallb-system/speaker-nrftr" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.600078 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/11c03832-f8fc-4790-98f6-43290c528ce9-frr-startup\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.604066 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.604722 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11c03832-f8fc-4790-98f6-43290c528ce9-metrics-certs\") pod 
\"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.605924 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d54436ca-ad6f-41c2-ae88-703f150229fc-metrics-certs\") pod \"speaker-nrftr\" (UID: \"d54436ca-ad6f-41c2-ae88-703f150229fc\") " pod="metallb-system/speaker-nrftr" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.618705 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c981a48-1ae6-4c06-90ed-4333de6a14d2-metrics-certs\") pod \"controller-7bb4cc7c98-dzzhq\" (UID: \"4c981a48-1ae6-4c06-90ed-4333de6a14d2\") " pod="metallb-system/controller-7bb4cc7c98-dzzhq" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.618892 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mt6q\" (UniqueName: \"kubernetes.io/projected/4c981a48-1ae6-4c06-90ed-4333de6a14d2-kube-api-access-9mt6q\") pod \"controller-7bb4cc7c98-dzzhq\" (UID: \"4c981a48-1ae6-4c06-90ed-4333de6a14d2\") " pod="metallb-system/controller-7bb4cc7c98-dzzhq" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.620187 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c981a48-1ae6-4c06-90ed-4333de6a14d2-cert\") pod \"controller-7bb4cc7c98-dzzhq\" (UID: \"4c981a48-1ae6-4c06-90ed-4333de6a14d2\") " pod="metallb-system/controller-7bb4cc7c98-dzzhq" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.620507 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wljk9\" (UniqueName: \"kubernetes.io/projected/037785f1-4827-4473-8997-20cdc8fec776-kube-api-access-wljk9\") pod \"frr-k8s-webhook-server-bcc4b6f68-b8fzm\" (UID: \"037785f1-4827-4473-8997-20cdc8fec776\") " 
pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.621344 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/037785f1-4827-4473-8997-20cdc8fec776-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-b8fzm\" (UID: \"037785f1-4827-4473-8997-20cdc8fec776\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.624761 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk5jk\" (UniqueName: \"kubernetes.io/projected/11c03832-f8fc-4790-98f6-43290c528ce9-kube-api-access-bk5jk\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.630752 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.635218 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6592\" (UniqueName: \"kubernetes.io/projected/d54436ca-ad6f-41c2-ae88-703f150229fc-kube-api-access-n6592\") pod \"speaker-nrftr\" (UID: \"d54436ca-ad6f-41c2-ae88-703f150229fc\") " pod="metallb-system/speaker-nrftr" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.702472 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-dzzhq" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.840490 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm"] Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.913235 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-dzzhq"] Mar 20 07:06:57 crc kubenswrapper[5136]: W0320 07:06:57.922660 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c981a48_1ae6_4c06_90ed_4333de6a14d2.slice/crio-2c416ff9cd1d0533022eaeb0f20230c869f23068c427cf745763c3ff4bf61720 WatchSource:0}: Error finding container 2c416ff9cd1d0533022eaeb0f20230c869f23068c427cf745763c3ff4bf61720: Status 404 returned error can't find the container with id 2c416ff9cd1d0533022eaeb0f20230c869f23068c427cf745763c3ff4bf61720 Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.924014 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:58 crc kubenswrapper[5136]: I0320 07:06:58.106272 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d54436ca-ad6f-41c2-ae88-703f150229fc-memberlist\") pod \"speaker-nrftr\" (UID: \"d54436ca-ad6f-41c2-ae88-703f150229fc\") " pod="metallb-system/speaker-nrftr" Mar 20 07:06:58 crc kubenswrapper[5136]: I0320 07:06:58.113244 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d54436ca-ad6f-41c2-ae88-703f150229fc-memberlist\") pod \"speaker-nrftr\" (UID: \"d54436ca-ad6f-41c2-ae88-703f150229fc\") " pod="metallb-system/speaker-nrftr" Mar 20 07:06:58 crc kubenswrapper[5136]: I0320 07:06:58.294803 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-nrftr" Mar 20 07:06:58 crc kubenswrapper[5136]: W0320 07:06:58.328142 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd54436ca_ad6f_41c2_ae88_703f150229fc.slice/crio-c8f50e75c4d1e4cedf4177fa913c54cc3bcc83d520993de37980264e1843e470 WatchSource:0}: Error finding container c8f50e75c4d1e4cedf4177fa913c54cc3bcc83d520993de37980264e1843e470: Status 404 returned error can't find the container with id c8f50e75c4d1e4cedf4177fa913c54cc3bcc83d520993de37980264e1843e470 Mar 20 07:06:58 crc kubenswrapper[5136]: I0320 07:06:58.794901 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bjq5z" event={"ID":"11c03832-f8fc-4790-98f6-43290c528ce9","Type":"ContainerStarted","Data":"d1bad7e1dff2a157c7e1f83d5dd7662fb06eda784212bc4e41086bdc8b3a561a"} Mar 20 07:06:58 crc kubenswrapper[5136]: I0320 07:06:58.797502 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nrftr" event={"ID":"d54436ca-ad6f-41c2-ae88-703f150229fc","Type":"ContainerStarted","Data":"f8f8da005789d4bddcdf87716453b4f98ecb7198b70703bda4238755c115a4d4"} Mar 20 07:06:58 crc kubenswrapper[5136]: I0320 07:06:58.797616 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nrftr" event={"ID":"d54436ca-ad6f-41c2-ae88-703f150229fc","Type":"ContainerStarted","Data":"c8f50e75c4d1e4cedf4177fa913c54cc3bcc83d520993de37980264e1843e470"} Mar 20 07:06:58 crc kubenswrapper[5136]: I0320 07:06:58.801884 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-dzzhq" event={"ID":"4c981a48-1ae6-4c06-90ed-4333de6a14d2","Type":"ContainerStarted","Data":"fd8677088f487f52a728fb17dafdb51424c7a7676ecce30b8dda96546b601a9e"} Mar 20 07:06:58 crc kubenswrapper[5136]: I0320 07:06:58.801999 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-dzzhq" 
event={"ID":"4c981a48-1ae6-4c06-90ed-4333de6a14d2","Type":"ContainerStarted","Data":"4f9ee5acf0afc61181e89beadce206d7aeed39bed3421a6eda6dab2fd267dcfd"} Mar 20 07:06:58 crc kubenswrapper[5136]: I0320 07:06:58.802091 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-dzzhq" Mar 20 07:06:58 crc kubenswrapper[5136]: I0320 07:06:58.802149 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-dzzhq" event={"ID":"4c981a48-1ae6-4c06-90ed-4333de6a14d2","Type":"ContainerStarted","Data":"2c416ff9cd1d0533022eaeb0f20230c869f23068c427cf745763c3ff4bf61720"} Mar 20 07:06:58 crc kubenswrapper[5136]: I0320 07:06:58.803364 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm" event={"ID":"037785f1-4827-4473-8997-20cdc8fec776","Type":"ContainerStarted","Data":"7f23eb36bd47279e94a79c45a67e3d78e97ba659682ca797a1ccbda5dc37c90c"} Mar 20 07:06:58 crc kubenswrapper[5136]: I0320 07:06:58.822124 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-dzzhq" podStartSLOduration=1.822104771 podStartE2EDuration="1.822104771s" podCreationTimestamp="2026-03-20 07:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:06:58.819051358 +0000 UTC m=+1051.078362519" watchObservedRunningTime="2026-03-20 07:06:58.822104771 +0000 UTC m=+1051.081415922" Mar 20 07:06:59 crc kubenswrapper[5136]: I0320 07:06:59.815307 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nrftr" event={"ID":"d54436ca-ad6f-41c2-ae88-703f150229fc","Type":"ContainerStarted","Data":"df5da50f2e79e1e5b420a5036d3e271000c056e41b9cbbd03c2046fd8d04a825"} Mar 20 07:06:59 crc kubenswrapper[5136]: I0320 07:06:59.834514 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/speaker-nrftr" podStartSLOduration=2.834497656 podStartE2EDuration="2.834497656s" podCreationTimestamp="2026-03-20 07:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:06:59.832450234 +0000 UTC m=+1052.091761395" watchObservedRunningTime="2026-03-20 07:06:59.834497656 +0000 UTC m=+1052.093808807" Mar 20 07:07:00 crc kubenswrapper[5136]: I0320 07:07:00.829384 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-nrftr" Mar 20 07:07:04 crc kubenswrapper[5136]: I0320 07:07:04.864674 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm" event={"ID":"037785f1-4827-4473-8997-20cdc8fec776","Type":"ContainerStarted","Data":"a5627982b25a7799bbad056028bfa166a9488e5ecdb741bbeba842c695a2a028"} Mar 20 07:07:04 crc kubenswrapper[5136]: I0320 07:07:04.865303 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm" Mar 20 07:07:04 crc kubenswrapper[5136]: I0320 07:07:04.866455 5136 generic.go:334] "Generic (PLEG): container finished" podID="11c03832-f8fc-4790-98f6-43290c528ce9" containerID="2e5a2f302966bf2ecc92b2e2219e62c994619babb2dfc81c294b39e99d06df05" exitCode=0 Mar 20 07:07:04 crc kubenswrapper[5136]: I0320 07:07:04.866495 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bjq5z" event={"ID":"11c03832-f8fc-4790-98f6-43290c528ce9","Type":"ContainerDied","Data":"2e5a2f302966bf2ecc92b2e2219e62c994619babb2dfc81c294b39e99d06df05"} Mar 20 07:07:04 crc kubenswrapper[5136]: I0320 07:07:04.885539 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm" podStartSLOduration=1.292712709 podStartE2EDuration="7.885523681s" podCreationTimestamp="2026-03-20 07:06:57 +0000 UTC" 
firstStartedPulling="2026-03-20 07:06:57.849283049 +0000 UTC m=+1050.108594200" lastFinishedPulling="2026-03-20 07:07:04.442094011 +0000 UTC m=+1056.701405172" observedRunningTime="2026-03-20 07:07:04.883766246 +0000 UTC m=+1057.143077397" watchObservedRunningTime="2026-03-20 07:07:04.885523681 +0000 UTC m=+1057.144834832" Mar 20 07:07:05 crc kubenswrapper[5136]: I0320 07:07:05.876105 5136 generic.go:334] "Generic (PLEG): container finished" podID="11c03832-f8fc-4790-98f6-43290c528ce9" containerID="c23405a6620278bd98783ee0b6d08e662006e28c393ebacf86397ec5a67ff1b4" exitCode=0 Mar 20 07:07:05 crc kubenswrapper[5136]: I0320 07:07:05.876196 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bjq5z" event={"ID":"11c03832-f8fc-4790-98f6-43290c528ce9","Type":"ContainerDied","Data":"c23405a6620278bd98783ee0b6d08e662006e28c393ebacf86397ec5a67ff1b4"} Mar 20 07:07:06 crc kubenswrapper[5136]: I0320 07:07:06.883597 5136 generic.go:334] "Generic (PLEG): container finished" podID="11c03832-f8fc-4790-98f6-43290c528ce9" containerID="edab9a7cc4f60cec2c70af4613523b56e8a422eee050fd0ffe3ec77d73cc80b3" exitCode=0 Mar 20 07:07:06 crc kubenswrapper[5136]: I0320 07:07:06.883688 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bjq5z" event={"ID":"11c03832-f8fc-4790-98f6-43290c528ce9","Type":"ContainerDied","Data":"edab9a7cc4f60cec2c70af4613523b56e8a422eee050fd0ffe3ec77d73cc80b3"} Mar 20 07:07:07 crc kubenswrapper[5136]: I0320 07:07:07.893450 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bjq5z" event={"ID":"11c03832-f8fc-4790-98f6-43290c528ce9","Type":"ContainerStarted","Data":"bb68198ed84c2ab2ca3d4249c1e11ff36069ff3fff87bc27cb727a5a72b6cdac"} Mar 20 07:07:07 crc kubenswrapper[5136]: I0320 07:07:07.893750 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bjq5z" 
event={"ID":"11c03832-f8fc-4790-98f6-43290c528ce9","Type":"ContainerStarted","Data":"88100f63a82b09bb00c22aaf406a57cfa5b991c4dc461d24046d6363a0d32834"} Mar 20 07:07:07 crc kubenswrapper[5136]: I0320 07:07:07.893765 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:07:07 crc kubenswrapper[5136]: I0320 07:07:07.893775 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bjq5z" event={"ID":"11c03832-f8fc-4790-98f6-43290c528ce9","Type":"ContainerStarted","Data":"f668e9edc46de0b468685a628a4ead5b0066b252e9cd5ffb367decb7478dc302"} Mar 20 07:07:07 crc kubenswrapper[5136]: I0320 07:07:07.893785 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bjq5z" event={"ID":"11c03832-f8fc-4790-98f6-43290c528ce9","Type":"ContainerStarted","Data":"5a91ca9db460c4bdfa4411064eb8b22c0a8f917f0af7ea9a88dde128a9114094"} Mar 20 07:07:07 crc kubenswrapper[5136]: I0320 07:07:07.893792 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bjq5z" event={"ID":"11c03832-f8fc-4790-98f6-43290c528ce9","Type":"ContainerStarted","Data":"772f401239653dc5968b39b347b76c2db1a0389dad4bbfe1ae37787eda7efbce"} Mar 20 07:07:07 crc kubenswrapper[5136]: I0320 07:07:07.893801 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bjq5z" event={"ID":"11c03832-f8fc-4790-98f6-43290c528ce9","Type":"ContainerStarted","Data":"96a2ca0fb38c42657d88bc0b7fc1431bd88f07f62a1ee6fb97388eeef243651c"} Mar 20 07:07:07 crc kubenswrapper[5136]: I0320 07:07:07.919347 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-bjq5z" podStartSLOduration=4.534445887 podStartE2EDuration="10.919327939s" podCreationTimestamp="2026-03-20 07:06:57 +0000 UTC" firstStartedPulling="2026-03-20 07:06:58.031481664 +0000 UTC m=+1050.290792815" lastFinishedPulling="2026-03-20 07:07:04.416363676 +0000 UTC m=+1056.675674867" 
observedRunningTime="2026-03-20 07:07:07.913754277 +0000 UTC m=+1060.173065438" watchObservedRunningTime="2026-03-20 07:07:07.919327939 +0000 UTC m=+1060.178639110" Mar 20 07:07:07 crc kubenswrapper[5136]: I0320 07:07:07.925229 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:07:07 crc kubenswrapper[5136]: I0320 07:07:07.965470 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:07:08 crc kubenswrapper[5136]: I0320 07:07:08.300767 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-nrftr" Mar 20 07:07:09 crc kubenswrapper[5136]: I0320 07:07:09.880128 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl"] Mar 20 07:07:09 crc kubenswrapper[5136]: I0320 07:07:09.881869 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" Mar 20 07:07:09 crc kubenswrapper[5136]: I0320 07:07:09.884556 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 20 07:07:09 crc kubenswrapper[5136]: I0320 07:07:09.894440 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl"] Mar 20 07:07:09 crc kubenswrapper[5136]: I0320 07:07:09.976369 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lcsd\" (UniqueName: \"kubernetes.io/projected/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-kube-api-access-2lcsd\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl\" (UID: \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" Mar 20 07:07:09 crc kubenswrapper[5136]: I0320 07:07:09.976455 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl\" (UID: \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" Mar 20 07:07:09 crc kubenswrapper[5136]: I0320 07:07:09.976540 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl\" (UID: \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" Mar 20 07:07:10 crc kubenswrapper[5136]: 
I0320 07:07:10.077623 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl\" (UID: \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" Mar 20 07:07:10 crc kubenswrapper[5136]: I0320 07:07:10.077681 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lcsd\" (UniqueName: \"kubernetes.io/projected/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-kube-api-access-2lcsd\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl\" (UID: \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" Mar 20 07:07:10 crc kubenswrapper[5136]: I0320 07:07:10.077751 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl\" (UID: \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" Mar 20 07:07:10 crc kubenswrapper[5136]: I0320 07:07:10.078252 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl\" (UID: \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" Mar 20 07:07:10 crc kubenswrapper[5136]: I0320 07:07:10.078579 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl\" (UID: \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" Mar 20 07:07:10 crc kubenswrapper[5136]: I0320 07:07:10.096860 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lcsd\" (UniqueName: \"kubernetes.io/projected/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-kube-api-access-2lcsd\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl\" (UID: \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" Mar 20 07:07:10 crc kubenswrapper[5136]: I0320 07:07:10.201045 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" Mar 20 07:07:10 crc kubenswrapper[5136]: I0320 07:07:10.630531 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl"] Mar 20 07:07:10 crc kubenswrapper[5136]: I0320 07:07:10.916885 5136 generic.go:334] "Generic (PLEG): container finished" podID="bd4f9716-cbae-44b8-ba7a-44aaa92dae66" containerID="9d4915209e9cd1aee15f5e7887b40473508d7888446113ef0a2ff2d007c2cb14" exitCode=0 Mar 20 07:07:10 crc kubenswrapper[5136]: I0320 07:07:10.916981 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" event={"ID":"bd4f9716-cbae-44b8-ba7a-44aaa92dae66","Type":"ContainerDied","Data":"9d4915209e9cd1aee15f5e7887b40473508d7888446113ef0a2ff2d007c2cb14"} Mar 20 07:07:10 crc kubenswrapper[5136]: I0320 07:07:10.917255 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" event={"ID":"bd4f9716-cbae-44b8-ba7a-44aaa92dae66","Type":"ContainerStarted","Data":"f1ded7a06cccfe2e68a9941c8a9b7358062eb8ecfb184de223c411b491e42ca8"} Mar 20 07:07:15 crc kubenswrapper[5136]: I0320 07:07:15.951511 5136 generic.go:334] "Generic (PLEG): container finished" podID="bd4f9716-cbae-44b8-ba7a-44aaa92dae66" containerID="2d839fb43f25dbf8d31e7a6535aafda786a1cab3db5f18c1caf2b0163209f4f0" exitCode=0 Mar 20 07:07:15 crc kubenswrapper[5136]: I0320 07:07:15.951620 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" event={"ID":"bd4f9716-cbae-44b8-ba7a-44aaa92dae66","Type":"ContainerDied","Data":"2d839fb43f25dbf8d31e7a6535aafda786a1cab3db5f18c1caf2b0163209f4f0"} Mar 20 07:07:16 crc kubenswrapper[5136]: I0320 07:07:16.960783 5136 generic.go:334] "Generic (PLEG): container finished" podID="bd4f9716-cbae-44b8-ba7a-44aaa92dae66" containerID="35ff8e67e1e936a1ca29c0207ad2c78e44c549b9363c2a0fedfee843535c8849" exitCode=0 Mar 20 07:07:16 crc kubenswrapper[5136]: I0320 07:07:16.960886 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" event={"ID":"bd4f9716-cbae-44b8-ba7a-44aaa92dae66","Type":"ContainerDied","Data":"35ff8e67e1e936a1ca29c0207ad2c78e44c549b9363c2a0fedfee843535c8849"} Mar 20 07:07:17 crc kubenswrapper[5136]: I0320 07:07:17.636749 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm" Mar 20 07:07:17 crc kubenswrapper[5136]: I0320 07:07:17.706427 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-dzzhq" Mar 20 07:07:17 crc kubenswrapper[5136]: I0320 07:07:17.932770 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:07:18 crc kubenswrapper[5136]: I0320 07:07:18.330047 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" Mar 20 07:07:18 crc kubenswrapper[5136]: I0320 07:07:18.492965 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-util\") pod \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\" (UID: \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\") " Mar 20 07:07:18 crc kubenswrapper[5136]: I0320 07:07:18.493063 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-bundle\") pod \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\" (UID: \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\") " Mar 20 07:07:18 crc kubenswrapper[5136]: I0320 07:07:18.493083 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lcsd\" (UniqueName: \"kubernetes.io/projected/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-kube-api-access-2lcsd\") pod \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\" (UID: \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\") " Mar 20 07:07:18 crc kubenswrapper[5136]: I0320 07:07:18.494508 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-bundle" (OuterVolumeSpecName: "bundle") pod "bd4f9716-cbae-44b8-ba7a-44aaa92dae66" (UID: "bd4f9716-cbae-44b8-ba7a-44aaa92dae66"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:07:18 crc kubenswrapper[5136]: I0320 07:07:18.498691 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-kube-api-access-2lcsd" (OuterVolumeSpecName: "kube-api-access-2lcsd") pod "bd4f9716-cbae-44b8-ba7a-44aaa92dae66" (UID: "bd4f9716-cbae-44b8-ba7a-44aaa92dae66"). InnerVolumeSpecName "kube-api-access-2lcsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:07:18 crc kubenswrapper[5136]: I0320 07:07:18.507575 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-util" (OuterVolumeSpecName: "util") pod "bd4f9716-cbae-44b8-ba7a-44aaa92dae66" (UID: "bd4f9716-cbae-44b8-ba7a-44aaa92dae66"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:07:18 crc kubenswrapper[5136]: I0320 07:07:18.595839 5136 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-util\") on node \"crc\" DevicePath \"\"" Mar 20 07:07:18 crc kubenswrapper[5136]: I0320 07:07:18.595885 5136 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:07:18 crc kubenswrapper[5136]: I0320 07:07:18.595903 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lcsd\" (UniqueName: \"kubernetes.io/projected/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-kube-api-access-2lcsd\") on node \"crc\" DevicePath \"\"" Mar 20 07:07:18 crc kubenswrapper[5136]: I0320 07:07:18.978910 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" 
event={"ID":"bd4f9716-cbae-44b8-ba7a-44aaa92dae66","Type":"ContainerDied","Data":"f1ded7a06cccfe2e68a9941c8a9b7358062eb8ecfb184de223c411b491e42ca8"} Mar 20 07:07:18 crc kubenswrapper[5136]: I0320 07:07:18.978944 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1ded7a06cccfe2e68a9941c8a9b7358062eb8ecfb184de223c411b491e42ca8" Mar 20 07:07:18 crc kubenswrapper[5136]: I0320 07:07:18.979003 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.258500 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb"] Mar 20 07:07:23 crc kubenswrapper[5136]: E0320 07:07:23.259247 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd4f9716-cbae-44b8-ba7a-44aaa92dae66" containerName="pull" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.259261 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd4f9716-cbae-44b8-ba7a-44aaa92dae66" containerName="pull" Mar 20 07:07:23 crc kubenswrapper[5136]: E0320 07:07:23.259275 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd4f9716-cbae-44b8-ba7a-44aaa92dae66" containerName="extract" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.259281 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd4f9716-cbae-44b8-ba7a-44aaa92dae66" containerName="extract" Mar 20 07:07:23 crc kubenswrapper[5136]: E0320 07:07:23.259293 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd4f9716-cbae-44b8-ba7a-44aaa92dae66" containerName="util" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.259301 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd4f9716-cbae-44b8-ba7a-44aaa92dae66" containerName="util" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.259417 
5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd4f9716-cbae-44b8-ba7a-44aaa92dae66" containerName="extract" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.259883 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.264289 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.265319 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.266097 5136 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-xrv2l" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.278558 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb"] Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.455739 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzbgw\" (UniqueName: \"kubernetes.io/projected/ab58c510-da95-4ce8-855c-f58a8f46c61d-kube-api-access-zzbgw\") pod \"cert-manager-operator-controller-manager-66c8bdd694-tb4jb\" (UID: \"ab58c510-da95-4ce8-855c-f58a8f46c61d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.455798 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ab58c510-da95-4ce8-855c-f58a8f46c61d-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-tb4jb\" (UID: \"ab58c510-da95-4ce8-855c-f58a8f46c61d\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.557314 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzbgw\" (UniqueName: \"kubernetes.io/projected/ab58c510-da95-4ce8-855c-f58a8f46c61d-kube-api-access-zzbgw\") pod \"cert-manager-operator-controller-manager-66c8bdd694-tb4jb\" (UID: \"ab58c510-da95-4ce8-855c-f58a8f46c61d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.557374 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ab58c510-da95-4ce8-855c-f58a8f46c61d-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-tb4jb\" (UID: \"ab58c510-da95-4ce8-855c-f58a8f46c61d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.557994 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ab58c510-da95-4ce8-855c-f58a8f46c61d-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-tb4jb\" (UID: \"ab58c510-da95-4ce8-855c-f58a8f46c61d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.595685 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzbgw\" (UniqueName: \"kubernetes.io/projected/ab58c510-da95-4ce8-855c-f58a8f46c61d-kube-api-access-zzbgw\") pod \"cert-manager-operator-controller-manager-66c8bdd694-tb4jb\" (UID: \"ab58c510-da95-4ce8-855c-f58a8f46c61d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.881501 5136 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb" Mar 20 07:07:24 crc kubenswrapper[5136]: I0320 07:07:24.394142 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb"] Mar 20 07:07:24 crc kubenswrapper[5136]: W0320 07:07:24.399896 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab58c510_da95_4ce8_855c_f58a8f46c61d.slice/crio-23fc4890c9842e18773a8c1937e803432ff7ccceba82dacbd577c0107e92a75b WatchSource:0}: Error finding container 23fc4890c9842e18773a8c1937e803432ff7ccceba82dacbd577c0107e92a75b: Status 404 returned error can't find the container with id 23fc4890c9842e18773a8c1937e803432ff7ccceba82dacbd577c0107e92a75b Mar 20 07:07:25 crc kubenswrapper[5136]: I0320 07:07:25.026958 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb" event={"ID":"ab58c510-da95-4ce8-855c-f58a8f46c61d","Type":"ContainerStarted","Data":"23fc4890c9842e18773a8c1937e803432ff7ccceba82dacbd577c0107e92a75b"} Mar 20 07:07:28 crc kubenswrapper[5136]: I0320 07:07:28.044291 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb" event={"ID":"ab58c510-da95-4ce8-855c-f58a8f46c61d","Type":"ContainerStarted","Data":"9cc29c3201590a834800efd0e8e6097e0f38b6c636c8bcc98e7baa68724f2b66"} Mar 20 07:07:28 crc kubenswrapper[5136]: I0320 07:07:28.062093 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb" podStartSLOduration=2.255530209 podStartE2EDuration="5.062076941s" podCreationTimestamp="2026-03-20 07:07:23 +0000 UTC" firstStartedPulling="2026-03-20 07:07:24.403530316 +0000 UTC m=+1076.662841467" 
lastFinishedPulling="2026-03-20 07:07:27.210077048 +0000 UTC m=+1079.469388199" observedRunningTime="2026-03-20 07:07:28.05977474 +0000 UTC m=+1080.319085891" watchObservedRunningTime="2026-03-20 07:07:28.062076941 +0000 UTC m=+1080.321388092" Mar 20 07:07:31 crc kubenswrapper[5136]: I0320 07:07:31.308933 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-4l568"] Mar 20 07:07:31 crc kubenswrapper[5136]: I0320 07:07:31.310086 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-4l568" Mar 20 07:07:31 crc kubenswrapper[5136]: I0320 07:07:31.312097 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 20 07:07:31 crc kubenswrapper[5136]: I0320 07:07:31.314067 5136 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-qnqnl" Mar 20 07:07:31 crc kubenswrapper[5136]: I0320 07:07:31.315992 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 20 07:07:31 crc kubenswrapper[5136]: I0320 07:07:31.319587 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-4l568"] Mar 20 07:07:31 crc kubenswrapper[5136]: I0320 07:07:31.502785 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svhpg\" (UniqueName: \"kubernetes.io/projected/6168deec-ad68-4f6d-9736-422a6c7ade08-kube-api-access-svhpg\") pod \"cert-manager-webhook-6888856db4-4l568\" (UID: \"6168deec-ad68-4f6d-9736-422a6c7ade08\") " pod="cert-manager/cert-manager-webhook-6888856db4-4l568" Mar 20 07:07:31 crc kubenswrapper[5136]: I0320 07:07:31.502838 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/6168deec-ad68-4f6d-9736-422a6c7ade08-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-4l568\" (UID: \"6168deec-ad68-4f6d-9736-422a6c7ade08\") " pod="cert-manager/cert-manager-webhook-6888856db4-4l568" Mar 20 07:07:31 crc kubenswrapper[5136]: I0320 07:07:31.604642 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svhpg\" (UniqueName: \"kubernetes.io/projected/6168deec-ad68-4f6d-9736-422a6c7ade08-kube-api-access-svhpg\") pod \"cert-manager-webhook-6888856db4-4l568\" (UID: \"6168deec-ad68-4f6d-9736-422a6c7ade08\") " pod="cert-manager/cert-manager-webhook-6888856db4-4l568" Mar 20 07:07:31 crc kubenswrapper[5136]: I0320 07:07:31.604700 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6168deec-ad68-4f6d-9736-422a6c7ade08-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-4l568\" (UID: \"6168deec-ad68-4f6d-9736-422a6c7ade08\") " pod="cert-manager/cert-manager-webhook-6888856db4-4l568" Mar 20 07:07:31 crc kubenswrapper[5136]: I0320 07:07:31.627194 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6168deec-ad68-4f6d-9736-422a6c7ade08-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-4l568\" (UID: \"6168deec-ad68-4f6d-9736-422a6c7ade08\") " pod="cert-manager/cert-manager-webhook-6888856db4-4l568" Mar 20 07:07:31 crc kubenswrapper[5136]: I0320 07:07:31.640734 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svhpg\" (UniqueName: \"kubernetes.io/projected/6168deec-ad68-4f6d-9736-422a6c7ade08-kube-api-access-svhpg\") pod \"cert-manager-webhook-6888856db4-4l568\" (UID: \"6168deec-ad68-4f6d-9736-422a6c7ade08\") " pod="cert-manager/cert-manager-webhook-6888856db4-4l568" Mar 20 07:07:31 crc kubenswrapper[5136]: I0320 07:07:31.926663 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-4l568" Mar 20 07:07:32 crc kubenswrapper[5136]: I0320 07:07:32.260168 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-4757p"] Mar 20 07:07:32 crc kubenswrapper[5136]: I0320 07:07:32.261291 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-4757p" Mar 20 07:07:32 crc kubenswrapper[5136]: I0320 07:07:32.265610 5136 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-tnttm" Mar 20 07:07:32 crc kubenswrapper[5136]: I0320 07:07:32.270561 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-4757p"] Mar 20 07:07:32 crc kubenswrapper[5136]: I0320 07:07:32.323647 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1c160ca-0866-46ab-859c-8557dc65e962-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-4757p\" (UID: \"f1c160ca-0866-46ab-859c-8557dc65e962\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4757p" Mar 20 07:07:32 crc kubenswrapper[5136]: I0320 07:07:32.323693 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm9fc\" (UniqueName: \"kubernetes.io/projected/f1c160ca-0866-46ab-859c-8557dc65e962-kube-api-access-jm9fc\") pod \"cert-manager-cainjector-5545bd876-4757p\" (UID: \"f1c160ca-0866-46ab-859c-8557dc65e962\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4757p" Mar 20 07:07:32 crc kubenswrapper[5136]: I0320 07:07:32.365635 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-4l568"] Mar 20 07:07:32 crc kubenswrapper[5136]: I0320 07:07:32.424634 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1c160ca-0866-46ab-859c-8557dc65e962-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-4757p\" (UID: \"f1c160ca-0866-46ab-859c-8557dc65e962\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4757p" Mar 20 07:07:32 crc kubenswrapper[5136]: I0320 07:07:32.424694 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm9fc\" (UniqueName: \"kubernetes.io/projected/f1c160ca-0866-46ab-859c-8557dc65e962-kube-api-access-jm9fc\") pod \"cert-manager-cainjector-5545bd876-4757p\" (UID: \"f1c160ca-0866-46ab-859c-8557dc65e962\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4757p" Mar 20 07:07:32 crc kubenswrapper[5136]: I0320 07:07:32.452144 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm9fc\" (UniqueName: \"kubernetes.io/projected/f1c160ca-0866-46ab-859c-8557dc65e962-kube-api-access-jm9fc\") pod \"cert-manager-cainjector-5545bd876-4757p\" (UID: \"f1c160ca-0866-46ab-859c-8557dc65e962\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4757p" Mar 20 07:07:32 crc kubenswrapper[5136]: I0320 07:07:32.452879 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1c160ca-0866-46ab-859c-8557dc65e962-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-4757p\" (UID: \"f1c160ca-0866-46ab-859c-8557dc65e962\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4757p" Mar 20 07:07:32 crc kubenswrapper[5136]: I0320 07:07:32.619387 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-4757p" Mar 20 07:07:33 crc kubenswrapper[5136]: I0320 07:07:33.026143 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-4757p"] Mar 20 07:07:33 crc kubenswrapper[5136]: I0320 07:07:33.074210 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-4757p" event={"ID":"f1c160ca-0866-46ab-859c-8557dc65e962","Type":"ContainerStarted","Data":"78cb965da31a3740eabcdcfe3c1cc27b79d2696623d6f7cdda2bdadbaad9c1d3"} Mar 20 07:07:33 crc kubenswrapper[5136]: I0320 07:07:33.075354 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-4l568" event={"ID":"6168deec-ad68-4f6d-9736-422a6c7ade08","Type":"ContainerStarted","Data":"3a3543afc12e100f2f81c758885cdf492960ee9ecd5540d0f6fd5b73ce99e7b9"} Mar 20 07:07:37 crc kubenswrapper[5136]: I0320 07:07:37.103259 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-4757p" event={"ID":"f1c160ca-0866-46ab-859c-8557dc65e962","Type":"ContainerStarted","Data":"0f96c13e08bb5e1ff2b805d043523072db3db1bc56d61c29ddb216130b808e62"} Mar 20 07:07:37 crc kubenswrapper[5136]: I0320 07:07:37.104587 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-4l568" event={"ID":"6168deec-ad68-4f6d-9736-422a6c7ade08","Type":"ContainerStarted","Data":"4ae7f07e7e6afff2c58b21e404effc6408c5c208755b778f164e2863559bb61c"} Mar 20 07:07:37 crc kubenswrapper[5136]: I0320 07:07:37.104761 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-4l568" Mar 20 07:07:37 crc kubenswrapper[5136]: I0320 07:07:37.121387 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-4757p" podStartSLOduration=1.823097585 
podStartE2EDuration="5.121370108s" podCreationTimestamp="2026-03-20 07:07:32 +0000 UTC" firstStartedPulling="2026-03-20 07:07:33.034718456 +0000 UTC m=+1085.294029607" lastFinishedPulling="2026-03-20 07:07:36.332990979 +0000 UTC m=+1088.592302130" observedRunningTime="2026-03-20 07:07:37.120862262 +0000 UTC m=+1089.380173433" watchObservedRunningTime="2026-03-20 07:07:37.121370108 +0000 UTC m=+1089.380681259" Mar 20 07:07:37 crc kubenswrapper[5136]: I0320 07:07:37.144179 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-4l568" podStartSLOduration=2.182735645 podStartE2EDuration="6.144154691s" podCreationTimestamp="2026-03-20 07:07:31 +0000 UTC" firstStartedPulling="2026-03-20 07:07:32.370089337 +0000 UTC m=+1084.629400488" lastFinishedPulling="2026-03-20 07:07:36.331508373 +0000 UTC m=+1088.590819534" observedRunningTime="2026-03-20 07:07:37.139326982 +0000 UTC m=+1089.398638133" watchObservedRunningTime="2026-03-20 07:07:37.144154691 +0000 UTC m=+1089.403465842" Mar 20 07:07:41 crc kubenswrapper[5136]: I0320 07:07:41.928777 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-4l568" Mar 20 07:07:43 crc kubenswrapper[5136]: I0320 07:07:43.716417 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-d4w65"] Mar 20 07:07:43 crc kubenswrapper[5136]: I0320 07:07:43.717545 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-d4w65" Mar 20 07:07:43 crc kubenswrapper[5136]: I0320 07:07:43.722405 5136 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-n7tnn" Mar 20 07:07:43 crc kubenswrapper[5136]: I0320 07:07:43.724776 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-d4w65"] Mar 20 07:07:43 crc kubenswrapper[5136]: I0320 07:07:43.796282 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v99bp\" (UniqueName: \"kubernetes.io/projected/b06e6b2d-fcba-4ba1-9ba1-82585032b382-kube-api-access-v99bp\") pod \"cert-manager-545d4d4674-d4w65\" (UID: \"b06e6b2d-fcba-4ba1-9ba1-82585032b382\") " pod="cert-manager/cert-manager-545d4d4674-d4w65" Mar 20 07:07:43 crc kubenswrapper[5136]: I0320 07:07:43.796401 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b06e6b2d-fcba-4ba1-9ba1-82585032b382-bound-sa-token\") pod \"cert-manager-545d4d4674-d4w65\" (UID: \"b06e6b2d-fcba-4ba1-9ba1-82585032b382\") " pod="cert-manager/cert-manager-545d4d4674-d4w65" Mar 20 07:07:43 crc kubenswrapper[5136]: I0320 07:07:43.897152 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b06e6b2d-fcba-4ba1-9ba1-82585032b382-bound-sa-token\") pod \"cert-manager-545d4d4674-d4w65\" (UID: \"b06e6b2d-fcba-4ba1-9ba1-82585032b382\") " pod="cert-manager/cert-manager-545d4d4674-d4w65" Mar 20 07:07:43 crc kubenswrapper[5136]: I0320 07:07:43.897220 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v99bp\" (UniqueName: \"kubernetes.io/projected/b06e6b2d-fcba-4ba1-9ba1-82585032b382-kube-api-access-v99bp\") pod \"cert-manager-545d4d4674-d4w65\" (UID: 
\"b06e6b2d-fcba-4ba1-9ba1-82585032b382\") " pod="cert-manager/cert-manager-545d4d4674-d4w65" Mar 20 07:07:43 crc kubenswrapper[5136]: I0320 07:07:43.916293 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b06e6b2d-fcba-4ba1-9ba1-82585032b382-bound-sa-token\") pod \"cert-manager-545d4d4674-d4w65\" (UID: \"b06e6b2d-fcba-4ba1-9ba1-82585032b382\") " pod="cert-manager/cert-manager-545d4d4674-d4w65" Mar 20 07:07:43 crc kubenswrapper[5136]: I0320 07:07:43.916777 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v99bp\" (UniqueName: \"kubernetes.io/projected/b06e6b2d-fcba-4ba1-9ba1-82585032b382-kube-api-access-v99bp\") pod \"cert-manager-545d4d4674-d4w65\" (UID: \"b06e6b2d-fcba-4ba1-9ba1-82585032b382\") " pod="cert-manager/cert-manager-545d4d4674-d4w65" Mar 20 07:07:44 crc kubenswrapper[5136]: I0320 07:07:44.053959 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-d4w65" Mar 20 07:07:44 crc kubenswrapper[5136]: I0320 07:07:44.519567 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-d4w65"] Mar 20 07:07:44 crc kubenswrapper[5136]: W0320 07:07:44.523753 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb06e6b2d_fcba_4ba1_9ba1_82585032b382.slice/crio-6eb4011f78ae045d59af74438cab59981227263fa2accee84d4a320d6f6de289 WatchSource:0}: Error finding container 6eb4011f78ae045d59af74438cab59981227263fa2accee84d4a320d6f6de289: Status 404 returned error can't find the container with id 6eb4011f78ae045d59af74438cab59981227263fa2accee84d4a320d6f6de289 Mar 20 07:07:45 crc kubenswrapper[5136]: I0320 07:07:45.154584 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-d4w65" 
event={"ID":"b06e6b2d-fcba-4ba1-9ba1-82585032b382","Type":"ContainerStarted","Data":"de4f720443c1f8e7a14ba9cdddc4b7f519cf83df3494b597cdcd2945e3271fea"} Mar 20 07:07:45 crc kubenswrapper[5136]: I0320 07:07:45.154656 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-d4w65" event={"ID":"b06e6b2d-fcba-4ba1-9ba1-82585032b382","Type":"ContainerStarted","Data":"6eb4011f78ae045d59af74438cab59981227263fa2accee84d4a320d6f6de289"} Mar 20 07:07:45 crc kubenswrapper[5136]: I0320 07:07:45.173398 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-d4w65" podStartSLOduration=2.173371997 podStartE2EDuration="2.173371997s" podCreationTimestamp="2026-03-20 07:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:07:45.171791548 +0000 UTC m=+1097.431102759" watchObservedRunningTime="2026-03-20 07:07:45.173371997 +0000 UTC m=+1097.432683158" Mar 20 07:07:45 crc kubenswrapper[5136]: I0320 07:07:45.822065 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:07:45 crc kubenswrapper[5136]: I0320 07:07:45.822417 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:07:55 crc kubenswrapper[5136]: I0320 07:07:55.770117 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-hlbhx"] Mar 20 07:07:55 crc 
kubenswrapper[5136]: I0320 07:07:55.772443 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hlbhx" Mar 20 07:07:55 crc kubenswrapper[5136]: I0320 07:07:55.778327 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 20 07:07:55 crc kubenswrapper[5136]: I0320 07:07:55.778849 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-xnckp" Mar 20 07:07:55 crc kubenswrapper[5136]: I0320 07:07:55.779584 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 20 07:07:55 crc kubenswrapper[5136]: I0320 07:07:55.797030 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hlbhx"] Mar 20 07:07:55 crc kubenswrapper[5136]: I0320 07:07:55.965989 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9587t\" (UniqueName: \"kubernetes.io/projected/5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746-kube-api-access-9587t\") pod \"openstack-operator-index-hlbhx\" (UID: \"5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746\") " pod="openstack-operators/openstack-operator-index-hlbhx" Mar 20 07:07:56 crc kubenswrapper[5136]: I0320 07:07:56.066733 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9587t\" (UniqueName: \"kubernetes.io/projected/5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746-kube-api-access-9587t\") pod \"openstack-operator-index-hlbhx\" (UID: \"5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746\") " pod="openstack-operators/openstack-operator-index-hlbhx" Mar 20 07:07:56 crc kubenswrapper[5136]: I0320 07:07:56.088490 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9587t\" (UniqueName: 
\"kubernetes.io/projected/5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746-kube-api-access-9587t\") pod \"openstack-operator-index-hlbhx\" (UID: \"5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746\") " pod="openstack-operators/openstack-operator-index-hlbhx" Mar 20 07:07:56 crc kubenswrapper[5136]: I0320 07:07:56.122107 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hlbhx" Mar 20 07:07:56 crc kubenswrapper[5136]: I0320 07:07:56.557387 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hlbhx"] Mar 20 07:07:56 crc kubenswrapper[5136]: W0320 07:07:56.557670 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b5c9582_34cd_4c36_9ed2_7d1ed6fbc746.slice/crio-e920f18bf97664feae1196e8d874ec99bff74c30b302796caebdbd276e026818 WatchSource:0}: Error finding container e920f18bf97664feae1196e8d874ec99bff74c30b302796caebdbd276e026818: Status 404 returned error can't find the container with id e920f18bf97664feae1196e8d874ec99bff74c30b302796caebdbd276e026818 Mar 20 07:07:57 crc kubenswrapper[5136]: I0320 07:07:57.253796 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hlbhx" event={"ID":"5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746","Type":"ContainerStarted","Data":"e920f18bf97664feae1196e8d874ec99bff74c30b302796caebdbd276e026818"} Mar 20 07:07:58 crc kubenswrapper[5136]: I0320 07:07:58.265148 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hlbhx" event={"ID":"5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746","Type":"ContainerStarted","Data":"5167ea6bbf2199c0e60c90f86e04adb0a013e8245f41a8fd1450e8239f11a9b0"} Mar 20 07:07:58 crc kubenswrapper[5136]: I0320 07:07:58.287112 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-hlbhx" 
podStartSLOduration=2.464778606 podStartE2EDuration="3.287084291s" podCreationTimestamp="2026-03-20 07:07:55 +0000 UTC" firstStartedPulling="2026-03-20 07:07:56.559861989 +0000 UTC m=+1108.819173150" lastFinishedPulling="2026-03-20 07:07:57.382167684 +0000 UTC m=+1109.641478835" observedRunningTime="2026-03-20 07:07:58.285490261 +0000 UTC m=+1110.544801432" watchObservedRunningTime="2026-03-20 07:07:58.287084291 +0000 UTC m=+1110.546395482" Mar 20 07:07:59 crc kubenswrapper[5136]: I0320 07:07:59.136713 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-hlbhx"] Mar 20 07:07:59 crc kubenswrapper[5136]: I0320 07:07:59.735612 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-w8k22"] Mar 20 07:07:59 crc kubenswrapper[5136]: I0320 07:07:59.736513 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-w8k22" Mar 20 07:07:59 crc kubenswrapper[5136]: I0320 07:07:59.743709 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm7rx\" (UniqueName: \"kubernetes.io/projected/4c933e5d-73ac-4820-a31c-e1d5cc5bcae0-kube-api-access-nm7rx\") pod \"openstack-operator-index-w8k22\" (UID: \"4c933e5d-73ac-4820-a31c-e1d5cc5bcae0\") " pod="openstack-operators/openstack-operator-index-w8k22" Mar 20 07:07:59 crc kubenswrapper[5136]: I0320 07:07:59.762457 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-w8k22"] Mar 20 07:07:59 crc kubenswrapper[5136]: I0320 07:07:59.845146 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm7rx\" (UniqueName: \"kubernetes.io/projected/4c933e5d-73ac-4820-a31c-e1d5cc5bcae0-kube-api-access-nm7rx\") pod \"openstack-operator-index-w8k22\" (UID: \"4c933e5d-73ac-4820-a31c-e1d5cc5bcae0\") " 
pod="openstack-operators/openstack-operator-index-w8k22" Mar 20 07:07:59 crc kubenswrapper[5136]: I0320 07:07:59.884978 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm7rx\" (UniqueName: \"kubernetes.io/projected/4c933e5d-73ac-4820-a31c-e1d5cc5bcae0-kube-api-access-nm7rx\") pod \"openstack-operator-index-w8k22\" (UID: \"4c933e5d-73ac-4820-a31c-e1d5cc5bcae0\") " pod="openstack-operators/openstack-operator-index-w8k22" Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.061073 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-w8k22" Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.139661 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566508-v874c"] Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.140417 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566508-v874c" Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.146495 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.146490 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.147113 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.154241 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqrzb\" (UniqueName: \"kubernetes.io/projected/f9cf4346-e624-476e-b04c-43b35e0a83cd-kube-api-access-zqrzb\") pod \"auto-csr-approver-29566508-v874c\" (UID: \"f9cf4346-e624-476e-b04c-43b35e0a83cd\") " pod="openshift-infra/auto-csr-approver-29566508-v874c" Mar 20 
07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.162568 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566508-v874c"] Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.256508 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqrzb\" (UniqueName: \"kubernetes.io/projected/f9cf4346-e624-476e-b04c-43b35e0a83cd-kube-api-access-zqrzb\") pod \"auto-csr-approver-29566508-v874c\" (UID: \"f9cf4346-e624-476e-b04c-43b35e0a83cd\") " pod="openshift-infra/auto-csr-approver-29566508-v874c" Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.273565 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqrzb\" (UniqueName: \"kubernetes.io/projected/f9cf4346-e624-476e-b04c-43b35e0a83cd-kube-api-access-zqrzb\") pod \"auto-csr-approver-29566508-v874c\" (UID: \"f9cf4346-e624-476e-b04c-43b35e0a83cd\") " pod="openshift-infra/auto-csr-approver-29566508-v874c" Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.278109 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-hlbhx" podUID="5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746" containerName="registry-server" containerID="cri-o://5167ea6bbf2199c0e60c90f86e04adb0a013e8245f41a8fd1450e8239f11a9b0" gracePeriod=2 Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.463536 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566508-v874c" Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.530135 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-w8k22"] Mar 20 07:08:00 crc kubenswrapper[5136]: W0320 07:08:00.539477 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c933e5d_73ac_4820_a31c_e1d5cc5bcae0.slice/crio-76ce34c505ebf194ed7fc6eda04ab8e3aa055d84f3be666cb76a2f116c0418fd WatchSource:0}: Error finding container 76ce34c505ebf194ed7fc6eda04ab8e3aa055d84f3be666cb76a2f116c0418fd: Status 404 returned error can't find the container with id 76ce34c505ebf194ed7fc6eda04ab8e3aa055d84f3be666cb76a2f116c0418fd Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.649954 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hlbhx" Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.661484 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9587t\" (UniqueName: \"kubernetes.io/projected/5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746-kube-api-access-9587t\") pod \"5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746\" (UID: \"5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746\") " Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.676889 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746-kube-api-access-9587t" (OuterVolumeSpecName: "kube-api-access-9587t") pod "5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746" (UID: "5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746"). InnerVolumeSpecName "kube-api-access-9587t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.763153 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9587t\" (UniqueName: \"kubernetes.io/projected/5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746-kube-api-access-9587t\") on node \"crc\" DevicePath \"\"" Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.918910 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566508-v874c"] Mar 20 07:08:00 crc kubenswrapper[5136]: W0320 07:08:00.921911 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9cf4346_e624_476e_b04c_43b35e0a83cd.slice/crio-a1037a25f726e7e8612ae2b1c969164c9045463f2c539c4db873d7f4119d29ad WatchSource:0}: Error finding container a1037a25f726e7e8612ae2b1c969164c9045463f2c539c4db873d7f4119d29ad: Status 404 returned error can't find the container with id a1037a25f726e7e8612ae2b1c969164c9045463f2c539c4db873d7f4119d29ad Mar 20 07:08:01 crc kubenswrapper[5136]: I0320 07:08:01.284617 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-w8k22" event={"ID":"4c933e5d-73ac-4820-a31c-e1d5cc5bcae0","Type":"ContainerStarted","Data":"15852bfb046cc7c5e9594a43cfd4671a58abbaf60500c8a588c4151fa7ac4ca8"} Mar 20 07:08:01 crc kubenswrapper[5136]: I0320 07:08:01.285447 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-w8k22" event={"ID":"4c933e5d-73ac-4820-a31c-e1d5cc5bcae0","Type":"ContainerStarted","Data":"76ce34c505ebf194ed7fc6eda04ab8e3aa055d84f3be666cb76a2f116c0418fd"} Mar 20 07:08:01 crc kubenswrapper[5136]: I0320 07:08:01.286911 5136 generic.go:334] "Generic (PLEG): container finished" podID="5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746" containerID="5167ea6bbf2199c0e60c90f86e04adb0a013e8245f41a8fd1450e8239f11a9b0" exitCode=0 Mar 20 07:08:01 crc 
kubenswrapper[5136]: I0320 07:08:01.286974 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hlbhx" event={"ID":"5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746","Type":"ContainerDied","Data":"5167ea6bbf2199c0e60c90f86e04adb0a013e8245f41a8fd1450e8239f11a9b0"} Mar 20 07:08:01 crc kubenswrapper[5136]: I0320 07:08:01.287000 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hlbhx" event={"ID":"5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746","Type":"ContainerDied","Data":"e920f18bf97664feae1196e8d874ec99bff74c30b302796caebdbd276e026818"} Mar 20 07:08:01 crc kubenswrapper[5136]: I0320 07:08:01.287026 5136 scope.go:117] "RemoveContainer" containerID="5167ea6bbf2199c0e60c90f86e04adb0a013e8245f41a8fd1450e8239f11a9b0" Mar 20 07:08:01 crc kubenswrapper[5136]: I0320 07:08:01.287207 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hlbhx" Mar 20 07:08:01 crc kubenswrapper[5136]: I0320 07:08:01.294740 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566508-v874c" event={"ID":"f9cf4346-e624-476e-b04c-43b35e0a83cd","Type":"ContainerStarted","Data":"a1037a25f726e7e8612ae2b1c969164c9045463f2c539c4db873d7f4119d29ad"} Mar 20 07:08:01 crc kubenswrapper[5136]: I0320 07:08:01.312577 5136 scope.go:117] "RemoveContainer" containerID="5167ea6bbf2199c0e60c90f86e04adb0a013e8245f41a8fd1450e8239f11a9b0" Mar 20 07:08:01 crc kubenswrapper[5136]: I0320 07:08:01.312844 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-w8k22" podStartSLOduration=1.8229884790000002 podStartE2EDuration="2.31280247s" podCreationTimestamp="2026-03-20 07:07:59 +0000 UTC" firstStartedPulling="2026-03-20 07:08:00.54513067 +0000 UTC m=+1112.804441831" lastFinishedPulling="2026-03-20 07:08:01.034944631 +0000 UTC m=+1113.294255822" 
observedRunningTime="2026-03-20 07:08:01.303066499 +0000 UTC m=+1113.562377680" watchObservedRunningTime="2026-03-20 07:08:01.31280247 +0000 UTC m=+1113.572113641" Mar 20 07:08:01 crc kubenswrapper[5136]: E0320 07:08:01.313563 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5167ea6bbf2199c0e60c90f86e04adb0a013e8245f41a8fd1450e8239f11a9b0\": container with ID starting with 5167ea6bbf2199c0e60c90f86e04adb0a013e8245f41a8fd1450e8239f11a9b0 not found: ID does not exist" containerID="5167ea6bbf2199c0e60c90f86e04adb0a013e8245f41a8fd1450e8239f11a9b0" Mar 20 07:08:01 crc kubenswrapper[5136]: I0320 07:08:01.313609 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5167ea6bbf2199c0e60c90f86e04adb0a013e8245f41a8fd1450e8239f11a9b0"} err="failed to get container status \"5167ea6bbf2199c0e60c90f86e04adb0a013e8245f41a8fd1450e8239f11a9b0\": rpc error: code = NotFound desc = could not find container \"5167ea6bbf2199c0e60c90f86e04adb0a013e8245f41a8fd1450e8239f11a9b0\": container with ID starting with 5167ea6bbf2199c0e60c90f86e04adb0a013e8245f41a8fd1450e8239f11a9b0 not found: ID does not exist" Mar 20 07:08:01 crc kubenswrapper[5136]: I0320 07:08:01.332677 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-hlbhx"] Mar 20 07:08:01 crc kubenswrapper[5136]: I0320 07:08:01.338076 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-hlbhx"] Mar 20 07:08:02 crc kubenswrapper[5136]: I0320 07:08:02.302454 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566508-v874c" event={"ID":"f9cf4346-e624-476e-b04c-43b35e0a83cd","Type":"ContainerStarted","Data":"dd0acbcfa54abd26f2307cdf5e361341926b1d9e084af4898d114e06b8c54d72"} Mar 20 07:08:02 crc kubenswrapper[5136]: I0320 07:08:02.315692 5136 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566508-v874c" podStartSLOduration=1.230053305 podStartE2EDuration="2.31567662s" podCreationTimestamp="2026-03-20 07:08:00 +0000 UTC" firstStartedPulling="2026-03-20 07:08:00.925212094 +0000 UTC m=+1113.184523275" lastFinishedPulling="2026-03-20 07:08:02.010835439 +0000 UTC m=+1114.270146590" observedRunningTime="2026-03-20 07:08:02.313230225 +0000 UTC m=+1114.572541376" watchObservedRunningTime="2026-03-20 07:08:02.31567662 +0000 UTC m=+1114.574987771" Mar 20 07:08:02 crc kubenswrapper[5136]: I0320 07:08:02.405758 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746" path="/var/lib/kubelet/pods/5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746/volumes" Mar 20 07:08:03 crc kubenswrapper[5136]: I0320 07:08:03.308475 5136 generic.go:334] "Generic (PLEG): container finished" podID="f9cf4346-e624-476e-b04c-43b35e0a83cd" containerID="dd0acbcfa54abd26f2307cdf5e361341926b1d9e084af4898d114e06b8c54d72" exitCode=0 Mar 20 07:08:03 crc kubenswrapper[5136]: I0320 07:08:03.308527 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566508-v874c" event={"ID":"f9cf4346-e624-476e-b04c-43b35e0a83cd","Type":"ContainerDied","Data":"dd0acbcfa54abd26f2307cdf5e361341926b1d9e084af4898d114e06b8c54d72"} Mar 20 07:08:04 crc kubenswrapper[5136]: I0320 07:08:04.530862 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566508-v874c" Mar 20 07:08:04 crc kubenswrapper[5136]: I0320 07:08:04.712608 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqrzb\" (UniqueName: \"kubernetes.io/projected/f9cf4346-e624-476e-b04c-43b35e0a83cd-kube-api-access-zqrzb\") pod \"f9cf4346-e624-476e-b04c-43b35e0a83cd\" (UID: \"f9cf4346-e624-476e-b04c-43b35e0a83cd\") " Mar 20 07:08:04 crc kubenswrapper[5136]: I0320 07:08:04.718358 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9cf4346-e624-476e-b04c-43b35e0a83cd-kube-api-access-zqrzb" (OuterVolumeSpecName: "kube-api-access-zqrzb") pod "f9cf4346-e624-476e-b04c-43b35e0a83cd" (UID: "f9cf4346-e624-476e-b04c-43b35e0a83cd"). InnerVolumeSpecName "kube-api-access-zqrzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:08:04 crc kubenswrapper[5136]: I0320 07:08:04.814082 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqrzb\" (UniqueName: \"kubernetes.io/projected/f9cf4346-e624-476e-b04c-43b35e0a83cd-kube-api-access-zqrzb\") on node \"crc\" DevicePath \"\"" Mar 20 07:08:05 crc kubenswrapper[5136]: I0320 07:08:05.322032 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566508-v874c" event={"ID":"f9cf4346-e624-476e-b04c-43b35e0a83cd","Type":"ContainerDied","Data":"a1037a25f726e7e8612ae2b1c969164c9045463f2c539c4db873d7f4119d29ad"} Mar 20 07:08:05 crc kubenswrapper[5136]: I0320 07:08:05.322071 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1037a25f726e7e8612ae2b1c969164c9045463f2c539c4db873d7f4119d29ad" Mar 20 07:08:05 crc kubenswrapper[5136]: I0320 07:08:05.322143 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566508-v874c" Mar 20 07:08:05 crc kubenswrapper[5136]: I0320 07:08:05.379006 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566502-5gzjz"] Mar 20 07:08:05 crc kubenswrapper[5136]: I0320 07:08:05.385418 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566502-5gzjz"] Mar 20 07:08:06 crc kubenswrapper[5136]: I0320 07:08:06.404802 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7239b4f-11f6-4f5c-8d78-c233e33b8a79" path="/var/lib/kubelet/pods/a7239b4f-11f6-4f5c-8d78-c233e33b8a79/volumes" Mar 20 07:08:10 crc kubenswrapper[5136]: I0320 07:08:10.061456 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-w8k22" Mar 20 07:08:10 crc kubenswrapper[5136]: I0320 07:08:10.061724 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-w8k22" Mar 20 07:08:10 crc kubenswrapper[5136]: I0320 07:08:10.088207 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-w8k22" Mar 20 07:08:10 crc kubenswrapper[5136]: I0320 07:08:10.380341 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-w8k22" Mar 20 07:08:15 crc kubenswrapper[5136]: I0320 07:08:15.822432 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:08:15 crc kubenswrapper[5136]: I0320 07:08:15.824053 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" 
podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:08:16 crc kubenswrapper[5136]: I0320 07:08:16.975236 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz"] Mar 20 07:08:16 crc kubenswrapper[5136]: E0320 07:08:16.975519 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746" containerName="registry-server" Mar 20 07:08:16 crc kubenswrapper[5136]: I0320 07:08:16.975534 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746" containerName="registry-server" Mar 20 07:08:16 crc kubenswrapper[5136]: E0320 07:08:16.975550 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9cf4346-e624-476e-b04c-43b35e0a83cd" containerName="oc" Mar 20 07:08:16 crc kubenswrapper[5136]: I0320 07:08:16.975558 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9cf4346-e624-476e-b04c-43b35e0a83cd" containerName="oc" Mar 20 07:08:16 crc kubenswrapper[5136]: I0320 07:08:16.975701 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9cf4346-e624-476e-b04c-43b35e0a83cd" containerName="oc" Mar 20 07:08:16 crc kubenswrapper[5136]: I0320 07:08:16.975717 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746" containerName="registry-server" Mar 20 07:08:16 crc kubenswrapper[5136]: I0320 07:08:16.976656 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz" Mar 20 07:08:16 crc kubenswrapper[5136]: I0320 07:08:16.978829 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-nm92r" Mar 20 07:08:16 crc kubenswrapper[5136]: I0320 07:08:16.989506 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz"] Mar 20 07:08:17 crc kubenswrapper[5136]: I0320 07:08:17.081387 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch5wz\" (UniqueName: \"kubernetes.io/projected/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-kube-api-access-ch5wz\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz\" (UID: \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz" Mar 20 07:08:17 crc kubenswrapper[5136]: I0320 07:08:17.081480 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-util\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz\" (UID: \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz" Mar 20 07:08:17 crc kubenswrapper[5136]: I0320 07:08:17.081506 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-bundle\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz\" (UID: \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz" Mar 20 07:08:17 crc kubenswrapper[5136]: I0320 
07:08:17.183182 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-util\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz\" (UID: \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz" Mar 20 07:08:17 crc kubenswrapper[5136]: I0320 07:08:17.183240 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-bundle\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz\" (UID: \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz" Mar 20 07:08:17 crc kubenswrapper[5136]: I0320 07:08:17.183300 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch5wz\" (UniqueName: \"kubernetes.io/projected/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-kube-api-access-ch5wz\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz\" (UID: \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz" Mar 20 07:08:17 crc kubenswrapper[5136]: I0320 07:08:17.183724 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-bundle\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz\" (UID: \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz" Mar 20 07:08:17 crc kubenswrapper[5136]: I0320 07:08:17.183958 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-util\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz\" (UID: \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz" Mar 20 07:08:17 crc kubenswrapper[5136]: I0320 07:08:17.209862 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch5wz\" (UniqueName: \"kubernetes.io/projected/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-kube-api-access-ch5wz\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz\" (UID: \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz" Mar 20 07:08:17 crc kubenswrapper[5136]: I0320 07:08:17.294545 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz" Mar 20 07:08:17 crc kubenswrapper[5136]: I0320 07:08:17.713982 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz"] Mar 20 07:08:18 crc kubenswrapper[5136]: I0320 07:08:18.419118 5136 generic.go:334] "Generic (PLEG): container finished" podID="6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7" containerID="be947799277d738c82c9a3ce13ea1c74b6510ee0bdc70c2f09934722f7f1c708" exitCode=0 Mar 20 07:08:18 crc kubenswrapper[5136]: I0320 07:08:18.419465 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz" event={"ID":"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7","Type":"ContainerDied","Data":"be947799277d738c82c9a3ce13ea1c74b6510ee0bdc70c2f09934722f7f1c708"} Mar 20 07:08:18 crc kubenswrapper[5136]: I0320 07:08:18.420341 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz" event={"ID":"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7","Type":"ContainerStarted","Data":"1aa8cf30f16c68898a28fa61c71e7a50e7e5107abfa611b214d418fe7fdbe7f3"} Mar 20 07:08:19 crc kubenswrapper[5136]: I0320 07:08:19.428115 5136 generic.go:334] "Generic (PLEG): container finished" podID="6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7" containerID="48752fb79ecfd378cd8c2169459d2c3bfa0b4e11636bc18a30a2688ca61ee6dd" exitCode=0 Mar 20 07:08:19 crc kubenswrapper[5136]: I0320 07:08:19.428284 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz" event={"ID":"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7","Type":"ContainerDied","Data":"48752fb79ecfd378cd8c2169459d2c3bfa0b4e11636bc18a30a2688ca61ee6dd"} Mar 20 07:08:20 crc kubenswrapper[5136]: I0320 07:08:20.435093 5136 generic.go:334] "Generic (PLEG): container finished" podID="6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7" containerID="8e105ac4c123f8aa7db72a80a061a758c4c1bc2c44e0fcc4b2aaadb3a0ef3800" exitCode=0 Mar 20 07:08:20 crc kubenswrapper[5136]: I0320 07:08:20.435132 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz" event={"ID":"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7","Type":"ContainerDied","Data":"8e105ac4c123f8aa7db72a80a061a758c4c1bc2c44e0fcc4b2aaadb3a0ef3800"} Mar 20 07:08:21 crc kubenswrapper[5136]: I0320 07:08:21.710879 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz" Mar 20 07:08:21 crc kubenswrapper[5136]: I0320 07:08:21.745125 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch5wz\" (UniqueName: \"kubernetes.io/projected/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-kube-api-access-ch5wz\") pod \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\" (UID: \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\") " Mar 20 07:08:21 crc kubenswrapper[5136]: I0320 07:08:21.745195 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-bundle\") pod \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\" (UID: \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\") " Mar 20 07:08:21 crc kubenswrapper[5136]: I0320 07:08:21.745265 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-util\") pod \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\" (UID: \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\") " Mar 20 07:08:21 crc kubenswrapper[5136]: I0320 07:08:21.746033 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-bundle" (OuterVolumeSpecName: "bundle") pod "6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7" (UID: "6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:08:21 crc kubenswrapper[5136]: I0320 07:08:21.762052 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-util" (OuterVolumeSpecName: "util") pod "6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7" (UID: "6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:08:21 crc kubenswrapper[5136]: I0320 07:08:21.762285 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-kube-api-access-ch5wz" (OuterVolumeSpecName: "kube-api-access-ch5wz") pod "6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7" (UID: "6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7"). InnerVolumeSpecName "kube-api-access-ch5wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:08:21 crc kubenswrapper[5136]: I0320 07:08:21.846571 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch5wz\" (UniqueName: \"kubernetes.io/projected/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-kube-api-access-ch5wz\") on node \"crc\" DevicePath \"\"" Mar 20 07:08:21 crc kubenswrapper[5136]: I0320 07:08:21.846604 5136 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:08:21 crc kubenswrapper[5136]: I0320 07:08:21.846614 5136 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-util\") on node \"crc\" DevicePath \"\"" Mar 20 07:08:22 crc kubenswrapper[5136]: I0320 07:08:22.452868 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz" event={"ID":"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7","Type":"ContainerDied","Data":"1aa8cf30f16c68898a28fa61c71e7a50e7e5107abfa611b214d418fe7fdbe7f3"} Mar 20 07:08:22 crc kubenswrapper[5136]: I0320 07:08:22.452904 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1aa8cf30f16c68898a28fa61c71e7a50e7e5107abfa611b214d418fe7fdbe7f3" Mar 20 07:08:22 crc kubenswrapper[5136]: I0320 07:08:22.452966 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz" Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.014477 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-b85c4d696-xv6qc"] Mar 20 07:08:29 crc kubenswrapper[5136]: E0320 07:08:29.015660 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7" containerName="extract" Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.015682 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7" containerName="extract" Mar 20 07:08:29 crc kubenswrapper[5136]: E0320 07:08:29.015701 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7" containerName="pull" Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.015713 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7" containerName="pull" Mar 20 07:08:29 crc kubenswrapper[5136]: E0320 07:08:29.015739 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7" containerName="util" Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.015750 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7" containerName="util" Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.015983 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7" containerName="extract" Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.016531 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-xv6qc" Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.020733 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-57h46" Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.039733 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-b85c4d696-xv6qc"] Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.136392 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln82h\" (UniqueName: \"kubernetes.io/projected/eb51f1ec-5289-4291-8334-0149c355adac-kube-api-access-ln82h\") pod \"openstack-operator-controller-init-b85c4d696-xv6qc\" (UID: \"eb51f1ec-5289-4291-8334-0149c355adac\") " pod="openstack-operators/openstack-operator-controller-init-b85c4d696-xv6qc" Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.219090 5136 scope.go:117] "RemoveContainer" containerID="ec7cb3f6c1f148e1e156127b9c7522e3ead66e4d7bc579e04406415291cd9efb" Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.238205 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln82h\" (UniqueName: \"kubernetes.io/projected/eb51f1ec-5289-4291-8334-0149c355adac-kube-api-access-ln82h\") pod \"openstack-operator-controller-init-b85c4d696-xv6qc\" (UID: \"eb51f1ec-5289-4291-8334-0149c355adac\") " pod="openstack-operators/openstack-operator-controller-init-b85c4d696-xv6qc" Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.266569 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln82h\" (UniqueName: \"kubernetes.io/projected/eb51f1ec-5289-4291-8334-0149c355adac-kube-api-access-ln82h\") pod \"openstack-operator-controller-init-b85c4d696-xv6qc\" (UID: \"eb51f1ec-5289-4291-8334-0149c355adac\") " 
pod="openstack-operators/openstack-operator-controller-init-b85c4d696-xv6qc" Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.336719 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-xv6qc" Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.735894 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-b85c4d696-xv6qc"] Mar 20 07:08:30 crc kubenswrapper[5136]: I0320 07:08:30.511843 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-xv6qc" event={"ID":"eb51f1ec-5289-4291-8334-0149c355adac","Type":"ContainerStarted","Data":"afe31b1604b676a82af0e66b45b5d805506271dd9ca6e14b7e97d51f34ce6ed2"} Mar 20 07:08:34 crc kubenswrapper[5136]: I0320 07:08:34.539757 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-xv6qc" event={"ID":"eb51f1ec-5289-4291-8334-0149c355adac","Type":"ContainerStarted","Data":"af77bcd1ac1563ca59462c1ee12868372a65be7aafdf02f4d45b128401820154"} Mar 20 07:08:34 crc kubenswrapper[5136]: I0320 07:08:34.541007 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-xv6qc" Mar 20 07:08:39 crc kubenswrapper[5136]: I0320 07:08:39.340222 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-xv6qc" Mar 20 07:08:39 crc kubenswrapper[5136]: I0320 07:08:39.388063 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-xv6qc" podStartSLOduration=7.676016652 podStartE2EDuration="11.388039969s" podCreationTimestamp="2026-03-20 07:08:28 +0000 UTC" firstStartedPulling="2026-03-20 07:08:29.74401522 +0000 UTC 
m=+1142.003326371" lastFinishedPulling="2026-03-20 07:08:33.456038537 +0000 UTC m=+1145.715349688" observedRunningTime="2026-03-20 07:08:34.573595048 +0000 UTC m=+1146.832906189" watchObservedRunningTime="2026-03-20 07:08:39.388039969 +0000 UTC m=+1151.647351130" Mar 20 07:08:45 crc kubenswrapper[5136]: I0320 07:08:45.822254 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:08:45 crc kubenswrapper[5136]: I0320 07:08:45.822763 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:08:45 crc kubenswrapper[5136]: I0320 07:08:45.822885 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 07:08:45 crc kubenswrapper[5136]: I0320 07:08:45.823987 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f8e515aa640e8c2897bc9d76b24ec080a3948c8f2224026c8645b6359dd2670f"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 07:08:45 crc kubenswrapper[5136]: I0320 07:08:45.824127 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" 
containerID="cri-o://f8e515aa640e8c2897bc9d76b24ec080a3948c8f2224026c8645b6359dd2670f" gracePeriod=600 Mar 20 07:08:46 crc kubenswrapper[5136]: I0320 07:08:46.635152 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="f8e515aa640e8c2897bc9d76b24ec080a3948c8f2224026c8645b6359dd2670f" exitCode=0 Mar 20 07:08:46 crc kubenswrapper[5136]: I0320 07:08:46.635262 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"f8e515aa640e8c2897bc9d76b24ec080a3948c8f2224026c8645b6359dd2670f"} Mar 20 07:08:46 crc kubenswrapper[5136]: I0320 07:08:46.635880 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"e88f4329620c5c7ec6c41ba99712e43215e37853afedf89b0a54491b5a4bfe4f"} Mar 20 07:08:46 crc kubenswrapper[5136]: I0320 07:08:46.635910 5136 scope.go:117] "RemoveContainer" containerID="efc1d8deaa7f1e1d784e8be4e8d258b13fb86298f9c0df94ee4191513f62ba52" Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.787048 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-5lz5s"] Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.788343 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-5lz5s" Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.789688 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-n765f" Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.799577 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-5lz5s"] Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.811095 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-g62fh"] Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.811914 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-g62fh" Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.813451 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-qsfdb" Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.818292 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-nzs5m"] Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.819260 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nzs5m" Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.832273 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-tfqrt" Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.845267 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-g62fh"] Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.863355 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-nzs5m"] Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.878018 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-4zc57"] Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.879000 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-4zc57" Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.880675 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-ldctn" Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.905264 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-4zc57"] Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.920624 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-j7rd5"] Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.921438 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-j7rd5" Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.923330 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-k68ck" Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.925174 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-j7rd5"] Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.932689 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-jqkmw"] Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.933493 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-jqkmw" Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.939160 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-tfmmz" Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.940500 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-jqkmw"] Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.942289 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-cvwqk"] Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.943417 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-cvwqk" Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.946118 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-4g7qk" Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.946827 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgk5j\" (UniqueName: \"kubernetes.io/projected/95dfc6ea-897c-4133-ab1e-cefc81ab0623-kube-api-access-sgk5j\") pod \"cinder-operator-controller-manager-8d58dc466-g62fh\" (UID: \"95dfc6ea-897c-4133-ab1e-cefc81ab0623\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-g62fh" Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.946865 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r999q\" (UniqueName: \"kubernetes.io/projected/0454e048-0e5f-454d-a341-627512f745b9-kube-api-access-r999q\") pod \"designate-operator-controller-manager-588d4d986b-nzs5m\" (UID: \"0454e048-0e5f-454d-a341-627512f745b9\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nzs5m" Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.947385 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvh4t\" (UniqueName: \"kubernetes.io/projected/86f2c200-3fc8-4ff8-abbd-4e9196951c84-kube-api-access-gvh4t\") pod \"barbican-operator-controller-manager-59bc569d95-5lz5s\" (UID: \"86f2c200-3fc8-4ff8-abbd-4e9196951c84\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-5lz5s" Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.949118 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj"] Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 
07:09:17.973511 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.982174 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7crl6" Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.986079 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.022101 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-cvwqk"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.026990 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-9vwxq"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.036311 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.036448 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9vwxq" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.048189 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnhqc\" (UniqueName: \"kubernetes.io/projected/8035ac49-bf5e-4c7a-801a-2e0a9acdbec8-kube-api-access-lnhqc\") pod \"ironic-operator-controller-manager-6f787dddc9-cvwqk\" (UID: \"8035ac49-bf5e-4c7a-801a-2e0a9acdbec8\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-cvwqk" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.048252 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-947qm\" (UniqueName: \"kubernetes.io/projected/d9bea0a5-4e0c-4eec-8c57-465238459ec5-kube-api-access-947qm\") pod \"glance-operator-controller-manager-79df6bcc97-4zc57\" (UID: \"d9bea0a5-4e0c-4eec-8c57-465238459ec5\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-4zc57" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.048288 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx4t2\" (UniqueName: \"kubernetes.io/projected/ce8f650c-1729-4d5d-ae70-6cefed6ebe33-kube-api-access-dx4t2\") pod \"horizon-operator-controller-manager-8464cc45fb-jqkmw\" (UID: \"ce8f650c-1729-4d5d-ae70-6cefed6ebe33\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-jqkmw" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.048344 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvh4t\" (UniqueName: \"kubernetes.io/projected/86f2c200-3fc8-4ff8-abbd-4e9196951c84-kube-api-access-gvh4t\") pod \"barbican-operator-controller-manager-59bc569d95-5lz5s\" (UID: \"86f2c200-3fc8-4ff8-abbd-4e9196951c84\") " 
pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-5lz5s" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.048366 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgk5j\" (UniqueName: \"kubernetes.io/projected/95dfc6ea-897c-4133-ab1e-cefc81ab0623-kube-api-access-sgk5j\") pod \"cinder-operator-controller-manager-8d58dc466-g62fh\" (UID: \"95dfc6ea-897c-4133-ab1e-cefc81ab0623\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-g62fh" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.048384 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r999q\" (UniqueName: \"kubernetes.io/projected/0454e048-0e5f-454d-a341-627512f745b9-kube-api-access-r999q\") pod \"designate-operator-controller-manager-588d4d986b-nzs5m\" (UID: \"0454e048-0e5f-454d-a341-627512f745b9\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nzs5m" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.048420 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb8wb\" (UniqueName: \"kubernetes.io/projected/98ee6d09-7d19-49ff-af63-3f24c4bbf6de-kube-api-access-rb8wb\") pod \"heat-operator-controller-manager-67dd5f86f5-j7rd5\" (UID: \"98ee6d09-7d19-49ff-af63-3f24c4bbf6de\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-j7rd5" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.050448 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-cv5dm" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.054613 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-9vwxq"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.065539 5136 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/manila-operator-controller-manager-55f864c847-wz6kw"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.066300 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-55f864c847-wz6kw" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.069066 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-p75jn" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.085600 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvh4t\" (UniqueName: \"kubernetes.io/projected/86f2c200-3fc8-4ff8-abbd-4e9196951c84-kube-api-access-gvh4t\") pod \"barbican-operator-controller-manager-59bc569d95-5lz5s\" (UID: \"86f2c200-3fc8-4ff8-abbd-4e9196951c84\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-5lz5s" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.089099 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r999q\" (UniqueName: \"kubernetes.io/projected/0454e048-0e5f-454d-a341-627512f745b9-kube-api-access-r999q\") pod \"designate-operator-controller-manager-588d4d986b-nzs5m\" (UID: \"0454e048-0e5f-454d-a341-627512f745b9\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nzs5m" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.097425 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgk5j\" (UniqueName: \"kubernetes.io/projected/95dfc6ea-897c-4133-ab1e-cefc81ab0623-kube-api-access-sgk5j\") pod \"cinder-operator-controller-manager-8d58dc466-g62fh\" (UID: \"95dfc6ea-897c-4133-ab1e-cefc81ab0623\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-g62fh" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.102170 5136 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-w497x"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.103171 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-w497x" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.106660 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-6wwtg" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.111000 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-5lz5s" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.119556 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-wz6kw"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.132289 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-w497x"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.133394 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-g62fh" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.138616 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-8g592"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.139379 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-767865f676-8g592" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.141954 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-fb4jj" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.148564 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nzs5m" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.149114 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drd4c\" (UniqueName: \"kubernetes.io/projected/86ae10c6-6dff-4cac-a399-e03bd4de7134-kube-api-access-drd4c\") pod \"keystone-operator-controller-manager-768b96df4c-9vwxq\" (UID: \"86ae10c6-6dff-4cac-a399-e03bd4de7134\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9vwxq" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.149159 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert\") pod \"infra-operator-controller-manager-7b9c774f96-rpqlj\" (UID: \"fad403b0-ff16-4bfe-a0e3-8f0da431260b\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.149208 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb8wb\" (UniqueName: \"kubernetes.io/projected/98ee6d09-7d19-49ff-af63-3f24c4bbf6de-kube-api-access-rb8wb\") pod \"heat-operator-controller-manager-67dd5f86f5-j7rd5\" (UID: \"98ee6d09-7d19-49ff-af63-3f24c4bbf6de\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-j7rd5" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.149240 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lnhqc\" (UniqueName: \"kubernetes.io/projected/8035ac49-bf5e-4c7a-801a-2e0a9acdbec8-kube-api-access-lnhqc\") pod \"ironic-operator-controller-manager-6f787dddc9-cvwqk\" (UID: \"8035ac49-bf5e-4c7a-801a-2e0a9acdbec8\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-cvwqk" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.149260 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jcbh\" (UniqueName: \"kubernetes.io/projected/fad403b0-ff16-4bfe-a0e3-8f0da431260b-kube-api-access-7jcbh\") pod \"infra-operator-controller-manager-7b9c774f96-rpqlj\" (UID: \"fad403b0-ff16-4bfe-a0e3-8f0da431260b\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.149302 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-947qm\" (UniqueName: \"kubernetes.io/projected/d9bea0a5-4e0c-4eec-8c57-465238459ec5-kube-api-access-947qm\") pod \"glance-operator-controller-manager-79df6bcc97-4zc57\" (UID: \"d9bea0a5-4e0c-4eec-8c57-465238459ec5\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-4zc57" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.150010 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx4t2\" (UniqueName: \"kubernetes.io/projected/ce8f650c-1729-4d5d-ae70-6cefed6ebe33-kube-api-access-dx4t2\") pod \"horizon-operator-controller-manager-8464cc45fb-jqkmw\" (UID: \"ce8f650c-1729-4d5d-ae70-6cefed6ebe33\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-jqkmw" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.156935 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-8g592"] Mar 20 07:09:18 crc kubenswrapper[5136]: 
I0320 07:09:18.162662 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-rdkrz"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.163451 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rdkrz" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.166590 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnhqc\" (UniqueName: \"kubernetes.io/projected/8035ac49-bf5e-4c7a-801a-2e0a9acdbec8-kube-api-access-lnhqc\") pod \"ironic-operator-controller-manager-6f787dddc9-cvwqk\" (UID: \"8035ac49-bf5e-4c7a-801a-2e0a9acdbec8\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-cvwqk" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.166891 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-mhnst" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.167680 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-947qm\" (UniqueName: \"kubernetes.io/projected/d9bea0a5-4e0c-4eec-8c57-465238459ec5-kube-api-access-947qm\") pod \"glance-operator-controller-manager-79df6bcc97-4zc57\" (UID: \"d9bea0a5-4e0c-4eec-8c57-465238459ec5\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-4zc57" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.170268 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb8wb\" (UniqueName: \"kubernetes.io/projected/98ee6d09-7d19-49ff-af63-3f24c4bbf6de-kube-api-access-rb8wb\") pod \"heat-operator-controller-manager-67dd5f86f5-j7rd5\" (UID: \"98ee6d09-7d19-49ff-af63-3f24c4bbf6de\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-j7rd5" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.171660 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx4t2\" (UniqueName: \"kubernetes.io/projected/ce8f650c-1729-4d5d-ae70-6cefed6ebe33-kube-api-access-dx4t2\") pod \"horizon-operator-controller-manager-8464cc45fb-jqkmw\" (UID: \"ce8f650c-1729-4d5d-ae70-6cefed6ebe33\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-jqkmw" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.180473 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-sshvb"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.181383 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-sshvb" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.184753 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-rdkrz"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.187035 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-wpbj4" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.193884 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-sshvb"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.201074 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-4zc57" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.207047 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.211441 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.214418 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-d2gj7" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.215100 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.221281 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-58pk7"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.222346 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5784578c99-58pk7" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.225616 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-7dbrt" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.229887 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.230604 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.232880 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-5frbx" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.243336 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-j7rd5" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.244861 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.251388 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drd4c\" (UniqueName: \"kubernetes.io/projected/86ae10c6-6dff-4cac-a399-e03bd4de7134-kube-api-access-drd4c\") pod \"keystone-operator-controller-manager-768b96df4c-9vwxq\" (UID: \"86ae10c6-6dff-4cac-a399-e03bd4de7134\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9vwxq" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.251432 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert\") pod \"infra-operator-controller-manager-7b9c774f96-rpqlj\" (UID: \"fad403b0-ff16-4bfe-a0e3-8f0da431260b\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.251472 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmfmm\" (UniqueName: \"kubernetes.io/projected/527edb93-1d3a-45f7-a7c9-f9e28fb6f713-kube-api-access-zmfmm\") pod \"placement-operator-controller-manager-5784578c99-58pk7\" (UID: \"527edb93-1d3a-45f7-a7c9-f9e28fb6f713\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-58pk7" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.251494 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh4bw\" (UniqueName: \"kubernetes.io/projected/10cd2a26-beca-4a3b-a791-83cc8cc451ab-kube-api-access-qh4bw\") pod 
\"openstack-baremetal-operator-controller-manager-74c4796899556xf\" (UID: \"10cd2a26-beca-4a3b-a791-83cc8cc451ab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.251515 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899556xf\" (UID: \"10cd2a26-beca-4a3b-a791-83cc8cc451ab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.251532 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwhrd\" (UniqueName: \"kubernetes.io/projected/2f2fc86c-b42c-4fd9-94e6-817ed073035d-kube-api-access-kwhrd\") pod \"neutron-operator-controller-manager-767865f676-8g592\" (UID: \"2f2fc86c-b42c-4fd9-94e6-817ed073035d\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-8g592" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.251553 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jcbh\" (UniqueName: \"kubernetes.io/projected/fad403b0-ff16-4bfe-a0e3-8f0da431260b-kube-api-access-7jcbh\") pod \"infra-operator-controller-manager-7b9c774f96-rpqlj\" (UID: \"fad403b0-ff16-4bfe-a0e3-8f0da431260b\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.251583 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6m2l\" (UniqueName: \"kubernetes.io/projected/9b7da04b-f73c-4838-978d-34e4665f3963-kube-api-access-v6m2l\") pod \"nova-operator-controller-manager-5d488d59fb-rdkrz\" (UID: 
\"9b7da04b-f73c-4838-978d-34e4665f3963\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rdkrz" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.251604 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dxjm\" (UniqueName: \"kubernetes.io/projected/e85f51ac-f1e1-4299-91a6-9b27dcc50967-kube-api-access-8dxjm\") pod \"octavia-operator-controller-manager-5b9f45d989-sshvb\" (UID: \"e85f51ac-f1e1-4299-91a6-9b27dcc50967\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-sshvb" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.251620 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ld55\" (UniqueName: \"kubernetes.io/projected/84fa1009-af1d-42a7-ae4d-fe4fd7cbb6e6-kube-api-access-5ld55\") pod \"mariadb-operator-controller-manager-67ccfc9778-w497x\" (UID: \"84fa1009-af1d-42a7-ae4d-fe4fd7cbb6e6\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-w497x" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.251637 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvqjm\" (UniqueName: \"kubernetes.io/projected/67cd41a3-e91f-4d51-b79a-61d697bbf646-kube-api-access-hvqjm\") pod \"ovn-operator-controller-manager-884679f54-pdmtp\" (UID: \"67cd41a3-e91f-4d51-b79a-61d697bbf646\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.251657 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pwm8\" (UniqueName: \"kubernetes.io/projected/0688d3df-a125-4d57-9699-a87d92b140fa-kube-api-access-4pwm8\") pod \"manila-operator-controller-manager-55f864c847-wz6kw\" (UID: \"0688d3df-a125-4d57-9699-a87d92b140fa\") " 
pod="openstack-operators/manila-operator-controller-manager-55f864c847-wz6kw" Mar 20 07:09:18 crc kubenswrapper[5136]: E0320 07:09:18.251853 5136 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 20 07:09:18 crc kubenswrapper[5136]: E0320 07:09:18.251911 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert podName:fad403b0-ff16-4bfe-a0e3-8f0da431260b nodeName:}" failed. No retries permitted until 2026-03-20 07:09:18.751880281 +0000 UTC m=+1191.011191432 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert") pod "infra-operator-controller-manager-7b9c774f96-rpqlj" (UID: "fad403b0-ff16-4bfe-a0e3-8f0da431260b") : secret "infra-operator-webhook-server-cert" not found Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.254326 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.262790 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-58pk7"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.280750 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-jmsnc"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.281959 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-c674c5965-jmsnc" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.284550 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-b994r" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.291199 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-jqkmw" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.303358 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-jmsnc"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.320169 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-cvwqk" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.321143 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drd4c\" (UniqueName: \"kubernetes.io/projected/86ae10c6-6dff-4cac-a399-e03bd4de7134-kube-api-access-drd4c\") pod \"keystone-operator-controller-manager-768b96df4c-9vwxq\" (UID: \"86ae10c6-6dff-4cac-a399-e03bd4de7134\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9vwxq" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.321255 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jcbh\" (UniqueName: \"kubernetes.io/projected/fad403b0-ff16-4bfe-a0e3-8f0da431260b-kube-api-access-7jcbh\") pod \"infra-operator-controller-manager-7b9c774f96-rpqlj\" (UID: \"fad403b0-ff16-4bfe-a0e3-8f0da431260b\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.352558 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zmfmm\" (UniqueName: \"kubernetes.io/projected/527edb93-1d3a-45f7-a7c9-f9e28fb6f713-kube-api-access-zmfmm\") pod \"placement-operator-controller-manager-5784578c99-58pk7\" (UID: \"527edb93-1d3a-45f7-a7c9-f9e28fb6f713\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-58pk7" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.352615 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh4bw\" (UniqueName: \"kubernetes.io/projected/10cd2a26-beca-4a3b-a791-83cc8cc451ab-kube-api-access-qh4bw\") pod \"openstack-baremetal-operator-controller-manager-74c4796899556xf\" (UID: \"10cd2a26-beca-4a3b-a791-83cc8cc451ab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.352651 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899556xf\" (UID: \"10cd2a26-beca-4a3b-a791-83cc8cc451ab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.352679 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwhrd\" (UniqueName: \"kubernetes.io/projected/2f2fc86c-b42c-4fd9-94e6-817ed073035d-kube-api-access-kwhrd\") pod \"neutron-operator-controller-manager-767865f676-8g592\" (UID: \"2f2fc86c-b42c-4fd9-94e6-817ed073035d\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-8g592" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.352722 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6m2l\" (UniqueName: \"kubernetes.io/projected/9b7da04b-f73c-4838-978d-34e4665f3963-kube-api-access-v6m2l\") pod 
\"nova-operator-controller-manager-5d488d59fb-rdkrz\" (UID: \"9b7da04b-f73c-4838-978d-34e4665f3963\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rdkrz" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.352752 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dxjm\" (UniqueName: \"kubernetes.io/projected/e85f51ac-f1e1-4299-91a6-9b27dcc50967-kube-api-access-8dxjm\") pod \"octavia-operator-controller-manager-5b9f45d989-sshvb\" (UID: \"e85f51ac-f1e1-4299-91a6-9b27dcc50967\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-sshvb" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.352774 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ld55\" (UniqueName: \"kubernetes.io/projected/84fa1009-af1d-42a7-ae4d-fe4fd7cbb6e6-kube-api-access-5ld55\") pod \"mariadb-operator-controller-manager-67ccfc9778-w497x\" (UID: \"84fa1009-af1d-42a7-ae4d-fe4fd7cbb6e6\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-w497x" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.352799 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvqjm\" (UniqueName: \"kubernetes.io/projected/67cd41a3-e91f-4d51-b79a-61d697bbf646-kube-api-access-hvqjm\") pod \"ovn-operator-controller-manager-884679f54-pdmtp\" (UID: \"67cd41a3-e91f-4d51-b79a-61d697bbf646\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.352843 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pwm8\" (UniqueName: \"kubernetes.io/projected/0688d3df-a125-4d57-9699-a87d92b140fa-kube-api-access-4pwm8\") pod \"manila-operator-controller-manager-55f864c847-wz6kw\" (UID: \"0688d3df-a125-4d57-9699-a87d92b140fa\") " 
pod="openstack-operators/manila-operator-controller-manager-55f864c847-wz6kw" Mar 20 07:09:18 crc kubenswrapper[5136]: E0320 07:09:18.353852 5136 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.353876 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-qwtfr"] Mar 20 07:09:18 crc kubenswrapper[5136]: E0320 07:09:18.353910 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert podName:10cd2a26-beca-4a3b-a791-83cc8cc451ab nodeName:}" failed. No retries permitted until 2026-03-20 07:09:18.853886897 +0000 UTC m=+1191.113198048 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert") pod "openstack-baremetal-operator-controller-manager-74c4796899556xf" (UID: "10cd2a26-beca-4a3b-a791-83cc8cc451ab") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.354756 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-qwtfr" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.363785 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-qwtfr"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.366832 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9vwxq" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.384143 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-jf8dj" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.398663 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh4bw\" (UniqueName: \"kubernetes.io/projected/10cd2a26-beca-4a3b-a791-83cc8cc451ab-kube-api-access-qh4bw\") pod \"openstack-baremetal-operator-controller-manager-74c4796899556xf\" (UID: \"10cd2a26-beca-4a3b-a791-83cc8cc451ab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.400344 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmfmm\" (UniqueName: \"kubernetes.io/projected/527edb93-1d3a-45f7-a7c9-f9e28fb6f713-kube-api-access-zmfmm\") pod \"placement-operator-controller-manager-5784578c99-58pk7\" (UID: \"527edb93-1d3a-45f7-a7c9-f9e28fb6f713\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-58pk7" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.400789 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pwm8\" (UniqueName: \"kubernetes.io/projected/0688d3df-a125-4d57-9699-a87d92b140fa-kube-api-access-4pwm8\") pod \"manila-operator-controller-manager-55f864c847-wz6kw\" (UID: \"0688d3df-a125-4d57-9699-a87d92b140fa\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-wz6kw" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.408297 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvqjm\" (UniqueName: \"kubernetes.io/projected/67cd41a3-e91f-4d51-b79a-61d697bbf646-kube-api-access-hvqjm\") pod 
\"ovn-operator-controller-manager-884679f54-pdmtp\" (UID: \"67cd41a3-e91f-4d51-b79a-61d697bbf646\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.455734 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7lx8\" (UniqueName: \"kubernetes.io/projected/489b4c0d-9288-4e00-84ac-23fb05767840-kube-api-access-k7lx8\") pod \"telemetry-operator-controller-manager-d6b694c5-qwtfr\" (UID: \"489b4c0d-9288-4e00-84ac-23fb05767840\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-qwtfr" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.456618 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkhnb\" (UniqueName: \"kubernetes.io/projected/8129ebe9-8537-403e-9c32-835f54b5d878-kube-api-access-kkhnb\") pod \"swift-operator-controller-manager-c674c5965-jmsnc\" (UID: \"8129ebe9-8537-403e-9c32-835f54b5d878\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-jmsnc" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.408757 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ld55\" (UniqueName: \"kubernetes.io/projected/84fa1009-af1d-42a7-ae4d-fe4fd7cbb6e6-kube-api-access-5ld55\") pod \"mariadb-operator-controller-manager-67ccfc9778-w497x\" (UID: \"84fa1009-af1d-42a7-ae4d-fe4fd7cbb6e6\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-w497x" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.458756 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dxjm\" (UniqueName: \"kubernetes.io/projected/e85f51ac-f1e1-4299-91a6-9b27dcc50967-kube-api-access-8dxjm\") pod \"octavia-operator-controller-manager-5b9f45d989-sshvb\" (UID: \"e85f51ac-f1e1-4299-91a6-9b27dcc50967\") " 
pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-sshvb" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.459603 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6m2l\" (UniqueName: \"kubernetes.io/projected/9b7da04b-f73c-4838-978d-34e4665f3963-kube-api-access-v6m2l\") pod \"nova-operator-controller-manager-5d488d59fb-rdkrz\" (UID: \"9b7da04b-f73c-4838-978d-34e4665f3963\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rdkrz" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.461503 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwhrd\" (UniqueName: \"kubernetes.io/projected/2f2fc86c-b42c-4fd9-94e6-817ed073035d-kube-api-access-kwhrd\") pod \"neutron-operator-controller-manager-767865f676-8g592\" (UID: \"2f2fc86c-b42c-4fd9-94e6-817ed073035d\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-8g592" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.465960 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-55f864c847-wz6kw" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.496273 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rdkrz" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.548321 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-sshvb" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.549841 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-w497x" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.563906 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkhnb\" (UniqueName: \"kubernetes.io/projected/8129ebe9-8537-403e-9c32-835f54b5d878-kube-api-access-kkhnb\") pod \"swift-operator-controller-manager-c674c5965-jmsnc\" (UID: \"8129ebe9-8537-403e-9c32-835f54b5d878\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-jmsnc" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.564049 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7lx8\" (UniqueName: \"kubernetes.io/projected/489b4c0d-9288-4e00-84ac-23fb05767840-kube-api-access-k7lx8\") pod \"telemetry-operator-controller-manager-d6b694c5-qwtfr\" (UID: \"489b4c0d-9288-4e00-84ac-23fb05767840\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-qwtfr" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.599485 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.602375 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5784578c99-58pk7" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.606241 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7lx8\" (UniqueName: \"kubernetes.io/projected/489b4c0d-9288-4e00-84ac-23fb05767840-kube-api-access-k7lx8\") pod \"telemetry-operator-controller-manager-d6b694c5-qwtfr\" (UID: \"489b4c0d-9288-4e00-84ac-23fb05767840\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-qwtfr" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.606311 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkhnb\" (UniqueName: \"kubernetes.io/projected/8129ebe9-8537-403e-9c32-835f54b5d878-kube-api-access-kkhnb\") pod \"swift-operator-controller-manager-c674c5965-jmsnc\" (UID: \"8129ebe9-8537-403e-9c32-835f54b5d878\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-jmsnc" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.619538 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-c674c5965-jmsnc" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.622235 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-767865f676-8g592" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.630024 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.632435 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.637360 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-mcxc5" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.635807 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-qwtfr" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.647625 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.664853 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xp6jw"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.665635 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xp6jw" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.674965 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-9669l" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.690048 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xp6jw"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.705155 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.706315 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.711735 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-cldn9" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.712355 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.713660 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.714059 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.730943 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-g62fh"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.735649 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.736523 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.738592 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-bnzr2" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.741302 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.768142 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpztp\" (UniqueName: \"kubernetes.io/projected/547cee69-3d64-49aa-8e95-c19be2bb3089-kube-api-access-wpztp\") pod \"test-operator-controller-manager-5c5cb9c4d7-v4npm\" (UID: \"547cee69-3d64-49aa-8e95-c19be2bb3089\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.768207 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert\") pod \"infra-operator-controller-manager-7b9c774f96-rpqlj\" (UID: \"fad403b0-ff16-4bfe-a0e3-8f0da431260b\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.768337 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qscpw\" (UniqueName: \"kubernetes.io/projected/f50bceb5-4fe7-4eba-a9a2-e40f6c89583a-kube-api-access-qscpw\") pod \"watcher-operator-controller-manager-6c4d75f7f9-xp6jw\" (UID: \"f50bceb5-4fe7-4eba-a9a2-e40f6c89583a\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xp6jw" Mar 20 07:09:18 crc kubenswrapper[5136]: E0320 07:09:18.769394 5136 secret.go:188] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 20 07:09:18 crc kubenswrapper[5136]: E0320 07:09:18.769449 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert podName:fad403b0-ff16-4bfe-a0e3-8f0da431260b nodeName:}" failed. No retries permitted until 2026-03-20 07:09:19.769430596 +0000 UTC m=+1192.028741827 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert") pod "infra-operator-controller-manager-7b9c774f96-rpqlj" (UID: "fad403b0-ff16-4bfe-a0e3-8f0da431260b") : secret "infra-operator-webhook-server-cert" not found Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.797680 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-5lz5s"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.869504 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.869552 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.869595 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh9td\" (UniqueName: \"kubernetes.io/projected/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-kube-api-access-zh9td\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.869633 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpztp\" (UniqueName: \"kubernetes.io/projected/547cee69-3d64-49aa-8e95-c19be2bb3089-kube-api-access-wpztp\") pod \"test-operator-controller-manager-5c5cb9c4d7-v4npm\" (UID: \"547cee69-3d64-49aa-8e95-c19be2bb3089\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.869783 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qscpw\" (UniqueName: \"kubernetes.io/projected/f50bceb5-4fe7-4eba-a9a2-e40f6c89583a-kube-api-access-qscpw\") pod \"watcher-operator-controller-manager-6c4d75f7f9-xp6jw\" (UID: \"f50bceb5-4fe7-4eba-a9a2-e40f6c89583a\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xp6jw" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.869894 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899556xf\" (UID: \"10cd2a26-beca-4a3b-a791-83cc8cc451ab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:09:18 crc kubenswrapper[5136]: E0320 07:09:18.870040 5136 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 
07:09:18 crc kubenswrapper[5136]: E0320 07:09:18.870105 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert podName:10cd2a26-beca-4a3b-a791-83cc8cc451ab nodeName:}" failed. No retries permitted until 2026-03-20 07:09:19.870088591 +0000 UTC m=+1192.129399812 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert") pod "openstack-baremetal-operator-controller-manager-74c4796899556xf" (UID: "10cd2a26-beca-4a3b-a791-83cc8cc451ab") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.870135 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6gx6\" (UniqueName: \"kubernetes.io/projected/3dcb58f9-ad42-41ad-af27-2ca462257e77-kube-api-access-w6gx6\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vlngd\" (UID: \"3dcb58f9-ad42-41ad-af27-2ca462257e77\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.894598 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qscpw\" (UniqueName: \"kubernetes.io/projected/f50bceb5-4fe7-4eba-a9a2-e40f6c89583a-kube-api-access-qscpw\") pod \"watcher-operator-controller-manager-6c4d75f7f9-xp6jw\" (UID: \"f50bceb5-4fe7-4eba-a9a2-e40f6c89583a\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xp6jw" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.898622 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpztp\" (UniqueName: \"kubernetes.io/projected/547cee69-3d64-49aa-8e95-c19be2bb3089-kube-api-access-wpztp\") pod \"test-operator-controller-manager-5c5cb9c4d7-v4npm\" (UID: \"547cee69-3d64-49aa-8e95-c19be2bb3089\") " 
pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.971427 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6gx6\" (UniqueName: \"kubernetes.io/projected/3dcb58f9-ad42-41ad-af27-2ca462257e77-kube-api-access-w6gx6\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vlngd\" (UID: \"3dcb58f9-ad42-41ad-af27-2ca462257e77\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.971514 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.971557 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.971601 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh9td\" (UniqueName: \"kubernetes.io/projected/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-kube-api-access-zh9td\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:18 crc kubenswrapper[5136]: E0320 07:09:18.971911 5136 secret.go:188] Couldn't get secret 
openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 20 07:09:18 crc kubenswrapper[5136]: E0320 07:09:18.972049 5136 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 20 07:09:18 crc kubenswrapper[5136]: E0320 07:09:18.972112 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs podName:9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8 nodeName:}" failed. No retries permitted until 2026-03-20 07:09:19.472090365 +0000 UTC m=+1191.731401716 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-5rlp5" (UID: "9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8") : secret "metrics-server-cert" not found Mar 20 07:09:18 crc kubenswrapper[5136]: E0320 07:09:18.972576 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs podName:9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8 nodeName:}" failed. No retries permitted until 2026-03-20 07:09:19.47255772 +0000 UTC m=+1191.731868961 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-5rlp5" (UID: "9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8") : secret "webhook-server-cert" not found Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.982389 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.986932 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-4zc57"] Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.001083 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh9td\" (UniqueName: \"kubernetes.io/projected/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-kube-api-access-zh9td\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.002161 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6gx6\" (UniqueName: \"kubernetes.io/projected/3dcb58f9-ad42-41ad-af27-2ca462257e77-kube-api-access-w6gx6\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vlngd\" (UID: \"3dcb58f9-ad42-41ad-af27-2ca462257e77\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd" Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.021620 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-g62fh" event={"ID":"95dfc6ea-897c-4133-ab1e-cefc81ab0623","Type":"ContainerStarted","Data":"6d52c7ec9e589fb6228568a2afbe2ccd7a67000a56c9b462d764d82a456bd86f"} Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.022555 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-5lz5s" event={"ID":"86f2c200-3fc8-4ff8-abbd-4e9196951c84","Type":"ContainerStarted","Data":"f46dd0bab3311676d732b89038e462558d3bddc571f95fe1b7f8e1efc179db88"} Mar 20 07:09:19 crc kubenswrapper[5136]: W0320 07:09:19.036741 5136 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9bea0a5_4e0c_4eec_8c57_465238459ec5.slice/crio-ab8515f21f6116d7dd6124ffbeb3219a34d9e5e25e5e02a1c9806e2a0d70141c WatchSource:0}: Error finding container ab8515f21f6116d7dd6124ffbeb3219a34d9e5e25e5e02a1c9806e2a0d70141c: Status 404 returned error can't find the container with id ab8515f21f6116d7dd6124ffbeb3219a34d9e5e25e5e02a1c9806e2a0d70141c Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.069212 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xp6jw" Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.113311 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd" Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.173492 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-jqkmw"] Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.212127 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-j7rd5"] Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.218365 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-nzs5m"] Mar 20 07:09:19 crc kubenswrapper[5136]: W0320 07:09:19.218557 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce8f650c_1729_4d5d_ae70_6cefed6ebe33.slice/crio-d07dd65bf01654bd51264cc43e03760de9872f8704636afcb5ce65dfa70a9b55 WatchSource:0}: Error finding container d07dd65bf01654bd51264cc43e03760de9872f8704636afcb5ce65dfa70a9b55: Status 404 returned error can't find the container with id 
d07dd65bf01654bd51264cc43e03760de9872f8704636afcb5ce65dfa70a9b55 Mar 20 07:09:19 crc kubenswrapper[5136]: W0320 07:09:19.228077 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0454e048_0e5f_454d_a341_627512f745b9.slice/crio-ef8bb80a1c6820998977168380a58092ab5a37b3ad941e68cecae9635e482f99 WatchSource:0}: Error finding container ef8bb80a1c6820998977168380a58092ab5a37b3ad941e68cecae9635e482f99: Status 404 returned error can't find the container with id ef8bb80a1c6820998977168380a58092ab5a37b3ad941e68cecae9635e482f99 Mar 20 07:09:19 crc kubenswrapper[5136]: W0320 07:09:19.229399 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98ee6d09_7d19_49ff_af63_3f24c4bbf6de.slice/crio-36225ad1e966f3fc44ea4633cc0a0962356caf33a7552e98f1bd7429b7f6ce7a WatchSource:0}: Error finding container 36225ad1e966f3fc44ea4633cc0a0962356caf33a7552e98f1bd7429b7f6ce7a: Status 404 returned error can't find the container with id 36225ad1e966f3fc44ea4633cc0a0962356caf33a7552e98f1bd7429b7f6ce7a Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.340295 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-cvwqk"] Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.345182 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-9vwxq"] Mar 20 07:09:19 crc kubenswrapper[5136]: W0320 07:09:19.348846 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86ae10c6_6dff_4cac_a399_e03bd4de7134.slice/crio-c4d293a6d66c93fe0c3bb7c1289850f0445f37440ca7997f2c529a8f6456572f WatchSource:0}: Error finding container c4d293a6d66c93fe0c3bb7c1289850f0445f37440ca7997f2c529a8f6456572f: Status 404 returned error can't find the 
container with id c4d293a6d66c93fe0c3bb7c1289850f0445f37440ca7997f2c529a8f6456572f Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.483521 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.483586 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.484022 5136 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.484085 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs podName:9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8 nodeName:}" failed. No retries permitted until 2026-03-20 07:09:20.48406829 +0000 UTC m=+1192.743379441 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-5rlp5" (UID: "9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8") : secret "webhook-server-cert" not found Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.484653 5136 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.484698 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs podName:9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8 nodeName:}" failed. No retries permitted until 2026-03-20 07:09:20.484683839 +0000 UTC m=+1192.743994990 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-5rlp5" (UID: "9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8") : secret "metrics-server-cert" not found Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.528782 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-sshvb"] Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.544094 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-w497x"] Mar 20 07:09:19 crc kubenswrapper[5136]: W0320 07:09:19.549176 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0688d3df_a125_4d57_9699_a87d92b140fa.slice/crio-b22f81c516d49e0d340aa17e982307efaf108e8a90bae63a08ef8acfe23ef40c WatchSource:0}: Error finding container b22f81c516d49e0d340aa17e982307efaf108e8a90bae63a08ef8acfe23ef40c: Status 404 returned error can't find the 
container with id b22f81c516d49e0d340aa17e982307efaf108e8a90bae63a08ef8acfe23ef40c Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.553997 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-rdkrz"] Mar 20 07:09:19 crc kubenswrapper[5136]: W0320 07:09:19.554608 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b7da04b_f73c_4838_978d_34e4665f3963.slice/crio-73e8c644d4502c60cd2d2fd70f7407683cec7107da3f23d3e0be01e0c5e0e97e WatchSource:0}: Error finding container 73e8c644d4502c60cd2d2fd70f7407683cec7107da3f23d3e0be01e0c5e0e97e: Status 404 returned error can't find the container with id 73e8c644d4502c60cd2d2fd70f7407683cec7107da3f23d3e0be01e0c5e0e97e Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.561261 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-wz6kw"] Mar 20 07:09:19 crc kubenswrapper[5136]: W0320 07:09:19.564177 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8129ebe9_8537_403e_9c32_835f54b5d878.slice/crio-25ac1d1306ae55263e964b61d8363f6d59ea0ec325325ae78ed8235c29a8d8e3 WatchSource:0}: Error finding container 25ac1d1306ae55263e964b61d8363f6d59ea0ec325325ae78ed8235c29a8d8e3: Status 404 returned error can't find the container with id 25ac1d1306ae55263e964b61d8363f6d59ea0ec325325ae78ed8235c29a8d8e3 Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.571506 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-jmsnc"] Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.650293 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-qwtfr"] Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 
07:09:19.659740 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-58pk7"] Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.666181 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-8g592"] Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.671838 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp"] Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.685529 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:c8743a6661d118b0e5ba3eb110643358a8a3237dc75984a8f9829880b55a1622,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zmfmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5784578c99-58pk7_openstack-operators(527edb93-1d3a-45f7-a7c9-f9e28fb6f713): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.686227 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:bef93f71d3b42a72d8b96c69bdb4db4b8bd797c5093a0a719443d7a5c9aaab55,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hvqjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-884679f54-pdmtp_openstack-operators(67cd41a3-e91f-4d51-b79a-61d697bbf646): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.686285 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:526f9d4965431e1a5e4f8c3224bcee3f636a3108a5e0767296a994c2a517404a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kwhrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-767865f676-8g592_openstack-operators(2f2fc86c-b42c-4fd9-94e6-817ed073035d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.686934 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5784578c99-58pk7" podUID="527edb93-1d3a-45f7-a7c9-f9e28fb6f713" Mar 20 07:09:19 crc 
kubenswrapper[5136]: E0320 07:09:19.687473 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-767865f676-8g592" podUID="2f2fc86c-b42c-4fd9-94e6-817ed073035d" Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.687530 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp" podUID="67cd41a3-e91f-4d51-b79a-61d697bbf646" Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.782559 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm"] Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.787709 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd"] Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.791200 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert\") pod \"infra-operator-controller-manager-7b9c774f96-rpqlj\" (UID: \"fad403b0-ff16-4bfe-a0e3-8f0da431260b\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.791396 5136 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.791446 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert podName:fad403b0-ff16-4bfe-a0e3-8f0da431260b nodeName:}" failed. 
No retries permitted until 2026-03-20 07:09:21.791431497 +0000 UTC m=+1194.050742648 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert") pod "infra-operator-controller-manager-7b9c774f96-rpqlj" (UID: "fad403b0-ff16-4bfe-a0e3-8f0da431260b") : secret "infra-operator-webhook-server-cert" not found Mar 20 07:09:19 crc kubenswrapper[5136]: W0320 07:09:19.794983 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod547cee69_3d64_49aa_8e95_c19be2bb3089.slice/crio-7aa523b629b8a63c8ed8a40bc8168e2a82b7e143c97893f260f8bcdb6413a45f WatchSource:0}: Error finding container 7aa523b629b8a63c8ed8a40bc8168e2a82b7e143c97893f260f8bcdb6413a45f: Status 404 returned error can't find the container with id 7aa523b629b8a63c8ed8a40bc8168e2a82b7e143c97893f260f8bcdb6413a45f Mar 20 07:09:19 crc kubenswrapper[5136]: W0320 07:09:19.798235 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcb58f9_ad42_41ad_af27_2ca462257e77.slice/crio-784502a1365b3491c92996e59008e0ae2b5ad83461199e657972bc723b7b6312 WatchSource:0}: Error finding container 784502a1365b3491c92996e59008e0ae2b5ad83461199e657972bc723b7b6312: Status 404 returned error can't find the container with id 784502a1365b3491c92996e59008e0ae2b5ad83461199e657972bc723b7b6312 Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.799542 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wpztp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5c5cb9c4d7-v4npm_openstack-operators(547cee69-3d64-49aa-8e95-c19be2bb3089): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.800743 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm" podUID="547cee69-3d64-49aa-8e95-c19be2bb3089" Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.800915 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w6gx6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-vlngd_openstack-operators(3dcb58f9-ad42-41ad-af27-2ca462257e77): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.802085 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd" podUID="3dcb58f9-ad42-41ad-af27-2ca462257e77" Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.825430 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xp6jw"] Mar 20 07:09:19 crc kubenswrapper[5136]: W0320 07:09:19.837094 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf50bceb5_4fe7_4eba_a9a2_e40f6c89583a.slice/crio-9f3d36e401e6b3772631ef82890e5763e9271c094dd0958793705c7fb73d1918 WatchSource:0}: Error finding container 9f3d36e401e6b3772631ef82890e5763e9271c094dd0958793705c7fb73d1918: Status 404 returned error can't find the container with id 9f3d36e401e6b3772631ef82890e5763e9271c094dd0958793705c7fb73d1918 Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.892345 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899556xf\" (UID: \"10cd2a26-beca-4a3b-a791-83cc8cc451ab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.892535 5136 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.892623 5136 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert podName:10cd2a26-beca-4a3b-a791-83cc8cc451ab nodeName:}" failed. No retries permitted until 2026-03-20 07:09:21.892605346 +0000 UTC m=+1194.151916497 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert") pod "openstack-baremetal-operator-controller-manager-74c4796899556xf" (UID: "10cd2a26-beca-4a3b-a791-83cc8cc451ab") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.039475 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rdkrz" event={"ID":"9b7da04b-f73c-4838-978d-34e4665f3963","Type":"ContainerStarted","Data":"73e8c644d4502c60cd2d2fd70f7407683cec7107da3f23d3e0be01e0c5e0e97e"} Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.040796 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-jqkmw" event={"ID":"ce8f650c-1729-4d5d-ae70-6cefed6ebe33","Type":"ContainerStarted","Data":"d07dd65bf01654bd51264cc43e03760de9872f8704636afcb5ce65dfa70a9b55"} Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.041899 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-55f864c847-wz6kw" event={"ID":"0688d3df-a125-4d57-9699-a87d92b140fa","Type":"ContainerStarted","Data":"b22f81c516d49e0d340aa17e982307efaf108e8a90bae63a08ef8acfe23ef40c"} Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.047568 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-j7rd5" event={"ID":"98ee6d09-7d19-49ff-af63-3f24c4bbf6de","Type":"ContainerStarted","Data":"36225ad1e966f3fc44ea4633cc0a0962356caf33a7552e98f1bd7429b7f6ce7a"} Mar 20 07:09:20 crc kubenswrapper[5136]: 
I0320 07:09:20.056293 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-c674c5965-jmsnc" event={"ID":"8129ebe9-8537-403e-9c32-835f54b5d878","Type":"ContainerStarted","Data":"25ac1d1306ae55263e964b61d8363f6d59ea0ec325325ae78ed8235c29a8d8e3"} Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.058643 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nzs5m" event={"ID":"0454e048-0e5f-454d-a341-627512f745b9","Type":"ContainerStarted","Data":"ef8bb80a1c6820998977168380a58092ab5a37b3ad941e68cecae9635e482f99"} Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.059688 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xp6jw" event={"ID":"f50bceb5-4fe7-4eba-a9a2-e40f6c89583a","Type":"ContainerStarted","Data":"9f3d36e401e6b3772631ef82890e5763e9271c094dd0958793705c7fb73d1918"} Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.070095 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9vwxq" event={"ID":"86ae10c6-6dff-4cac-a399-e03bd4de7134","Type":"ContainerStarted","Data":"c4d293a6d66c93fe0c3bb7c1289850f0445f37440ca7997f2c529a8f6456572f"} Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.077917 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm" event={"ID":"547cee69-3d64-49aa-8e95-c19be2bb3089","Type":"ContainerStarted","Data":"7aa523b629b8a63c8ed8a40bc8168e2a82b7e143c97893f260f8bcdb6413a45f"} Mar 20 07:09:20 crc kubenswrapper[5136]: E0320 07:09:20.079623 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42\\\"\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm" podUID="547cee69-3d64-49aa-8e95-c19be2bb3089" Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.087174 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-4zc57" event={"ID":"d9bea0a5-4e0c-4eec-8c57-465238459ec5","Type":"ContainerStarted","Data":"ab8515f21f6116d7dd6124ffbeb3219a34d9e5e25e5e02a1c9806e2a0d70141c"} Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.108622 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-cvwqk" event={"ID":"8035ac49-bf5e-4c7a-801a-2e0a9acdbec8","Type":"ContainerStarted","Data":"1de07cce7bd6f56ba57d6a22bd7ec9200c1b500006113e31e76041a1b80633ea"} Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.124045 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-qwtfr" event={"ID":"489b4c0d-9288-4e00-84ac-23fb05767840","Type":"ContainerStarted","Data":"e596847afc4a0bb34dac621dbd0fd51d833084f6850522532affbdac0f50afd1"} Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.125501 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-767865f676-8g592" event={"ID":"2f2fc86c-b42c-4fd9-94e6-817ed073035d","Type":"ContainerStarted","Data":"69a4f84916dee0993277b689b1450283bb636bf87cd6ef179042c0741f1d9dfe"} Mar 20 07:09:20 crc kubenswrapper[5136]: E0320 07:09:20.127153 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:526f9d4965431e1a5e4f8c3224bcee3f636a3108a5e0767296a994c2a517404a\\\"\"" 
pod="openstack-operators/neutron-operator-controller-manager-767865f676-8g592" podUID="2f2fc86c-b42c-4fd9-94e6-817ed073035d" Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.128393 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd" event={"ID":"3dcb58f9-ad42-41ad-af27-2ca462257e77","Type":"ContainerStarted","Data":"784502a1365b3491c92996e59008e0ae2b5ad83461199e657972bc723b7b6312"} Mar 20 07:09:20 crc kubenswrapper[5136]: E0320 07:09:20.129042 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd" podUID="3dcb58f9-ad42-41ad-af27-2ca462257e77" Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.129924 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5784578c99-58pk7" event={"ID":"527edb93-1d3a-45f7-a7c9-f9e28fb6f713","Type":"ContainerStarted","Data":"d1fbfd7b424fa4253dc3a511a6db7e171f8b1c0d710bf2f8760d06b44636bd6d"} Mar 20 07:09:20 crc kubenswrapper[5136]: E0320 07:09:20.130964 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:c8743a6661d118b0e5ba3eb110643358a8a3237dc75984a8f9829880b55a1622\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5784578c99-58pk7" podUID="527edb93-1d3a-45f7-a7c9-f9e28fb6f713" Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.131350 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-w497x" 
event={"ID":"84fa1009-af1d-42a7-ae4d-fe4fd7cbb6e6","Type":"ContainerStarted","Data":"8ab2bfe997556781d511317ef3f855d329a1b5c1f3b6c7164c6129c5870230ff"} Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.133066 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp" event={"ID":"67cd41a3-e91f-4d51-b79a-61d697bbf646","Type":"ContainerStarted","Data":"e18dc0069f153c8d66784f7c817fe581a8452467910123083a46b67fef56dbdf"} Mar 20 07:09:20 crc kubenswrapper[5136]: E0320 07:09:20.134233 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:bef93f71d3b42a72d8b96c69bdb4db4b8bd797c5093a0a719443d7a5c9aaab55\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp" podUID="67cd41a3-e91f-4d51-b79a-61d697bbf646" Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.136133 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-sshvb" event={"ID":"e85f51ac-f1e1-4299-91a6-9b27dcc50967","Type":"ContainerStarted","Data":"54b23b25710de9bef8e5102ed9ee2c52b2dcfdeabcb64dcb89e1d1734e62f220"} Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.503881 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.503935 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:20 crc kubenswrapper[5136]: E0320 07:09:20.505181 5136 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 20 07:09:20 crc kubenswrapper[5136]: E0320 07:09:20.505228 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs podName:9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8 nodeName:}" failed. No retries permitted until 2026-03-20 07:09:22.505214276 +0000 UTC m=+1194.764525427 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-5rlp5" (UID: "9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8") : secret "webhook-server-cert" not found Mar 20 07:09:20 crc kubenswrapper[5136]: E0320 07:09:20.505543 5136 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 20 07:09:20 crc kubenswrapper[5136]: E0320 07:09:20.505570 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs podName:9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8 nodeName:}" failed. No retries permitted until 2026-03-20 07:09:22.505563535 +0000 UTC m=+1194.764874686 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-5rlp5" (UID: "9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8") : secret "metrics-server-cert" not found Mar 20 07:09:21 crc kubenswrapper[5136]: E0320 07:09:21.149180 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd" podUID="3dcb58f9-ad42-41ad-af27-2ca462257e77" Mar 20 07:09:21 crc kubenswrapper[5136]: E0320 07:09:21.149228 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:c8743a6661d118b0e5ba3eb110643358a8a3237dc75984a8f9829880b55a1622\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5784578c99-58pk7" podUID="527edb93-1d3a-45f7-a7c9-f9e28fb6f713" Mar 20 07:09:21 crc kubenswrapper[5136]: E0320 07:09:21.149499 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42\\\"\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm" podUID="547cee69-3d64-49aa-8e95-c19be2bb3089" Mar 20 07:09:21 crc kubenswrapper[5136]: E0320 07:09:21.149552 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:bef93f71d3b42a72d8b96c69bdb4db4b8bd797c5093a0a719443d7a5c9aaab55\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp" podUID="67cd41a3-e91f-4d51-b79a-61d697bbf646" Mar 20 07:09:21 crc kubenswrapper[5136]: E0320 07:09:21.149594 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:526f9d4965431e1a5e4f8c3224bcee3f636a3108a5e0767296a994c2a517404a\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-767865f676-8g592" podUID="2f2fc86c-b42c-4fd9-94e6-817ed073035d" Mar 20 07:09:21 crc kubenswrapper[5136]: I0320 07:09:21.826533 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert\") pod \"infra-operator-controller-manager-7b9c774f96-rpqlj\" (UID: \"fad403b0-ff16-4bfe-a0e3-8f0da431260b\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" Mar 20 07:09:21 crc kubenswrapper[5136]: E0320 07:09:21.826732 5136 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 20 07:09:21 crc kubenswrapper[5136]: E0320 07:09:21.826801 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert podName:fad403b0-ff16-4bfe-a0e3-8f0da431260b nodeName:}" failed. No retries permitted until 2026-03-20 07:09:25.826781955 +0000 UTC m=+1198.086093106 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert") pod "infra-operator-controller-manager-7b9c774f96-rpqlj" (UID: "fad403b0-ff16-4bfe-a0e3-8f0da431260b") : secret "infra-operator-webhook-server-cert" not found Mar 20 07:09:21 crc kubenswrapper[5136]: I0320 07:09:21.927474 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899556xf\" (UID: \"10cd2a26-beca-4a3b-a791-83cc8cc451ab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:09:21 crc kubenswrapper[5136]: E0320 07:09:21.927655 5136 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 07:09:21 crc kubenswrapper[5136]: E0320 07:09:21.927753 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert podName:10cd2a26-beca-4a3b-a791-83cc8cc451ab nodeName:}" failed. No retries permitted until 2026-03-20 07:09:25.927738499 +0000 UTC m=+1198.187049730 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert") pod "openstack-baremetal-operator-controller-manager-74c4796899556xf" (UID: "10cd2a26-beca-4a3b-a791-83cc8cc451ab") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 07:09:22 crc kubenswrapper[5136]: I0320 07:09:22.535375 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:22 crc kubenswrapper[5136]: I0320 07:09:22.536297 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:22 crc kubenswrapper[5136]: E0320 07:09:22.536036 5136 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 20 07:09:22 crc kubenswrapper[5136]: E0320 07:09:22.536419 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs podName:9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8 nodeName:}" failed. No retries permitted until 2026-03-20 07:09:26.536399498 +0000 UTC m=+1198.795710649 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-5rlp5" (UID: "9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8") : secret "webhook-server-cert" not found Mar 20 07:09:22 crc kubenswrapper[5136]: E0320 07:09:22.536442 5136 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 20 07:09:22 crc kubenswrapper[5136]: E0320 07:09:22.536494 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs podName:9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8 nodeName:}" failed. No retries permitted until 2026-03-20 07:09:26.53647727 +0000 UTC m=+1198.795788421 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-5rlp5" (UID: "9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8") : secret "metrics-server-cert" not found Mar 20 07:09:25 crc kubenswrapper[5136]: I0320 07:09:25.879940 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert\") pod \"infra-operator-controller-manager-7b9c774f96-rpqlj\" (UID: \"fad403b0-ff16-4bfe-a0e3-8f0da431260b\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" Mar 20 07:09:25 crc kubenswrapper[5136]: E0320 07:09:25.880139 5136 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 20 07:09:25 crc kubenswrapper[5136]: E0320 07:09:25.880424 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert 
podName:fad403b0-ff16-4bfe-a0e3-8f0da431260b nodeName:}" failed. No retries permitted until 2026-03-20 07:09:33.880407446 +0000 UTC m=+1206.139718597 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert") pod "infra-operator-controller-manager-7b9c774f96-rpqlj" (UID: "fad403b0-ff16-4bfe-a0e3-8f0da431260b") : secret "infra-operator-webhook-server-cert" not found Mar 20 07:09:25 crc kubenswrapper[5136]: I0320 07:09:25.981849 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899556xf\" (UID: \"10cd2a26-beca-4a3b-a791-83cc8cc451ab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:09:25 crc kubenswrapper[5136]: E0320 07:09:25.982038 5136 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 07:09:25 crc kubenswrapper[5136]: E0320 07:09:25.982126 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert podName:10cd2a26-beca-4a3b-a791-83cc8cc451ab nodeName:}" failed. No retries permitted until 2026-03-20 07:09:33.982106622 +0000 UTC m=+1206.241417773 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert") pod "openstack-baremetal-operator-controller-manager-74c4796899556xf" (UID: "10cd2a26-beca-4a3b-a791-83cc8cc451ab") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 07:09:26 crc kubenswrapper[5136]: I0320 07:09:26.600559 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:26 crc kubenswrapper[5136]: I0320 07:09:26.600637 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:26 crc kubenswrapper[5136]: E0320 07:09:26.600757 5136 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 20 07:09:26 crc kubenswrapper[5136]: E0320 07:09:26.600791 5136 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 20 07:09:26 crc kubenswrapper[5136]: E0320 07:09:26.600828 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs podName:9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8 nodeName:}" failed. No retries permitted until 2026-03-20 07:09:34.600795445 +0000 UTC m=+1206.860106596 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-5rlp5" (UID: "9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8") : secret "webhook-server-cert" not found Mar 20 07:09:26 crc kubenswrapper[5136]: E0320 07:09:26.600845 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs podName:9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8 nodeName:}" failed. No retries permitted until 2026-03-20 07:09:34.600835576 +0000 UTC m=+1206.860146727 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-5rlp5" (UID: "9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8") : secret "metrics-server-cert" not found Mar 20 07:09:33 crc kubenswrapper[5136]: I0320 07:09:33.243544 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-g62fh" event={"ID":"95dfc6ea-897c-4133-ab1e-cefc81ab0623","Type":"ContainerStarted","Data":"7e7cb354942cada64e9d21bbf6a43976fbbd08570a40df72e9e9e4016460cc0c"} Mar 20 07:09:33 crc kubenswrapper[5136]: I0320 07:09:33.244127 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-g62fh" Mar 20 07:09:33 crc kubenswrapper[5136]: I0320 07:09:33.268264 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-g62fh" podStartSLOduration=2.285815806 podStartE2EDuration="16.268244697s" podCreationTimestamp="2026-03-20 07:09:17 +0000 UTC" firstStartedPulling="2026-03-20 07:09:18.839055189 +0000 UTC m=+1191.098366340" lastFinishedPulling="2026-03-20 07:09:32.82148408 +0000 UTC 
m=+1205.080795231" observedRunningTime="2026-03-20 07:09:33.263734839 +0000 UTC m=+1205.523045990" watchObservedRunningTime="2026-03-20 07:09:33.268244697 +0000 UTC m=+1205.527555848" Mar 20 07:09:33 crc kubenswrapper[5136]: I0320 07:09:33.271395 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-w497x" event={"ID":"84fa1009-af1d-42a7-ae4d-fe4fd7cbb6e6","Type":"ContainerStarted","Data":"a6a388a64ae4dc140d91d31ee9ed9a06b37ae39f77a009e66dea771076c17470"} Mar 20 07:09:33 crc kubenswrapper[5136]: I0320 07:09:33.271450 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-w497x" Mar 20 07:09:33 crc kubenswrapper[5136]: I0320 07:09:33.275203 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nzs5m" event={"ID":"0454e048-0e5f-454d-a341-627512f745b9","Type":"ContainerStarted","Data":"063f1a6c6d535af49bc5857c8228402ce866d32f2fe6945410ffe65a1a440302"} Mar 20 07:09:33 crc kubenswrapper[5136]: I0320 07:09:33.275611 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nzs5m" Mar 20 07:09:33 crc kubenswrapper[5136]: I0320 07:09:33.290952 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-w497x" podStartSLOduration=3.025117728 podStartE2EDuration="16.290933475s" podCreationTimestamp="2026-03-20 07:09:17 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.554928471 +0000 UTC m=+1191.814239622" lastFinishedPulling="2026-03-20 07:09:32.820744218 +0000 UTC m=+1205.080055369" observedRunningTime="2026-03-20 07:09:33.286336245 +0000 UTC m=+1205.545647416" watchObservedRunningTime="2026-03-20 07:09:33.290933475 +0000 UTC m=+1205.550244626" Mar 20 07:09:33 crc 
kubenswrapper[5136]: I0320 07:09:33.308535 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nzs5m" podStartSLOduration=2.719308828 podStartE2EDuration="16.308518708s" podCreationTimestamp="2026-03-20 07:09:17 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.231648471 +0000 UTC m=+1191.490959622" lastFinishedPulling="2026-03-20 07:09:32.820858351 +0000 UTC m=+1205.080169502" observedRunningTime="2026-03-20 07:09:33.305224138 +0000 UTC m=+1205.564535299" watchObservedRunningTime="2026-03-20 07:09:33.308518708 +0000 UTC m=+1205.567829859" Mar 20 07:09:33 crc kubenswrapper[5136]: I0320 07:09:33.915317 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert\") pod \"infra-operator-controller-manager-7b9c774f96-rpqlj\" (UID: \"fad403b0-ff16-4bfe-a0e3-8f0da431260b\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" Mar 20 07:09:33 crc kubenswrapper[5136]: E0320 07:09:33.915547 5136 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 20 07:09:33 crc kubenswrapper[5136]: E0320 07:09:33.915636 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert podName:fad403b0-ff16-4bfe-a0e3-8f0da431260b nodeName:}" failed. No retries permitted until 2026-03-20 07:09:49.915615029 +0000 UTC m=+1222.174926230 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert") pod "infra-operator-controller-manager-7b9c774f96-rpqlj" (UID: "fad403b0-ff16-4bfe-a0e3-8f0da431260b") : secret "infra-operator-webhook-server-cert" not found Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.020862 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899556xf\" (UID: \"10cd2a26-beca-4a3b-a791-83cc8cc451ab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:09:34 crc kubenswrapper[5136]: E0320 07:09:34.021044 5136 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 07:09:34 crc kubenswrapper[5136]: E0320 07:09:34.021123 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert podName:10cd2a26-beca-4a3b-a791-83cc8cc451ab nodeName:}" failed. No retries permitted until 2026-03-20 07:09:50.021101161 +0000 UTC m=+1222.280412312 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert") pod "openstack-baremetal-operator-controller-manager-74c4796899556xf" (UID: "10cd2a26-beca-4a3b-a791-83cc8cc451ab") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.292382 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9vwxq" event={"ID":"86ae10c6-6dff-4cac-a399-e03bd4de7134","Type":"ContainerStarted","Data":"842c003c7f7dc315dd86d99ba05f17e78b2dde6ec9264b74ab4d30b3f5643304"} Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.292753 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9vwxq" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.298404 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-j7rd5" event={"ID":"98ee6d09-7d19-49ff-af63-3f24c4bbf6de","Type":"ContainerStarted","Data":"0775020ccbb5ad709406226ca37fc3cbe4d6f1d213a64c9c483d9e66951e4ce1"} Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.298493 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-j7rd5" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.299919 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-c674c5965-jmsnc" event={"ID":"8129ebe9-8537-403e-9c32-835f54b5d878","Type":"ContainerStarted","Data":"3375eef257c6e9c649cc18879ad127be29f46187af3180adb95fb0f7e23faf5f"} Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.300249 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-c674c5965-jmsnc" Mar 20 
07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.303523 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-jqkmw" event={"ID":"ce8f650c-1729-4d5d-ae70-6cefed6ebe33","Type":"ContainerStarted","Data":"c9227d9cc0f326a24aa8a02f9e68c0ceccbe64bf2944b0107c9f7ef44f8f0e24"} Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.303671 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-jqkmw" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.311612 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-4zc57" event={"ID":"d9bea0a5-4e0c-4eec-8c57-465238459ec5","Type":"ContainerStarted","Data":"2436205d12b77089f4817d2475f2a18f2f74134b7ef87932421e011b548c9233"} Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.311737 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-4zc57" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.316408 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9vwxq" podStartSLOduration=3.845727647 podStartE2EDuration="17.3163912s" podCreationTimestamp="2026-03-20 07:09:17 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.350688533 +0000 UTC m=+1191.609999684" lastFinishedPulling="2026-03-20 07:09:32.821352086 +0000 UTC m=+1205.080663237" observedRunningTime="2026-03-20 07:09:34.311794821 +0000 UTC m=+1206.571105972" watchObservedRunningTime="2026-03-20 07:09:34.3163912 +0000 UTC m=+1206.575702351" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.324964 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-cvwqk" 
event={"ID":"8035ac49-bf5e-4c7a-801a-2e0a9acdbec8","Type":"ContainerStarted","Data":"cd04d3f98382b33d52bbf480d7d633e36646d8e359cb3c02ddd1a559019b8567"} Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.325076 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-cvwqk" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.334039 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-55f864c847-wz6kw" event={"ID":"0688d3df-a125-4d57-9699-a87d92b140fa","Type":"ContainerStarted","Data":"53b4d7f8b56ae02327813017827bb357ec04040f06a42e0ec88eb67a6a44acf7"} Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.334183 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-55f864c847-wz6kw" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.337635 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-4zc57" podStartSLOduration=3.53706048 podStartE2EDuration="17.337618564s" podCreationTimestamp="2026-03-20 07:09:17 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.060972112 +0000 UTC m=+1191.320283263" lastFinishedPulling="2026-03-20 07:09:32.861530196 +0000 UTC m=+1205.120841347" observedRunningTime="2026-03-20 07:09:34.332225841 +0000 UTC m=+1206.591536992" watchObservedRunningTime="2026-03-20 07:09:34.337618564 +0000 UTC m=+1206.596929715" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.338009 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-5lz5s" event={"ID":"86f2c200-3fc8-4ff8-abbd-4e9196951c84","Type":"ContainerStarted","Data":"c814c7282ac1caf858e6667c6be6d5178233fb8bd64b230ef67ffabfe77c1149"} Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.338150 5136 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-5lz5s" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.342707 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rdkrz" event={"ID":"9b7da04b-f73c-4838-978d-34e4665f3963","Type":"ContainerStarted","Data":"a8fc3251644dfe3907082bf6f4dfa2b2b6a45c56ce03d184ebadb69d09515a34"} Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.342766 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rdkrz" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.351042 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-sshvb" event={"ID":"e85f51ac-f1e1-4299-91a6-9b27dcc50967","Type":"ContainerStarted","Data":"621eb64767f303013aaf2fcb2f1b552cdc1de0ef6b369126ad639ee163c6da8c"} Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.351172 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-sshvb" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.356239 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-qwtfr" event={"ID":"489b4c0d-9288-4e00-84ac-23fb05767840","Type":"ContainerStarted","Data":"59ffce7312913677bb45c8b599d43c49d6caf11830634af95385c1f867b164ec"} Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.356368 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-qwtfr" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.362881 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xp6jw" event={"ID":"f50bceb5-4fe7-4eba-a9a2-e40f6c89583a","Type":"ContainerStarted","Data":"b548c05a98e376989922a707ba3ff580ddacc5bb99dd7352a13cdd8b7a9fb831"} Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.362912 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xp6jw" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.366574 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-c674c5965-jmsnc" podStartSLOduration=3.116876955 podStartE2EDuration="16.366565022s" podCreationTimestamp="2026-03-20 07:09:18 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.571739851 +0000 UTC m=+1191.831051002" lastFinishedPulling="2026-03-20 07:09:32.821427918 +0000 UTC m=+1205.080739069" observedRunningTime="2026-03-20 07:09:34.363708135 +0000 UTC m=+1206.623019286" watchObservedRunningTime="2026-03-20 07:09:34.366565022 +0000 UTC m=+1206.625876163" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.388506 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-j7rd5" podStartSLOduration=3.757408387 podStartE2EDuration="17.388490937s" podCreationTimestamp="2026-03-20 07:09:17 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.232073144 +0000 UTC m=+1191.491384295" lastFinishedPulling="2026-03-20 07:09:32.863155694 +0000 UTC m=+1205.122466845" observedRunningTime="2026-03-20 07:09:34.383338541 +0000 UTC m=+1206.642649692" watchObservedRunningTime="2026-03-20 07:09:34.388490937 +0000 UTC m=+1206.647802088" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.406530 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-jqkmw" 
podStartSLOduration=3.8140552850000002 podStartE2EDuration="17.406515374s" podCreationTimestamp="2026-03-20 07:09:17 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.228862306 +0000 UTC m=+1191.488173457" lastFinishedPulling="2026-03-20 07:09:32.821322395 +0000 UTC m=+1205.080633546" observedRunningTime="2026-03-20 07:09:34.405800173 +0000 UTC m=+1206.665111324" watchObservedRunningTime="2026-03-20 07:09:34.406515374 +0000 UTC m=+1206.665826525" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.432319 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-sshvb" podStartSLOduration=3.133036005 podStartE2EDuration="16.432304477s" podCreationTimestamp="2026-03-20 07:09:18 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.561005085 +0000 UTC m=+1191.820316236" lastFinishedPulling="2026-03-20 07:09:32.860273557 +0000 UTC m=+1205.119584708" observedRunningTime="2026-03-20 07:09:34.430431361 +0000 UTC m=+1206.689742512" watchObservedRunningTime="2026-03-20 07:09:34.432304477 +0000 UTC m=+1206.691615628" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.459171 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-5lz5s" podStartSLOduration=3.506368069 podStartE2EDuration="17.459150291s" podCreationTimestamp="2026-03-20 07:09:17 +0000 UTC" firstStartedPulling="2026-03-20 07:09:18.868652437 +0000 UTC m=+1191.127963588" lastFinishedPulling="2026-03-20 07:09:32.821434659 +0000 UTC m=+1205.080745810" observedRunningTime="2026-03-20 07:09:34.452561832 +0000 UTC m=+1206.711872983" watchObservedRunningTime="2026-03-20 07:09:34.459150291 +0000 UTC m=+1206.718461442" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.517527 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-cvwqk" 
podStartSLOduration=4.003489263 podStartE2EDuration="17.517506452s" podCreationTimestamp="2026-03-20 07:09:17 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.348375723 +0000 UTC m=+1191.607686864" lastFinishedPulling="2026-03-20 07:09:32.862392902 +0000 UTC m=+1205.121704053" observedRunningTime="2026-03-20 07:09:34.477624293 +0000 UTC m=+1206.736935444" watchObservedRunningTime="2026-03-20 07:09:34.517506452 +0000 UTC m=+1206.776817603" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.529540 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rdkrz" podStartSLOduration=4.184925729 podStartE2EDuration="17.529522947s" podCreationTimestamp="2026-03-20 07:09:17 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.556671453 +0000 UTC m=+1191.815982604" lastFinishedPulling="2026-03-20 07:09:32.901268671 +0000 UTC m=+1205.160579822" observedRunningTime="2026-03-20 07:09:34.524472344 +0000 UTC m=+1206.783783505" watchObservedRunningTime="2026-03-20 07:09:34.529522947 +0000 UTC m=+1206.788834098" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.589177 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-qwtfr" podStartSLOduration=3.401532662 podStartE2EDuration="16.589159557s" podCreationTimestamp="2026-03-20 07:09:18 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.671549029 +0000 UTC m=+1191.930860180" lastFinishedPulling="2026-03-20 07:09:32.859175924 +0000 UTC m=+1205.118487075" observedRunningTime="2026-03-20 07:09:34.588702353 +0000 UTC m=+1206.848013504" watchObservedRunningTime="2026-03-20 07:09:34.589159557 +0000 UTC m=+1206.848470708" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.589439 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-55f864c847-wz6kw" 
podStartSLOduration=4.2809341530000005 podStartE2EDuration="17.589435945s" podCreationTimestamp="2026-03-20 07:09:17 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.554682923 +0000 UTC m=+1191.813994074" lastFinishedPulling="2026-03-20 07:09:32.863184715 +0000 UTC m=+1205.122495866" observedRunningTime="2026-03-20 07:09:34.54412894 +0000 UTC m=+1206.803440091" watchObservedRunningTime="2026-03-20 07:09:34.589435945 +0000 UTC m=+1206.848747096" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.630914 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xp6jw" podStartSLOduration=3.648632509 podStartE2EDuration="16.630894053s" podCreationTimestamp="2026-03-20 07:09:18 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.839149264 +0000 UTC m=+1192.098460415" lastFinishedPulling="2026-03-20 07:09:32.821410808 +0000 UTC m=+1205.080721959" observedRunningTime="2026-03-20 07:09:34.630418738 +0000 UTC m=+1206.889729889" watchObservedRunningTime="2026-03-20 07:09:34.630894053 +0000 UTC m=+1206.890205204" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.631090 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.631156 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " 
pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:34 crc kubenswrapper[5136]: E0320 07:09:34.631289 5136 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 20 07:09:34 crc kubenswrapper[5136]: E0320 07:09:34.631374 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs podName:9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8 nodeName:}" failed. No retries permitted until 2026-03-20 07:09:50.631355517 +0000 UTC m=+1222.890666668 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-5rlp5" (UID: "9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8") : secret "webhook-server-cert" not found Mar 20 07:09:34 crc kubenswrapper[5136]: E0320 07:09:34.631310 5136 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 20 07:09:34 crc kubenswrapper[5136]: E0320 07:09:34.631434 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs podName:9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8 nodeName:}" failed. No retries permitted until 2026-03-20 07:09:50.631416929 +0000 UTC m=+1222.890728080 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-5rlp5" (UID: "9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8") : secret "metrics-server-cert" not found Mar 20 07:09:37 crc kubenswrapper[5136]: I0320 07:09:37.392330 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp" event={"ID":"67cd41a3-e91f-4d51-b79a-61d697bbf646","Type":"ContainerStarted","Data":"da5cf7017c8f5d726fc27efce7259295460743dcd772f95d8370fb0073663974"} Mar 20 07:09:37 crc kubenswrapper[5136]: I0320 07:09:37.395045 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp" Mar 20 07:09:37 crc kubenswrapper[5136]: I0320 07:09:37.395351 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5784578c99-58pk7" event={"ID":"527edb93-1d3a-45f7-a7c9-f9e28fb6f713","Type":"ContainerStarted","Data":"e6f489cd961c6ab5369412ab0ada6d459daf4ad2149b00b608db522b4f2f6027"} Mar 20 07:09:37 crc kubenswrapper[5136]: I0320 07:09:37.395595 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5784578c99-58pk7" Mar 20 07:09:37 crc kubenswrapper[5136]: I0320 07:09:37.410609 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp" podStartSLOduration=2.35344406 podStartE2EDuration="19.410593768s" podCreationTimestamp="2026-03-20 07:09:18 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.6860866 +0000 UTC m=+1191.945397751" lastFinishedPulling="2026-03-20 07:09:36.743236308 +0000 UTC m=+1209.002547459" observedRunningTime="2026-03-20 07:09:37.40768123 +0000 UTC m=+1209.666992381" 
watchObservedRunningTime="2026-03-20 07:09:37.410593768 +0000 UTC m=+1209.669904919" Mar 20 07:09:37 crc kubenswrapper[5136]: I0320 07:09:37.423175 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5784578c99-58pk7" podStartSLOduration=2.358590547 podStartE2EDuration="19.4231544s" podCreationTimestamp="2026-03-20 07:09:18 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.685293506 +0000 UTC m=+1191.944604657" lastFinishedPulling="2026-03-20 07:09:36.749857359 +0000 UTC m=+1209.009168510" observedRunningTime="2026-03-20 07:09:37.418486648 +0000 UTC m=+1209.677797789" watchObservedRunningTime="2026-03-20 07:09:37.4231544 +0000 UTC m=+1209.682465551" Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.114171 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-5lz5s" Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.138706 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-g62fh" Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.151377 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nzs5m" Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.206026 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-4zc57" Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.249207 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-j7rd5" Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.294571 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-jqkmw" Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.323712 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-cvwqk" Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.369190 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9vwxq" Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.469632 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-55f864c847-wz6kw" Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.519858 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rdkrz" Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.551340 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-sshvb" Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.553761 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-w497x" Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.623722 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-c674c5965-jmsnc" Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.646711 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-qwtfr" Mar 20 07:09:39 crc kubenswrapper[5136]: I0320 07:09:39.071313 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xp6jw" Mar 20 07:09:40 crc kubenswrapper[5136]: I0320 07:09:40.439160 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd" event={"ID":"3dcb58f9-ad42-41ad-af27-2ca462257e77","Type":"ContainerStarted","Data":"0d0fe534f35939034613c97846211d407cd8c5249c5e40e9993476f79ecceb60"} Mar 20 07:09:40 crc kubenswrapper[5136]: I0320 07:09:40.442038 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm" event={"ID":"547cee69-3d64-49aa-8e95-c19be2bb3089","Type":"ContainerStarted","Data":"8cbca177b4dcb874725fd915a9e4e4ec29b7155efc98a2fc4feb5ba129fdf1ca"} Mar 20 07:09:40 crc kubenswrapper[5136]: I0320 07:09:40.442226 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm" Mar 20 07:09:40 crc kubenswrapper[5136]: I0320 07:09:40.443693 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-767865f676-8g592" event={"ID":"2f2fc86c-b42c-4fd9-94e6-817ed073035d","Type":"ContainerStarted","Data":"d472ff1e308210958e334cd25c2c28645dee8cabbd1f241342f9646b495b942f"} Mar 20 07:09:40 crc kubenswrapper[5136]: I0320 07:09:40.443850 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-767865f676-8g592" Mar 20 07:09:40 crc kubenswrapper[5136]: I0320 07:09:40.453590 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd" podStartSLOduration=2.275244848 podStartE2EDuration="22.453569532s" podCreationTimestamp="2026-03-20 07:09:18 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.799684798 +0000 UTC m=+1192.058995949" lastFinishedPulling="2026-03-20 
07:09:39.978009482 +0000 UTC m=+1212.237320633" observedRunningTime="2026-03-20 07:09:40.451782788 +0000 UTC m=+1212.711093949" watchObservedRunningTime="2026-03-20 07:09:40.453569532 +0000 UTC m=+1212.712880683" Mar 20 07:09:40 crc kubenswrapper[5136]: I0320 07:09:40.467137 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm" podStartSLOduration=2.304309569 podStartE2EDuration="22.467110602s" podCreationTimestamp="2026-03-20 07:09:18 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.799398999 +0000 UTC m=+1192.058710150" lastFinishedPulling="2026-03-20 07:09:39.962200032 +0000 UTC m=+1212.221511183" observedRunningTime="2026-03-20 07:09:40.466001919 +0000 UTC m=+1212.725313110" watchObservedRunningTime="2026-03-20 07:09:40.467110602 +0000 UTC m=+1212.726421753" Mar 20 07:09:40 crc kubenswrapper[5136]: I0320 07:09:40.494299 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-767865f676-8g592" podStartSLOduration=3.218241988 podStartE2EDuration="23.494270597s" podCreationTimestamp="2026-03-20 07:09:17 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.686169323 +0000 UTC m=+1191.945480484" lastFinishedPulling="2026-03-20 07:09:39.962197942 +0000 UTC m=+1212.221509093" observedRunningTime="2026-03-20 07:09:40.482283573 +0000 UTC m=+1212.741594744" watchObservedRunningTime="2026-03-20 07:09:40.494270597 +0000 UTC m=+1212.753581748" Mar 20 07:09:48 crc kubenswrapper[5136]: I0320 07:09:48.603345 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp" Mar 20 07:09:48 crc kubenswrapper[5136]: I0320 07:09:48.868358 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5784578c99-58pk7" Mar 20 07:09:48 crc 
kubenswrapper[5136]: I0320 07:09:48.869690 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-767865f676-8g592" Mar 20 07:09:48 crc kubenswrapper[5136]: I0320 07:09:48.986365 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm" Mar 20 07:09:49 crc kubenswrapper[5136]: I0320 07:09:49.979191 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert\") pod \"infra-operator-controller-manager-7b9c774f96-rpqlj\" (UID: \"fad403b0-ff16-4bfe-a0e3-8f0da431260b\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" Mar 20 07:09:49 crc kubenswrapper[5136]: I0320 07:09:49.986119 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert\") pod \"infra-operator-controller-manager-7b9c774f96-rpqlj\" (UID: \"fad403b0-ff16-4bfe-a0e3-8f0da431260b\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.081763 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899556xf\" (UID: \"10cd2a26-beca-4a3b-a791-83cc8cc451ab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.086278 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899556xf\" (UID: 
\"10cd2a26-beca-4a3b-a791-83cc8cc451ab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.139130 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7crl6" Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.148495 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.336989 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj"] Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.364499 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-d2gj7" Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.373439 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.527246 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" event={"ID":"fad403b0-ff16-4bfe-a0e3-8f0da431260b","Type":"ContainerStarted","Data":"c45f57dfd4dbf4cc3c6c85e29d0ca8e3f12c8b75411bfc314fb8af947c6e4f1d"} Mar 20 07:09:50 crc kubenswrapper[5136]: W0320 07:09:50.659936 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10cd2a26_beca_4a3b_a791_83cc8cc451ab.slice/crio-2632c2dcdcd518ca8df75fe73147b96d28212e15f6327025e7ba8eb0aaf38cc0 WatchSource:0}: Error finding container 2632c2dcdcd518ca8df75fe73147b96d28212e15f6327025e7ba8eb0aaf38cc0: Status 404 returned error can't find the container with id 2632c2dcdcd518ca8df75fe73147b96d28212e15f6327025e7ba8eb0aaf38cc0 Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.661845 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf"] Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.688960 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.689094 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: 
\"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.693234 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.693368 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.892330 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-cldn9" Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.901045 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:51 crc kubenswrapper[5136]: I0320 07:09:51.381416 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5"] Mar 20 07:09:51 crc kubenswrapper[5136]: I0320 07:09:51.535257 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" event={"ID":"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8","Type":"ContainerStarted","Data":"456a80b026269f2ff0688c96d61d97f3845ab6551718078d63b92ce2085414c2"} Mar 20 07:09:51 crc kubenswrapper[5136]: I0320 07:09:51.536458 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" event={"ID":"10cd2a26-beca-4a3b-a791-83cc8cc451ab","Type":"ContainerStarted","Data":"2632c2dcdcd518ca8df75fe73147b96d28212e15f6327025e7ba8eb0aaf38cc0"} Mar 20 07:09:56 crc kubenswrapper[5136]: I0320 07:09:56.580487 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" event={"ID":"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8","Type":"ContainerStarted","Data":"a34ff85e4fc0d37e3fa38dc38f2b4e08264225bc05b72ba2ed359276601fc32c"} Mar 20 07:09:56 crc kubenswrapper[5136]: I0320 07:09:56.581200 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:56 crc kubenswrapper[5136]: I0320 07:09:56.615519 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" podStartSLOduration=38.615500197 podStartE2EDuration="38.615500197s" podCreationTimestamp="2026-03-20 07:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:09:56.606244875 +0000 UTC m=+1228.865556026" watchObservedRunningTime="2026-03-20 07:09:56.615500197 +0000 UTC m=+1228.874811338" Mar 20 07:09:58 crc kubenswrapper[5136]: I0320 07:09:58.595118 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" event={"ID":"fad403b0-ff16-4bfe-a0e3-8f0da431260b","Type":"ContainerStarted","Data":"d2777db4006c3f3e7ce92a899ea2151eedfb3124b820af30e500519dbf78c310"} Mar 20 07:09:58 crc kubenswrapper[5136]: I0320 07:09:58.595975 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" Mar 20 07:09:58 crc kubenswrapper[5136]: I0320 07:09:58.596729 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" event={"ID":"10cd2a26-beca-4a3b-a791-83cc8cc451ab","Type":"ContainerStarted","Data":"009dd88a073a3ca0707eb2178f3e0cf30b01ff906278d4101646069783bf33eb"} Mar 20 07:09:58 crc kubenswrapper[5136]: I0320 07:09:58.597186 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:09:58 crc kubenswrapper[5136]: I0320 07:09:58.628130 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" podStartSLOduration=33.903403674 podStartE2EDuration="41.628111436s" podCreationTimestamp="2026-03-20 07:09:17 +0000 UTC" firstStartedPulling="2026-03-20 07:09:50.347988271 +0000 UTC m=+1222.607299412" lastFinishedPulling="2026-03-20 07:09:58.072696013 +0000 UTC m=+1230.332007174" observedRunningTime="2026-03-20 07:09:58.6249683 +0000 UTC m=+1230.884279451" watchObservedRunningTime="2026-03-20 07:09:58.628111436 +0000 UTC 
m=+1230.887422587" Mar 20 07:09:58 crc kubenswrapper[5136]: I0320 07:09:58.660692 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" podStartSLOduration=33.254252351 podStartE2EDuration="40.660673304s" podCreationTimestamp="2026-03-20 07:09:18 +0000 UTC" firstStartedPulling="2026-03-20 07:09:50.662210626 +0000 UTC m=+1222.921521777" lastFinishedPulling="2026-03-20 07:09:58.068631579 +0000 UTC m=+1230.327942730" observedRunningTime="2026-03-20 07:09:58.6585868 +0000 UTC m=+1230.917897951" watchObservedRunningTime="2026-03-20 07:09:58.660673304 +0000 UTC m=+1230.919984455" Mar 20 07:10:00 crc kubenswrapper[5136]: I0320 07:10:00.141210 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566510-bn9cf"] Mar 20 07:10:00 crc kubenswrapper[5136]: I0320 07:10:00.142510 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566510-bn9cf" Mar 20 07:10:00 crc kubenswrapper[5136]: I0320 07:10:00.145152 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:10:00 crc kubenswrapper[5136]: I0320 07:10:00.147105 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:10:00 crc kubenswrapper[5136]: I0320 07:10:00.147442 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:10:00 crc kubenswrapper[5136]: I0320 07:10:00.192188 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566510-bn9cf"] Mar 20 07:10:00 crc kubenswrapper[5136]: I0320 07:10:00.253505 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd4pc\" (UniqueName: 
\"kubernetes.io/projected/17242c2e-8526-49cf-89dd-e35bd97c6626-kube-api-access-fd4pc\") pod \"auto-csr-approver-29566510-bn9cf\" (UID: \"17242c2e-8526-49cf-89dd-e35bd97c6626\") " pod="openshift-infra/auto-csr-approver-29566510-bn9cf" Mar 20 07:10:00 crc kubenswrapper[5136]: I0320 07:10:00.355240 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd4pc\" (UniqueName: \"kubernetes.io/projected/17242c2e-8526-49cf-89dd-e35bd97c6626-kube-api-access-fd4pc\") pod \"auto-csr-approver-29566510-bn9cf\" (UID: \"17242c2e-8526-49cf-89dd-e35bd97c6626\") " pod="openshift-infra/auto-csr-approver-29566510-bn9cf" Mar 20 07:10:00 crc kubenswrapper[5136]: I0320 07:10:00.372363 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd4pc\" (UniqueName: \"kubernetes.io/projected/17242c2e-8526-49cf-89dd-e35bd97c6626-kube-api-access-fd4pc\") pod \"auto-csr-approver-29566510-bn9cf\" (UID: \"17242c2e-8526-49cf-89dd-e35bd97c6626\") " pod="openshift-infra/auto-csr-approver-29566510-bn9cf" Mar 20 07:10:00 crc kubenswrapper[5136]: I0320 07:10:00.512987 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566510-bn9cf" Mar 20 07:10:00 crc kubenswrapper[5136]: I0320 07:10:00.967052 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566510-bn9cf"] Mar 20 07:10:00 crc kubenswrapper[5136]: W0320 07:10:00.972116 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17242c2e_8526_49cf_89dd_e35bd97c6626.slice/crio-d2ac28245784c95b48e9a6b1a0f16405ab5a59fb79434ce0f14bf6dd255fc5b3 WatchSource:0}: Error finding container d2ac28245784c95b48e9a6b1a0f16405ab5a59fb79434ce0f14bf6dd255fc5b3: Status 404 returned error can't find the container with id d2ac28245784c95b48e9a6b1a0f16405ab5a59fb79434ce0f14bf6dd255fc5b3 Mar 20 07:10:01 crc kubenswrapper[5136]: I0320 07:10:01.638405 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566510-bn9cf" event={"ID":"17242c2e-8526-49cf-89dd-e35bd97c6626","Type":"ContainerStarted","Data":"d2ac28245784c95b48e9a6b1a0f16405ab5a59fb79434ce0f14bf6dd255fc5b3"} Mar 20 07:10:03 crc kubenswrapper[5136]: I0320 07:10:03.653963 5136 generic.go:334] "Generic (PLEG): container finished" podID="17242c2e-8526-49cf-89dd-e35bd97c6626" containerID="a922963e448f67de5c7ef7e39ae9a8fe1051c4a0abe704c7b54dc25c09d90caa" exitCode=0 Mar 20 07:10:03 crc kubenswrapper[5136]: I0320 07:10:03.654033 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566510-bn9cf" event={"ID":"17242c2e-8526-49cf-89dd-e35bd97c6626","Type":"ContainerDied","Data":"a922963e448f67de5c7ef7e39ae9a8fe1051c4a0abe704c7b54dc25c09d90caa"} Mar 20 07:10:04 crc kubenswrapper[5136]: I0320 07:10:04.976130 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566510-bn9cf" Mar 20 07:10:05 crc kubenswrapper[5136]: I0320 07:10:05.121953 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd4pc\" (UniqueName: \"kubernetes.io/projected/17242c2e-8526-49cf-89dd-e35bd97c6626-kube-api-access-fd4pc\") pod \"17242c2e-8526-49cf-89dd-e35bd97c6626\" (UID: \"17242c2e-8526-49cf-89dd-e35bd97c6626\") " Mar 20 07:10:05 crc kubenswrapper[5136]: I0320 07:10:05.127525 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17242c2e-8526-49cf-89dd-e35bd97c6626-kube-api-access-fd4pc" (OuterVolumeSpecName: "kube-api-access-fd4pc") pod "17242c2e-8526-49cf-89dd-e35bd97c6626" (UID: "17242c2e-8526-49cf-89dd-e35bd97c6626"). InnerVolumeSpecName "kube-api-access-fd4pc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:10:05 crc kubenswrapper[5136]: I0320 07:10:05.223262 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd4pc\" (UniqueName: \"kubernetes.io/projected/17242c2e-8526-49cf-89dd-e35bd97c6626-kube-api-access-fd4pc\") on node \"crc\" DevicePath \"\"" Mar 20 07:10:05 crc kubenswrapper[5136]: I0320 07:10:05.671714 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566510-bn9cf" event={"ID":"17242c2e-8526-49cf-89dd-e35bd97c6626","Type":"ContainerDied","Data":"d2ac28245784c95b48e9a6b1a0f16405ab5a59fb79434ce0f14bf6dd255fc5b3"} Mar 20 07:10:05 crc kubenswrapper[5136]: I0320 07:10:05.671773 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2ac28245784c95b48e9a6b1a0f16405ab5a59fb79434ce0f14bf6dd255fc5b3" Mar 20 07:10:05 crc kubenswrapper[5136]: I0320 07:10:05.671771 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566510-bn9cf" Mar 20 07:10:06 crc kubenswrapper[5136]: I0320 07:10:06.071214 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566504-fnsrq"] Mar 20 07:10:06 crc kubenswrapper[5136]: I0320 07:10:06.081610 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566504-fnsrq"] Mar 20 07:10:06 crc kubenswrapper[5136]: I0320 07:10:06.410930 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8e1a6ad-3e5f-4a83-b429-d132710b8146" path="/var/lib/kubelet/pods/f8e1a6ad-3e5f-4a83-b429-d132710b8146/volumes" Mar 20 07:10:10 crc kubenswrapper[5136]: I0320 07:10:10.157039 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" Mar 20 07:10:10 crc kubenswrapper[5136]: I0320 07:10:10.380522 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:10:10 crc kubenswrapper[5136]: I0320 07:10:10.906430 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:10:25 crc kubenswrapper[5136]: I0320 07:10:25.927770 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5448ff6dc7-qtnm5"] Mar 20 07:10:25 crc kubenswrapper[5136]: E0320 07:10:25.928447 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17242c2e-8526-49cf-89dd-e35bd97c6626" containerName="oc" Mar 20 07:10:25 crc kubenswrapper[5136]: I0320 07:10:25.928458 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="17242c2e-8526-49cf-89dd-e35bd97c6626" containerName="oc" Mar 20 07:10:25 crc kubenswrapper[5136]: I0320 07:10:25.928636 5136 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="17242c2e-8526-49cf-89dd-e35bd97c6626" containerName="oc" Mar 20 07:10:25 crc kubenswrapper[5136]: I0320 07:10:25.929376 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5448ff6dc7-qtnm5" Mar 20 07:10:25 crc kubenswrapper[5136]: I0320 07:10:25.931276 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 20 07:10:25 crc kubenswrapper[5136]: I0320 07:10:25.931545 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 20 07:10:25 crc kubenswrapper[5136]: I0320 07:10:25.931694 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-fj6zh" Mar 20 07:10:25 crc kubenswrapper[5136]: I0320 07:10:25.931845 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 20 07:10:25 crc kubenswrapper[5136]: I0320 07:10:25.949561 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5448ff6dc7-qtnm5"] Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.038943 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64696987c5-86bqj"] Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.039957 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64696987c5-86bqj" Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.042704 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.047338 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64696987c5-86bqj"] Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.091376 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d4f1a0-0e10-49e6-98bc-43920e03caba-config\") pod \"dnsmasq-dns-5448ff6dc7-qtnm5\" (UID: \"89d4f1a0-0e10-49e6-98bc-43920e03caba\") " pod="openstack/dnsmasq-dns-5448ff6dc7-qtnm5" Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.091471 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67tl9\" (UniqueName: \"kubernetes.io/projected/89d4f1a0-0e10-49e6-98bc-43920e03caba-kube-api-access-67tl9\") pod \"dnsmasq-dns-5448ff6dc7-qtnm5\" (UID: \"89d4f1a0-0e10-49e6-98bc-43920e03caba\") " pod="openstack/dnsmasq-dns-5448ff6dc7-qtnm5" Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.192779 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d4f1a0-0e10-49e6-98bc-43920e03caba-config\") pod \"dnsmasq-dns-5448ff6dc7-qtnm5\" (UID: \"89d4f1a0-0e10-49e6-98bc-43920e03caba\") " pod="openstack/dnsmasq-dns-5448ff6dc7-qtnm5" Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.192943 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rkmb\" (UniqueName: \"kubernetes.io/projected/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-kube-api-access-9rkmb\") pod \"dnsmasq-dns-64696987c5-86bqj\" (UID: \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\") " pod="openstack/dnsmasq-dns-64696987c5-86bqj" 
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.193198 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67tl9\" (UniqueName: \"kubernetes.io/projected/89d4f1a0-0e10-49e6-98bc-43920e03caba-kube-api-access-67tl9\") pod \"dnsmasq-dns-5448ff6dc7-qtnm5\" (UID: \"89d4f1a0-0e10-49e6-98bc-43920e03caba\") " pod="openstack/dnsmasq-dns-5448ff6dc7-qtnm5" Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.193305 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-dns-svc\") pod \"dnsmasq-dns-64696987c5-86bqj\" (UID: \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\") " pod="openstack/dnsmasq-dns-64696987c5-86bqj" Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.193379 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-config\") pod \"dnsmasq-dns-64696987c5-86bqj\" (UID: \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\") " pod="openstack/dnsmasq-dns-64696987c5-86bqj" Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.193773 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d4f1a0-0e10-49e6-98bc-43920e03caba-config\") pod \"dnsmasq-dns-5448ff6dc7-qtnm5\" (UID: \"89d4f1a0-0e10-49e6-98bc-43920e03caba\") " pod="openstack/dnsmasq-dns-5448ff6dc7-qtnm5" Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.219361 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67tl9\" (UniqueName: \"kubernetes.io/projected/89d4f1a0-0e10-49e6-98bc-43920e03caba-kube-api-access-67tl9\") pod \"dnsmasq-dns-5448ff6dc7-qtnm5\" (UID: \"89d4f1a0-0e10-49e6-98bc-43920e03caba\") " pod="openstack/dnsmasq-dns-5448ff6dc7-qtnm5" Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 
07:10:26.252692 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5448ff6dc7-qtnm5" Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.294792 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-dns-svc\") pod \"dnsmasq-dns-64696987c5-86bqj\" (UID: \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\") " pod="openstack/dnsmasq-dns-64696987c5-86bqj" Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.294851 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-config\") pod \"dnsmasq-dns-64696987c5-86bqj\" (UID: \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\") " pod="openstack/dnsmasq-dns-64696987c5-86bqj" Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.294895 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rkmb\" (UniqueName: \"kubernetes.io/projected/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-kube-api-access-9rkmb\") pod \"dnsmasq-dns-64696987c5-86bqj\" (UID: \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\") " pod="openstack/dnsmasq-dns-64696987c5-86bqj" Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.296073 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-config\") pod \"dnsmasq-dns-64696987c5-86bqj\" (UID: \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\") " pod="openstack/dnsmasq-dns-64696987c5-86bqj" Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.296792 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-dns-svc\") pod \"dnsmasq-dns-64696987c5-86bqj\" (UID: \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\") " 
pod="openstack/dnsmasq-dns-64696987c5-86bqj" Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.312484 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rkmb\" (UniqueName: \"kubernetes.io/projected/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-kube-api-access-9rkmb\") pod \"dnsmasq-dns-64696987c5-86bqj\" (UID: \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\") " pod="openstack/dnsmasq-dns-64696987c5-86bqj" Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.357775 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64696987c5-86bqj" Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.735220 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5448ff6dc7-qtnm5"] Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.745016 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.797682 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64696987c5-86bqj"] Mar 20 07:10:26 crc kubenswrapper[5136]: W0320 07:10:26.805299 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1dd8ad22_4946_4d2c_b2cb_a38f42166c88.slice/crio-0124fdc9a4d5b014a0d0715276e212228f8d98c3844a190558ceec06845de8bc WatchSource:0}: Error finding container 0124fdc9a4d5b014a0d0715276e212228f8d98c3844a190558ceec06845de8bc: Status 404 returned error can't find the container with id 0124fdc9a4d5b014a0d0715276e212228f8d98c3844a190558ceec06845de8bc Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.857220 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64696987c5-86bqj" event={"ID":"1dd8ad22-4946-4d2c-b2cb-a38f42166c88","Type":"ContainerStarted","Data":"0124fdc9a4d5b014a0d0715276e212228f8d98c3844a190558ceec06845de8bc"} Mar 20 07:10:26 crc 
kubenswrapper[5136]: I0320 07:10:26.858444 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5448ff6dc7-qtnm5" event={"ID":"89d4f1a0-0e10-49e6-98bc-43920e03caba","Type":"ContainerStarted","Data":"b1739b4168d54cbd082e621d7a6b119af2a011d7e88735613ee365c8955655aa"} Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.227738 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5448ff6dc7-qtnm5"] Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.241149 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-854f47b4f9-rcppb"] Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.242147 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.248988 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-854f47b4f9-rcppb"] Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.421970 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90e44514-0ddc-4151-ad00-cf458d5adf9e-dns-svc\") pod \"dnsmasq-dns-854f47b4f9-rcppb\" (UID: \"90e44514-0ddc-4151-ad00-cf458d5adf9e\") " pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.422324 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90e44514-0ddc-4151-ad00-cf458d5adf9e-config\") pod \"dnsmasq-dns-854f47b4f9-rcppb\" (UID: \"90e44514-0ddc-4151-ad00-cf458d5adf9e\") " pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.422369 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln7fj\" (UniqueName: 
\"kubernetes.io/projected/90e44514-0ddc-4151-ad00-cf458d5adf9e-kube-api-access-ln7fj\") pod \"dnsmasq-dns-854f47b4f9-rcppb\" (UID: \"90e44514-0ddc-4151-ad00-cf458d5adf9e\") " pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.523427 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90e44514-0ddc-4151-ad00-cf458d5adf9e-dns-svc\") pod \"dnsmasq-dns-854f47b4f9-rcppb\" (UID: \"90e44514-0ddc-4151-ad00-cf458d5adf9e\") " pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.523471 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90e44514-0ddc-4151-ad00-cf458d5adf9e-config\") pod \"dnsmasq-dns-854f47b4f9-rcppb\" (UID: \"90e44514-0ddc-4151-ad00-cf458d5adf9e\") " pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.523501 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln7fj\" (UniqueName: \"kubernetes.io/projected/90e44514-0ddc-4151-ad00-cf458d5adf9e-kube-api-access-ln7fj\") pod \"dnsmasq-dns-854f47b4f9-rcppb\" (UID: \"90e44514-0ddc-4151-ad00-cf458d5adf9e\") " pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.524582 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90e44514-0ddc-4151-ad00-cf458d5adf9e-dns-svc\") pod \"dnsmasq-dns-854f47b4f9-rcppb\" (UID: \"90e44514-0ddc-4151-ad00-cf458d5adf9e\") " pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.525149 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90e44514-0ddc-4151-ad00-cf458d5adf9e-config\") pod 
\"dnsmasq-dns-854f47b4f9-rcppb\" (UID: \"90e44514-0ddc-4151-ad00-cf458d5adf9e\") " pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.542022 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln7fj\" (UniqueName: \"kubernetes.io/projected/90e44514-0ddc-4151-ad00-cf458d5adf9e-kube-api-access-ln7fj\") pod \"dnsmasq-dns-854f47b4f9-rcppb\" (UID: \"90e44514-0ddc-4151-ad00-cf458d5adf9e\") " pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.565704 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.845073 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64696987c5-86bqj"] Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.861370 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54b5dffb47-zd72x"] Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.862462 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.886754 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54b5dffb47-zd72x"] Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.033456 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/571c2781-59c0-4345-9a04-09a51ceabc0d-config\") pod \"dnsmasq-dns-54b5dffb47-zd72x\" (UID: \"571c2781-59c0-4345-9a04-09a51ceabc0d\") " pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.033526 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw97g\" (UniqueName: \"kubernetes.io/projected/571c2781-59c0-4345-9a04-09a51ceabc0d-kube-api-access-sw97g\") pod \"dnsmasq-dns-54b5dffb47-zd72x\" (UID: \"571c2781-59c0-4345-9a04-09a51ceabc0d\") " pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.033576 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/571c2781-59c0-4345-9a04-09a51ceabc0d-dns-svc\") pod \"dnsmasq-dns-54b5dffb47-zd72x\" (UID: \"571c2781-59c0-4345-9a04-09a51ceabc0d\") " pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.102176 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-854f47b4f9-rcppb"] Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.136649 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/571c2781-59c0-4345-9a04-09a51ceabc0d-config\") pod \"dnsmasq-dns-54b5dffb47-zd72x\" (UID: \"571c2781-59c0-4345-9a04-09a51ceabc0d\") " pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" Mar 20 07:10:29 crc 
kubenswrapper[5136]: I0320 07:10:29.136711 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw97g\" (UniqueName: \"kubernetes.io/projected/571c2781-59c0-4345-9a04-09a51ceabc0d-kube-api-access-sw97g\") pod \"dnsmasq-dns-54b5dffb47-zd72x\" (UID: \"571c2781-59c0-4345-9a04-09a51ceabc0d\") " pod="openstack/dnsmasq-dns-54b5dffb47-zd72x"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.136779 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/571c2781-59c0-4345-9a04-09a51ceabc0d-dns-svc\") pod \"dnsmasq-dns-54b5dffb47-zd72x\" (UID: \"571c2781-59c0-4345-9a04-09a51ceabc0d\") " pod="openstack/dnsmasq-dns-54b5dffb47-zd72x"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.137665 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/571c2781-59c0-4345-9a04-09a51ceabc0d-dns-svc\") pod \"dnsmasq-dns-54b5dffb47-zd72x\" (UID: \"571c2781-59c0-4345-9a04-09a51ceabc0d\") " pod="openstack/dnsmasq-dns-54b5dffb47-zd72x"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.138207 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/571c2781-59c0-4345-9a04-09a51ceabc0d-config\") pod \"dnsmasq-dns-54b5dffb47-zd72x\" (UID: \"571c2781-59c0-4345-9a04-09a51ceabc0d\") " pod="openstack/dnsmasq-dns-54b5dffb47-zd72x"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.158171 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw97g\" (UniqueName: \"kubernetes.io/projected/571c2781-59c0-4345-9a04-09a51ceabc0d-kube-api-access-sw97g\") pod \"dnsmasq-dns-54b5dffb47-zd72x\" (UID: \"571c2781-59c0-4345-9a04-09a51ceabc0d\") " pod="openstack/dnsmasq-dns-54b5dffb47-zd72x"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.189939 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b5dffb47-zd72x"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.334627 5136 scope.go:117] "RemoveContainer" containerID="a9c6142c6c3be406a353a6109a8cb8b7b38a7799c67785c8207003ce9a223a42"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.406805 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.412123 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.414193 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.414377 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.414503 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-x9v8f"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.414526 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.416516 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.416736 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.416761 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.439572 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.460050 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54b5dffb47-zd72x"]
Mar 20 07:10:29 crc kubenswrapper[5136]: W0320 07:10:29.472589 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod571c2781_59c0_4345_9a04_09a51ceabc0d.slice/crio-603105be24fa6a6cb4daf90aa8e7faaf32594389e035752cfb2e9cf8a54926bd WatchSource:0}: Error finding container 603105be24fa6a6cb4daf90aa8e7faaf32594389e035752cfb2e9cf8a54926bd: Status 404 returned error can't find the container with id 603105be24fa6a6cb4daf90aa8e7faaf32594389e035752cfb2e9cf8a54926bd
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.542593 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-config-data\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.542642 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.542687 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.542710 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.542730 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/261514f8-7734-423d-b15a-e83fdc2a85fd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.542766 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.542844 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.542875 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p49dt\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-kube-api-access-p49dt\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.542908 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.542945 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/261514f8-7734-423d-b15a-e83fdc2a85fd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.542999 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.644476 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.644550 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.644583 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p49dt\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-kube-api-access-p49dt\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.644606 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.644651 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/261514f8-7734-423d-b15a-e83fdc2a85fd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.644683 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.644725 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-config-data\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.644747 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.644775 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.644798 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.644839 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/261514f8-7734-423d-b15a-e83fdc2a85fd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.645010 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.646012 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.646360 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.646667 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.646690 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-config-data\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.647943 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.655705 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/261514f8-7734-423d-b15a-e83fdc2a85fd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.667325 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/261514f8-7734-423d-b15a-e83fdc2a85fd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.667862 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.668509 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.671858 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.676233 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p49dt\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-kube-api-access-p49dt\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.736420 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.881164 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" event={"ID":"571c2781-59c0-4345-9a04-09a51ceabc0d","Type":"ContainerStarted","Data":"603105be24fa6a6cb4daf90aa8e7faaf32594389e035752cfb2e9cf8a54926bd"}
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.883077 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" event={"ID":"90e44514-0ddc-4151-ad00-cf458d5adf9e","Type":"ContainerStarted","Data":"a835263469ad6ded88770537144f67af862b9ca0e4d044183f6bbb2b8ff9cb68"}
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.991794 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.993149 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.995371 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.995415 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.995452 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.995571 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-88lhx"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.995718 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.999697 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.999871 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.018378 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.165411 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c355061d-c5fd-4655-aa7e-37b5a40a0400-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.165457 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.165482 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.165594 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skh55\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-kube-api-access-skh55\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.165697 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c355061d-c5fd-4655-aa7e-37b5a40a0400-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.165722 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.166051 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.166145 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.166175 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.166207 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.166237 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.204150 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 20 07:10:30 crc kubenswrapper[5136]: W0320 07:10:30.217538 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod261514f8_7734_423d_b15a_e83fdc2a85fd.slice/crio-3dd70fd22c8a29190bae59972f90dd4530137cb93e7e5a8ebd5a576dd4e2a33b WatchSource:0}: Error finding container 3dd70fd22c8a29190bae59972f90dd4530137cb93e7e5a8ebd5a576dd4e2a33b: Status 404 returned error can't find the container with id 3dd70fd22c8a29190bae59972f90dd4530137cb93e7e5a8ebd5a576dd4e2a33b
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.268229 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.268428 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.268520 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c355061d-c5fd-4655-aa7e-37b5a40a0400-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.268540 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.268572 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.268591 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skh55\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-kube-api-access-skh55\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.268639 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c355061d-c5fd-4655-aa7e-37b5a40a0400-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.268662 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.268722 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.268743 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.268762 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.269148 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.269480 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.269667 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.270132 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.270718 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.271849 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.276543 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c355061d-c5fd-4655-aa7e-37b5a40a0400-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.278897 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c355061d-c5fd-4655-aa7e-37b5a40a0400-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.289786 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.292972 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.293681 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skh55\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-kube-api-access-skh55\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.304708 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.317388 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.893360 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"261514f8-7734-423d-b15a-e83fdc2a85fd","Type":"ContainerStarted","Data":"3dd70fd22c8a29190bae59972f90dd4530137cb93e7e5a8ebd5a576dd4e2a33b"}
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.494613 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.495789 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.497726 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.497875 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-7hd6r"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.500442 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.500725 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.506034 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.506246 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.593943 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/23c10323-3c49-4f00-8bf7-319e6f5834d0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.594005 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.594104 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-config-data-default\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.594144 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.594171 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-kolla-config\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.594236 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/23c10323-3c49-4f00-8bf7-319e6f5834d0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.594261 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23c10323-3c49-4f00-8bf7-319e6f5834d0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.594293 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmgbx\" (UniqueName: \"kubernetes.io/projected/23c10323-3c49-4f00-8bf7-319e6f5834d0-kube-api-access-kmgbx\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.695638 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-config-data-default\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.695728 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.696008 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume
\"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.696710 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-config-data-default\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0" Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.698089 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-kolla-config\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0" Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.698267 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/23c10323-3c49-4f00-8bf7-319e6f5834d0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0" Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.698304 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23c10323-3c49-4f00-8bf7-319e6f5834d0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0" Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.698348 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmgbx\" (UniqueName: 
\"kubernetes.io/projected/23c10323-3c49-4f00-8bf7-319e6f5834d0-kube-api-access-kmgbx\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0" Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.698381 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/23c10323-3c49-4f00-8bf7-319e6f5834d0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0" Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.698421 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0" Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.698553 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-kolla-config\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0" Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.698691 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/23c10323-3c49-4f00-8bf7-319e6f5834d0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0" Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.700323 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-operator-scripts\") pod \"openstack-galera-0\" (UID: 
\"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0" Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.710551 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23c10323-3c49-4f00-8bf7-319e6f5834d0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0" Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.715533 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/23c10323-3c49-4f00-8bf7-319e6f5834d0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0" Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.717665 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmgbx\" (UniqueName: \"kubernetes.io/projected/23c10323-3c49-4f00-8bf7-319e6f5834d0-kube-api-access-kmgbx\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0" Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.722351 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0" Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.814632 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 20 07:10:32 crc kubenswrapper[5136]: I0320 07:10:32.962781 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 20 07:10:32 crc kubenswrapper[5136]: I0320 07:10:32.965216 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:32 crc kubenswrapper[5136]: I0320 07:10:32.970233 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 20 07:10:32 crc kubenswrapper[5136]: I0320 07:10:32.970749 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 20 07:10:32 crc kubenswrapper[5136]: I0320 07:10:32.971060 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-t6q78" Mar 20 07:10:32 crc kubenswrapper[5136]: I0320 07:10:32.971147 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 20 07:10:32 crc kubenswrapper[5136]: I0320 07:10:32.979113 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.126892 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.126956 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.126982 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pv5f\" (UniqueName: 
\"kubernetes.io/projected/210df7e5-1603-40ec-bfa4-7b85525823b3-kube-api-access-8pv5f\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.127152 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/210df7e5-1603-40ec-bfa4-7b85525823b3-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.127230 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.127282 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/210df7e5-1603-40ec-bfa4-7b85525823b3-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.127322 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/210df7e5-1603-40ec-bfa4-7b85525823b3-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.127371 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" 
(UniqueName: \"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.228429 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/210df7e5-1603-40ec-bfa4-7b85525823b3-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.228502 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.228536 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/210df7e5-1603-40ec-bfa4-7b85525823b3-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.228568 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/210df7e5-1603-40ec-bfa4-7b85525823b3-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.228597 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.228633 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.228656 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.228679 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pv5f\" (UniqueName: \"kubernetes.io/projected/210df7e5-1603-40ec-bfa4-7b85525823b3-kube-api-access-8pv5f\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.229647 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/210df7e5-1603-40ec-bfa4-7b85525823b3-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.230597 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.239728 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.246977 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.249524 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.249553 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/210df7e5-1603-40ec-bfa4-7b85525823b3-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.251353 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pv5f\" (UniqueName: \"kubernetes.io/projected/210df7e5-1603-40ec-bfa4-7b85525823b3-kube-api-access-8pv5f\") pod 
\"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.255537 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/210df7e5-1603-40ec-bfa4-7b85525823b3-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.275315 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.291227 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.346106 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.347134 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.356584 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-96ds2" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.356836 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.357346 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.367705 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.435539 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/960739f0-c4a5-49c6-8e2a-9452815cf1a9-config-data\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.435794 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmhrz\" (UniqueName: \"kubernetes.io/projected/960739f0-c4a5-49c6-8e2a-9452815cf1a9-kube-api-access-lmhrz\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.436032 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960739f0-c4a5-49c6-8e2a-9452815cf1a9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.436090 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/960739f0-c4a5-49c6-8e2a-9452815cf1a9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.436125 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/960739f0-c4a5-49c6-8e2a-9452815cf1a9-kolla-config\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.537413 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960739f0-c4a5-49c6-8e2a-9452815cf1a9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.537478 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/960739f0-c4a5-49c6-8e2a-9452815cf1a9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.537501 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/960739f0-c4a5-49c6-8e2a-9452815cf1a9-kolla-config\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.537561 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/960739f0-c4a5-49c6-8e2a-9452815cf1a9-config-data\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " 
pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.537611 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmhrz\" (UniqueName: \"kubernetes.io/projected/960739f0-c4a5-49c6-8e2a-9452815cf1a9-kube-api-access-lmhrz\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.538978 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/960739f0-c4a5-49c6-8e2a-9452815cf1a9-config-data\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.539334 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/960739f0-c4a5-49c6-8e2a-9452815cf1a9-kolla-config\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.542379 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/960739f0-c4a5-49c6-8e2a-9452815cf1a9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.543627 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960739f0-c4a5-49c6-8e2a-9452815cf1a9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.555325 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmhrz\" (UniqueName: 
\"kubernetes.io/projected/960739f0-c4a5-49c6-8e2a-9452815cf1a9-kube-api-access-lmhrz\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.684310 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 20 07:10:35 crc kubenswrapper[5136]: I0320 07:10:35.318491 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 07:10:35 crc kubenswrapper[5136]: I0320 07:10:35.319832 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 07:10:35 crc kubenswrapper[5136]: I0320 07:10:35.322412 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-r4xg6" Mar 20 07:10:35 crc kubenswrapper[5136]: I0320 07:10:35.390007 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 07:10:35 crc kubenswrapper[5136]: I0320 07:10:35.468965 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-478m9\" (UniqueName: \"kubernetes.io/projected/cf624d46-ce35-4e7f-b463-4b0eba006ded-kube-api-access-478m9\") pod \"kube-state-metrics-0\" (UID: \"cf624d46-ce35-4e7f-b463-4b0eba006ded\") " pod="openstack/kube-state-metrics-0" Mar 20 07:10:35 crc kubenswrapper[5136]: I0320 07:10:35.570709 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-478m9\" (UniqueName: \"kubernetes.io/projected/cf624d46-ce35-4e7f-b463-4b0eba006ded-kube-api-access-478m9\") pod \"kube-state-metrics-0\" (UID: \"cf624d46-ce35-4e7f-b463-4b0eba006ded\") " pod="openstack/kube-state-metrics-0" Mar 20 07:10:35 crc kubenswrapper[5136]: I0320 07:10:35.592653 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-478m9\" (UniqueName: 
\"kubernetes.io/projected/cf624d46-ce35-4e7f-b463-4b0eba006ded-kube-api-access-478m9\") pod \"kube-state-metrics-0\" (UID: \"cf624d46-ce35-4e7f-b463-4b0eba006ded\") " pod="openstack/kube-state-metrics-0" Mar 20 07:10:35 crc kubenswrapper[5136]: I0320 07:10:35.640853 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 07:10:37 crc kubenswrapper[5136]: I0320 07:10:37.490078 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.346054 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.347735 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.351386 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.351551 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-txsj2" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.352079 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.352358 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.352711 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.358528 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.439415 5136 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ovn-controller-gnwt6"] Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.440619 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.440783 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b1461d1-f963-40b0-8cad-a5b2735eedcc-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.440906 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.440945 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1461d1-f963-40b0-8cad-a5b2735eedcc-config\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.440965 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.441015 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.441043 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8b1461d1-f963-40b0-8cad-a5b2735eedcc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.441067 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjbm7\" (UniqueName: \"kubernetes.io/projected/8b1461d1-f963-40b0-8cad-a5b2735eedcc-kube-api-access-pjbm7\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.441113 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.443417 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.443591 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.443724 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-7cjkk" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.448215 5136 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ovn-controller-ovs-ldp4w"] Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.458761 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.460534 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gnwt6"] Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.466962 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-ldp4w"] Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.543477 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.543542 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqmjf\" (UniqueName: \"kubernetes.io/projected/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-kube-api-access-rqmjf\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.543577 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-log-ovn\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.543618 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-run\") pod 
\"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.543650 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjbm7\" (UniqueName: \"kubernetes.io/projected/8b1461d1-f963-40b0-8cad-a5b2735eedcc-kube-api-access-pjbm7\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.543700 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.543735 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-run-ovn\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.543787 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04ee32c0-35eb-488d-b166-0ad8a8d09f48-combined-ca-bundle\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.544421 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " 
pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.544464 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1461d1-f963-40b0-8cad-a5b2735eedcc-config\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.544491 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04ee32c0-35eb-488d-b166-0ad8a8d09f48-scripts\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.544516 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/04ee32c0-35eb-488d-b166-0ad8a8d09f48-ovn-controller-tls-certs\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.544563 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhfl7\" (UniqueName: \"kubernetes.io/projected/04ee32c0-35eb-488d-b166-0ad8a8d09f48-kube-api-access-xhfl7\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.544590 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 
07:10:39.544611 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8b1461d1-f963-40b0-8cad-a5b2735eedcc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.544638 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-log\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.544663 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-etc-ovs\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.544686 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-lib\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.545126 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.544712 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-scripts\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.545341 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b1461d1-f963-40b0-8cad-a5b2735eedcc-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.545377 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-run\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.545392 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8b1461d1-f963-40b0-8cad-a5b2735eedcc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.552003 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.552204 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1461d1-f963-40b0-8cad-a5b2735eedcc-config\") pod \"ovsdbserver-nb-0\" (UID: 
\"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.565063 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjbm7\" (UniqueName: \"kubernetes.io/projected/8b1461d1-f963-40b0-8cad-a5b2735eedcc-kube-api-access-pjbm7\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.565329 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.566461 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b1461d1-f963-40b0-8cad-a5b2735eedcc-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.570176 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.571609 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.646557 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-log\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.646623 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-etc-ovs\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.646642 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-lib\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.646682 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-scripts\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.646709 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-run\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.646744 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqmjf\" (UniqueName: \"kubernetes.io/projected/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-kube-api-access-rqmjf\") pod \"ovn-controller-ovs-ldp4w\" (UID: 
\"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.646767 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-log-ovn\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.646786 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-run\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.646834 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-run-ovn\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.646867 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04ee32c0-35eb-488d-b166-0ad8a8d09f48-combined-ca-bundle\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.646890 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04ee32c0-35eb-488d-b166-0ad8a8d09f48-scripts\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.646917 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/04ee32c0-35eb-488d-b166-0ad8a8d09f48-ovn-controller-tls-certs\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.646937 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhfl7\" (UniqueName: \"kubernetes.io/projected/04ee32c0-35eb-488d-b166-0ad8a8d09f48-kube-api-access-xhfl7\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.647275 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-run\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.647329 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-log\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.647490 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-run-ovn\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.647618 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-etc-ovs\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.647988 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-run\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.648073 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-log-ovn\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.650377 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04ee32c0-35eb-488d-b166-0ad8a8d09f48-scripts\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.652640 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-scripts\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.652796 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-lib\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc 
kubenswrapper[5136]: I0320 07:10:39.653074 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/04ee32c0-35eb-488d-b166-0ad8a8d09f48-ovn-controller-tls-certs\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.653098 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04ee32c0-35eb-488d-b166-0ad8a8d09f48-combined-ca-bundle\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.664019 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqmjf\" (UniqueName: \"kubernetes.io/projected/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-kube-api-access-rqmjf\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.664952 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhfl7\" (UniqueName: \"kubernetes.io/projected/04ee32c0-35eb-488d-b166-0ad8a8d09f48-kube-api-access-xhfl7\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.687829 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.759078 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.777198 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.655347 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.659771 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.662810 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.662989 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.663141 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-7jqrk" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.664335 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.666191 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.785416 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.785481 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " 
pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.785617 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.785758 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.785793 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd6zc\" (UniqueName: \"kubernetes.io/projected/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-kube-api-access-hd6zc\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.785845 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.785891 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 
07:10:41.785961 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-config\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.888034 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.888140 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.888271 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.888354 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.888382 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd6zc\" (UniqueName: 
\"kubernetes.io/projected/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-kube-api-access-hd6zc\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.888410 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.888445 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.888496 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-config\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.888777 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.889335 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-config\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 
07:10:41.889497 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.890034 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.896715 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.896733 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.902441 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.902945 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd6zc\" (UniqueName: 
\"kubernetes.io/projected/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-kube-api-access-hd6zc\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.913694 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.980238 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 20 07:10:46 crc kubenswrapper[5136]: W0320 07:10:46.048415 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod960739f0_c4a5_49c6_8e2a_9452815cf1a9.slice/crio-c417f48a18610bbcb3a324c0dd0cc757f54ca7629176ea2441d7c501e41142ce WatchSource:0}: Error finding container c417f48a18610bbcb3a324c0dd0cc757f54ca7629176ea2441d7c501e41142ce: Status 404 returned error can't find the container with id c417f48a18610bbcb3a324c0dd0cc757f54ca7629176ea2441d7c501e41142ce Mar 20 07:10:46 crc kubenswrapper[5136]: I0320 07:10:46.519499 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 07:10:47 crc kubenswrapper[5136]: I0320 07:10:47.027441 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c355061d-c5fd-4655-aa7e-37b5a40a0400","Type":"ContainerStarted","Data":"22b2668fe332b62f7864af2d759b5866cf033333320267d52cb7cec04a426bd9"} Mar 20 07:10:47 crc kubenswrapper[5136]: I0320 07:10:47.029840 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"960739f0-c4a5-49c6-8e2a-9452815cf1a9","Type":"ContainerStarted","Data":"c417f48a18610bbcb3a324c0dd0cc757f54ca7629176ea2441d7c501e41142ce"} Mar 20 07:10:47 crc kubenswrapper[5136]: I0320 07:10:47.411445 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 20 07:10:47 crc kubenswrapper[5136]: W0320 07:10:47.418456 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23c10323_3c49_4f00_8bf7_319e6f5834d0.slice/crio-472fbef87977bdfc11603315d743a17729300016a4f32222d159ed871e8ca38d WatchSource:0}: Error finding container 472fbef87977bdfc11603315d743a17729300016a4f32222d159ed871e8ca38d: Status 404 returned error can't find the container with id 472fbef87977bdfc11603315d743a17729300016a4f32222d159ed871e8ca38d Mar 20 07:10:47 crc kubenswrapper[5136]: I0320 07:10:47.654272 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 07:10:47 crc kubenswrapper[5136]: W0320 07:10:47.657077 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf624d46_ce35_4e7f_b463_4b0eba006ded.slice/crio-344f02ef25b9534aca8bfe5e6329564a1246ae9de2437ea4000edf51f797f27a WatchSource:0}: Error finding container 344f02ef25b9534aca8bfe5e6329564a1246ae9de2437ea4000edf51f797f27a: Status 404 returned error can't find the container with id 344f02ef25b9534aca8bfe5e6329564a1246ae9de2437ea4000edf51f797f27a Mar 20 07:10:47 crc kubenswrapper[5136]: I0320 07:10:47.727659 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gnwt6"] Mar 20 07:10:47 crc kubenswrapper[5136]: I0320 07:10:47.736510 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 20 07:10:47 crc kubenswrapper[5136]: E0320 07:10:47.759303 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = 
Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" Mar 20 07:10:47 crc kubenswrapper[5136]: E0320 07:10:47.759485 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sw97g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOpt
ions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-54b5dffb47-zd72x_openstack(571c2781-59c0-4345-9a04-09a51ceabc0d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 20 07:10:47 crc kubenswrapper[5136]: E0320 07:10:47.760645 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" podUID="571c2781-59c0-4345-9a04-09a51ceabc0d" Mar 20 07:10:47 crc kubenswrapper[5136]: I0320 07:10:47.959116 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 20 07:10:47 crc kubenswrapper[5136]: W0320 07:10:47.967618 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf872c575_a357_4b29_b5e8_cf5dbe6f3d7a.slice/crio-58fe8e25256a499fb2de621906997ec654e0364d2ff5f6192f81208051ec80d6 WatchSource:0}: Error finding container 58fe8e25256a499fb2de621906997ec654e0364d2ff5f6192f81208051ec80d6: Status 404 returned error can't find the container with id 58fe8e25256a499fb2de621906997ec654e0364d2ff5f6192f81208051ec80d6 Mar 20 07:10:48 crc kubenswrapper[5136]: I0320 07:10:48.038132 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"cf624d46-ce35-4e7f-b463-4b0eba006ded","Type":"ContainerStarted","Data":"344f02ef25b9534aca8bfe5e6329564a1246ae9de2437ea4000edf51f797f27a"} Mar 20 07:10:48 crc kubenswrapper[5136]: I0320 07:10:48.039395 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"210df7e5-1603-40ec-bfa4-7b85525823b3","Type":"ContainerStarted","Data":"8182f12d4de26ad384abd8e2a3a9007acaaad7cd8b7e832cca1481d0c6ef89ef"} Mar 20 07:10:48 crc kubenswrapper[5136]: I0320 07:10:48.040116 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gnwt6" event={"ID":"04ee32c0-35eb-488d-b166-0ad8a8d09f48","Type":"ContainerStarted","Data":"969e50d91cdce234e3ebd25af89de94a9345b9463c4d70197f2dbbaa911c914f"} Mar 20 07:10:48 crc kubenswrapper[5136]: I0320 07:10:48.041002 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a","Type":"ContainerStarted","Data":"58fe8e25256a499fb2de621906997ec654e0364d2ff5f6192f81208051ec80d6"} Mar 20 07:10:48 crc kubenswrapper[5136]: I0320 07:10:48.042780 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"23c10323-3c49-4f00-8bf7-319e6f5834d0","Type":"ContainerStarted","Data":"472fbef87977bdfc11603315d743a17729300016a4f32222d159ed871e8ca38d"} Mar 20 07:10:48 crc kubenswrapper[5136]: E0320 07:10:48.043484 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51\\\"\"" pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" podUID="571c2781-59c0-4345-9a04-09a51ceabc0d" Mar 20 07:10:48 crc kubenswrapper[5136]: I0320 07:10:48.435794 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 20 07:10:48 crc 
kubenswrapper[5136]: I0320 07:10:48.914551 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-ldp4w"] Mar 20 07:10:49 crc kubenswrapper[5136]: I0320 07:10:49.054210 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8b1461d1-f963-40b0-8cad-a5b2735eedcc","Type":"ContainerStarted","Data":"11e0a5791b54dfc64b5c868dfb4c7110fa55e59d3ea215d5dd89246b1feeb323"} Mar 20 07:10:49 crc kubenswrapper[5136]: I0320 07:10:49.055371 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ldp4w" event={"ID":"5f48f721-42c9-4f2b-a461-2ad47a1dea3d","Type":"ContainerStarted","Data":"ecef44b4bd97cd40f7c1c2de9472cdb09460ec1aa1b9eb32b1b7e366da3578d0"} Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.113515 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-vr74x"] Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.116729 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-vr74x" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.118806 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.131038 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vr74x"] Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.249108 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnhlv\" (UniqueName: \"kubernetes.io/projected/0ede60bf-5bc5-4267-9849-9389df070048-kube-api-access-gnhlv\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.249163 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0ede60bf-5bc5-4267-9849-9389df070048-ovn-rundir\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.249205 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0ede60bf-5bc5-4267-9849-9389df070048-ovs-rundir\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.249228 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ede60bf-5bc5-4267-9849-9389df070048-config\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " 
pod="openstack/ovn-controller-metrics-vr74x" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.249264 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ede60bf-5bc5-4267-9849-9389df070048-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.249286 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ede60bf-5bc5-4267-9849-9389df070048-combined-ca-bundle\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.265869 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b5dffb47-zd72x"] Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.305638 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84d7bcdf99-wljsb"] Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.308418 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.311964 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.324678 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84d7bcdf99-wljsb"] Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.369023 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ede60bf-5bc5-4267-9849-9389df070048-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.369091 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ede60bf-5bc5-4267-9849-9389df070048-combined-ca-bundle\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.369191 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnhlv\" (UniqueName: \"kubernetes.io/projected/0ede60bf-5bc5-4267-9849-9389df070048-kube-api-access-gnhlv\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.369231 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0ede60bf-5bc5-4267-9849-9389df070048-ovn-rundir\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x" Mar 20 07:10:51 crc 
kubenswrapper[5136]: I0320 07:10:51.369268 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0ede60bf-5bc5-4267-9849-9389df070048-ovs-rundir\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.369303 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ede60bf-5bc5-4267-9849-9389df070048-config\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.370500 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0ede60bf-5bc5-4267-9849-9389df070048-ovn-rundir\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.370573 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0ede60bf-5bc5-4267-9849-9389df070048-ovs-rundir\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.370864 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ede60bf-5bc5-4267-9849-9389df070048-config\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.376561 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ede60bf-5bc5-4267-9849-9389df070048-combined-ca-bundle\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.388698 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnhlv\" (UniqueName: \"kubernetes.io/projected/0ede60bf-5bc5-4267-9849-9389df070048-kube-api-access-gnhlv\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.405256 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ede60bf-5bc5-4267-9849-9389df070048-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.455582 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-vr74x" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.470935 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-dns-svc\") pod \"dnsmasq-dns-84d7bcdf99-wljsb\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.471277 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th8ss\" (UniqueName: \"kubernetes.io/projected/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-kube-api-access-th8ss\") pod \"dnsmasq-dns-84d7bcdf99-wljsb\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.471376 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-ovsdbserver-nb\") pod \"dnsmasq-dns-84d7bcdf99-wljsb\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.471452 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-config\") pod \"dnsmasq-dns-84d7bcdf99-wljsb\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.545500 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-854f47b4f9-rcppb"] Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.560448 5136 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-f697c8bff-xcsxq"] Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.564048 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.567109 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.574779 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th8ss\" (UniqueName: \"kubernetes.io/projected/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-kube-api-access-th8ss\") pod \"dnsmasq-dns-84d7bcdf99-wljsb\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.574896 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-ovsdbserver-nb\") pod \"dnsmasq-dns-84d7bcdf99-wljsb\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.574932 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-config\") pod \"dnsmasq-dns-84d7bcdf99-wljsb\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.574971 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-dns-svc\") pod \"dnsmasq-dns-84d7bcdf99-wljsb\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.576027 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-ovsdbserver-nb\") pod \"dnsmasq-dns-84d7bcdf99-wljsb\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.580361 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-config\") pod \"dnsmasq-dns-84d7bcdf99-wljsb\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.580393 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-dns-svc\") pod \"dnsmasq-dns-84d7bcdf99-wljsb\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.585034 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f697c8bff-xcsxq"] Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.605887 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th8ss\" (UniqueName: \"kubernetes.io/projected/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-kube-api-access-th8ss\") pod \"dnsmasq-dns-84d7bcdf99-wljsb\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.663090 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.672274 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.678020 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl9j4\" (UniqueName: \"kubernetes.io/projected/de68a814-1b9a-4aad-9841-790f24b79e9e-kube-api-access-gl9j4\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.678234 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-ovsdbserver-sb\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.678451 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-ovsdbserver-nb\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.678535 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-dns-svc\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.678626 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-config\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: 
\"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.780623 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/571c2781-59c0-4345-9a04-09a51ceabc0d-dns-svc\") pod \"571c2781-59c0-4345-9a04-09a51ceabc0d\" (UID: \"571c2781-59c0-4345-9a04-09a51ceabc0d\") " Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.780943 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/571c2781-59c0-4345-9a04-09a51ceabc0d-config\") pod \"571c2781-59c0-4345-9a04-09a51ceabc0d\" (UID: \"571c2781-59c0-4345-9a04-09a51ceabc0d\") " Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.781023 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw97g\" (UniqueName: \"kubernetes.io/projected/571c2781-59c0-4345-9a04-09a51ceabc0d-kube-api-access-sw97g\") pod \"571c2781-59c0-4345-9a04-09a51ceabc0d\" (UID: \"571c2781-59c0-4345-9a04-09a51ceabc0d\") " Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.781334 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-ovsdbserver-nb\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.781359 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-dns-svc\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.781440 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-config\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.781514 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl9j4\" (UniqueName: \"kubernetes.io/projected/de68a814-1b9a-4aad-9841-790f24b79e9e-kube-api-access-gl9j4\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.781571 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-ovsdbserver-sb\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.781789 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/571c2781-59c0-4345-9a04-09a51ceabc0d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "571c2781-59c0-4345-9a04-09a51ceabc0d" (UID: "571c2781-59c0-4345-9a04-09a51ceabc0d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.782521 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/571c2781-59c0-4345-9a04-09a51ceabc0d-config" (OuterVolumeSpecName: "config") pod "571c2781-59c0-4345-9a04-09a51ceabc0d" (UID: "571c2781-59c0-4345-9a04-09a51ceabc0d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.782552 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-ovsdbserver-sb\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.782616 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-dns-svc\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.783354 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-config\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.785880 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-ovsdbserver-nb\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.788294 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/571c2781-59c0-4345-9a04-09a51ceabc0d-kube-api-access-sw97g" (OuterVolumeSpecName: "kube-api-access-sw97g") pod "571c2781-59c0-4345-9a04-09a51ceabc0d" (UID: "571c2781-59c0-4345-9a04-09a51ceabc0d"). InnerVolumeSpecName "kube-api-access-sw97g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.802348 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl9j4\" (UniqueName: \"kubernetes.io/projected/de68a814-1b9a-4aad-9841-790f24b79e9e-kube-api-access-gl9j4\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.883019 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw97g\" (UniqueName: \"kubernetes.io/projected/571c2781-59c0-4345-9a04-09a51ceabc0d-kube-api-access-sw97g\") on node \"crc\" DevicePath \"\"" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.883105 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/571c2781-59c0-4345-9a04-09a51ceabc0d-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.883123 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/571c2781-59c0-4345-9a04-09a51ceabc0d-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.890899 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.988739 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vr74x"] Mar 20 07:10:51 crc kubenswrapper[5136]: W0320 07:10:51.990526 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ede60bf_5bc5_4267_9849_9389df070048.slice/crio-96f4d9700a6f20f9648ff8d4f3bad201abaff41477fe15fa2f506bfcba3bded2 WatchSource:0}: Error finding container 96f4d9700a6f20f9648ff8d4f3bad201abaff41477fe15fa2f506bfcba3bded2: Status 404 returned error can't find the container with id 96f4d9700a6f20f9648ff8d4f3bad201abaff41477fe15fa2f506bfcba3bded2 Mar 20 07:10:52 crc kubenswrapper[5136]: I0320 07:10:52.083925 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" Mar 20 07:10:52 crc kubenswrapper[5136]: I0320 07:10:52.083972 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" event={"ID":"571c2781-59c0-4345-9a04-09a51ceabc0d","Type":"ContainerDied","Data":"603105be24fa6a6cb4daf90aa8e7faaf32594389e035752cfb2e9cf8a54926bd"} Mar 20 07:10:52 crc kubenswrapper[5136]: I0320 07:10:52.086969 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vr74x" event={"ID":"0ede60bf-5bc5-4267-9849-9389df070048","Type":"ContainerStarted","Data":"96f4d9700a6f20f9648ff8d4f3bad201abaff41477fe15fa2f506bfcba3bded2"} Mar 20 07:10:52 crc kubenswrapper[5136]: I0320 07:10:52.103882 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84d7bcdf99-wljsb"] Mar 20 07:10:52 crc kubenswrapper[5136]: W0320 07:10:52.109550 5136 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f0b6cee_c719_4ef8_a97a_f4ecbdac4e50.slice/crio-feddb6f7ff0c0085762a83a31ade90d04bd037a88d8fdb5ca4054aa0e23e524e WatchSource:0}: Error finding container feddb6f7ff0c0085762a83a31ade90d04bd037a88d8fdb5ca4054aa0e23e524e: Status 404 returned error can't find the container with id feddb6f7ff0c0085762a83a31ade90d04bd037a88d8fdb5ca4054aa0e23e524e Mar 20 07:10:52 crc kubenswrapper[5136]: I0320 07:10:52.150562 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b5dffb47-zd72x"] Mar 20 07:10:52 crc kubenswrapper[5136]: I0320 07:10:52.156661 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54b5dffb47-zd72x"] Mar 20 07:10:52 crc kubenswrapper[5136]: I0320 07:10:52.308528 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f697c8bff-xcsxq"] Mar 20 07:10:52 crc kubenswrapper[5136]: W0320 07:10:52.315569 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde68a814_1b9a_4aad_9841_790f24b79e9e.slice/crio-2be92f8d39a9b4d63c74f7df0ae6d838690d70faf780735edb28289a75abf559 WatchSource:0}: Error finding container 2be92f8d39a9b4d63c74f7df0ae6d838690d70faf780735edb28289a75abf559: Status 404 returned error can't find the container with id 2be92f8d39a9b4d63c74f7df0ae6d838690d70faf780735edb28289a75abf559 Mar 20 07:10:52 crc kubenswrapper[5136]: I0320 07:10:52.405486 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="571c2781-59c0-4345-9a04-09a51ceabc0d" path="/var/lib/kubelet/pods/571c2781-59c0-4345-9a04-09a51ceabc0d/volumes" Mar 20 07:10:52 crc kubenswrapper[5136]: E0320 07:10:52.887672 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" Mar 20 07:10:52 crc kubenswrapper[5136]: E0320 07:10:52.888376 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rkmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*
true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-64696987c5-86bqj_openstack(1dd8ad22-4946-4d2c-b2cb-a38f42166c88): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 20 07:10:52 crc kubenswrapper[5136]: E0320 07:10:52.889808 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-64696987c5-86bqj" podUID="1dd8ad22-4946-4d2c-b2cb-a38f42166c88" Mar 20 07:10:52 crc kubenswrapper[5136]: E0320 07:10:52.896980 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" Mar 20 07:10:52 crc kubenswrapper[5136]: E0320 07:10:52.897111 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ln7fj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-854f47b4f9-rcppb_openstack(90e44514-0ddc-4151-ad00-cf458d5adf9e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 20 07:10:52 crc kubenswrapper[5136]: E0320 07:10:52.898247 5136 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" podUID="90e44514-0ddc-4151-ad00-cf458d5adf9e" Mar 20 07:10:52 crc kubenswrapper[5136]: E0320 07:10:52.905053 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" Mar 20 07:10:52 crc kubenswrapper[5136]: E0320 07:10:52.905178 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-67tl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5448ff6dc7-qtnm5_openstack(89d4f1a0-0e10-49e6-98bc-43920e03caba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 20 07:10:52 crc kubenswrapper[5136]: E0320 07:10:52.906240 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-5448ff6dc7-qtnm5" podUID="89d4f1a0-0e10-49e6-98bc-43920e03caba" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.130271 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" event={"ID":"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50","Type":"ContainerStarted","Data":"feddb6f7ff0c0085762a83a31ade90d04bd037a88d8fdb5ca4054aa0e23e524e"} Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.137345 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" event={"ID":"de68a814-1b9a-4aad-9841-790f24b79e9e","Type":"ContainerStarted","Data":"2be92f8d39a9b4d63c74f7df0ae6d838690d70faf780735edb28289a75abf559"} Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.143118 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"261514f8-7734-423d-b15a-e83fdc2a85fd","Type":"ContainerStarted","Data":"3fb57e476621bbb7ed332df0284e4781d9823f68927c324a91e0d733e1eccb29"} Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.818349 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64696987c5-86bqj" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.828710 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5448ff6dc7-qtnm5" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.841881 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.919661 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln7fj\" (UniqueName: \"kubernetes.io/projected/90e44514-0ddc-4151-ad00-cf458d5adf9e-kube-api-access-ln7fj\") pod \"90e44514-0ddc-4151-ad00-cf458d5adf9e\" (UID: \"90e44514-0ddc-4151-ad00-cf458d5adf9e\") " Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.920119 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67tl9\" (UniqueName: \"kubernetes.io/projected/89d4f1a0-0e10-49e6-98bc-43920e03caba-kube-api-access-67tl9\") pod \"89d4f1a0-0e10-49e6-98bc-43920e03caba\" (UID: \"89d4f1a0-0e10-49e6-98bc-43920e03caba\") " Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.920154 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90e44514-0ddc-4151-ad00-cf458d5adf9e-config\") pod \"90e44514-0ddc-4151-ad00-cf458d5adf9e\" (UID: \"90e44514-0ddc-4151-ad00-cf458d5adf9e\") " Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.920232 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-config\") pod \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\" (UID: \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\") " Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.920264 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rkmb\" (UniqueName: \"kubernetes.io/projected/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-kube-api-access-9rkmb\") pod \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\" (UID: \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\") " Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.920282 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-dns-svc\") pod \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\" (UID: \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\") " Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.920304 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90e44514-0ddc-4151-ad00-cf458d5adf9e-dns-svc\") pod \"90e44514-0ddc-4151-ad00-cf458d5adf9e\" (UID: \"90e44514-0ddc-4151-ad00-cf458d5adf9e\") " Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.920337 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d4f1a0-0e10-49e6-98bc-43920e03caba-config\") pod \"89d4f1a0-0e10-49e6-98bc-43920e03caba\" (UID: \"89d4f1a0-0e10-49e6-98bc-43920e03caba\") " Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.920954 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1dd8ad22-4946-4d2c-b2cb-a38f42166c88" (UID: "1dd8ad22-4946-4d2c-b2cb-a38f42166c88"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.920985 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-config" (OuterVolumeSpecName: "config") pod "1dd8ad22-4946-4d2c-b2cb-a38f42166c88" (UID: "1dd8ad22-4946-4d2c-b2cb-a38f42166c88"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.921843 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90e44514-0ddc-4151-ad00-cf458d5adf9e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "90e44514-0ddc-4151-ad00-cf458d5adf9e" (UID: "90e44514-0ddc-4151-ad00-cf458d5adf9e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.922129 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89d4f1a0-0e10-49e6-98bc-43920e03caba-config" (OuterVolumeSpecName: "config") pod "89d4f1a0-0e10-49e6-98bc-43920e03caba" (UID: "89d4f1a0-0e10-49e6-98bc-43920e03caba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.922511 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.922575 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.922585 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90e44514-0ddc-4151-ad00-cf458d5adf9e-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.922593 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d4f1a0-0e10-49e6-98bc-43920e03caba-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.923517 5136 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90e44514-0ddc-4151-ad00-cf458d5adf9e-config" (OuterVolumeSpecName: "config") pod "90e44514-0ddc-4151-ad00-cf458d5adf9e" (UID: "90e44514-0ddc-4151-ad00-cf458d5adf9e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.923867 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90e44514-0ddc-4151-ad00-cf458d5adf9e-kube-api-access-ln7fj" (OuterVolumeSpecName: "kube-api-access-ln7fj") pod "90e44514-0ddc-4151-ad00-cf458d5adf9e" (UID: "90e44514-0ddc-4151-ad00-cf458d5adf9e"). InnerVolumeSpecName "kube-api-access-ln7fj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.924111 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89d4f1a0-0e10-49e6-98bc-43920e03caba-kube-api-access-67tl9" (OuterVolumeSpecName: "kube-api-access-67tl9") pod "89d4f1a0-0e10-49e6-98bc-43920e03caba" (UID: "89d4f1a0-0e10-49e6-98bc-43920e03caba"). InnerVolumeSpecName "kube-api-access-67tl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.926512 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-kube-api-access-9rkmb" (OuterVolumeSpecName: "kube-api-access-9rkmb") pod "1dd8ad22-4946-4d2c-b2cb-a38f42166c88" (UID: "1dd8ad22-4946-4d2c-b2cb-a38f42166c88"). InnerVolumeSpecName "kube-api-access-9rkmb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.024171 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rkmb\" (UniqueName: \"kubernetes.io/projected/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-kube-api-access-9rkmb\") on node \"crc\" DevicePath \"\"" Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.024220 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln7fj\" (UniqueName: \"kubernetes.io/projected/90e44514-0ddc-4151-ad00-cf458d5adf9e-kube-api-access-ln7fj\") on node \"crc\" DevicePath \"\"" Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.024230 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67tl9\" (UniqueName: \"kubernetes.io/projected/89d4f1a0-0e10-49e6-98bc-43920e03caba-kube-api-access-67tl9\") on node \"crc\" DevicePath \"\"" Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.024293 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90e44514-0ddc-4151-ad00-cf458d5adf9e-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.152835 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5448ff6dc7-qtnm5" Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.152832 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5448ff6dc7-qtnm5" event={"ID":"89d4f1a0-0e10-49e6-98bc-43920e03caba","Type":"ContainerDied","Data":"b1739b4168d54cbd082e621d7a6b119af2a011d7e88735613ee365c8955655aa"} Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.155334 5136 generic.go:334] "Generic (PLEG): container finished" podID="7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50" containerID="ef19880c1a92489e9859c2d59cb4fca431790d3ea28565e609120fa9d9f18175" exitCode=0 Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.155398 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" event={"ID":"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50","Type":"ContainerDied","Data":"ef19880c1a92489e9859c2d59cb4fca431790d3ea28565e609120fa9d9f18175"} Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.157241 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" event={"ID":"90e44514-0ddc-4151-ad00-cf458d5adf9e","Type":"ContainerDied","Data":"a835263469ad6ded88770537144f67af862b9ca0e4d044183f6bbb2b8ff9cb68"} Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.157316 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.161193 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c355061d-c5fd-4655-aa7e-37b5a40a0400","Type":"ContainerStarted","Data":"746322d2db71676f9bd31243b4275dc45e3e2e6e4004a673722e12fe50a71b71"} Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.162856 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64696987c5-86bqj" Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.162899 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64696987c5-86bqj" event={"ID":"1dd8ad22-4946-4d2c-b2cb-a38f42166c88","Type":"ContainerDied","Data":"0124fdc9a4d5b014a0d0715276e212228f8d98c3844a190558ceec06845de8bc"} Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.316584 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-854f47b4f9-rcppb"] Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.337848 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-854f47b4f9-rcppb"] Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.353632 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5448ff6dc7-qtnm5"] Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.361690 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5448ff6dc7-qtnm5"] Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.379849 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64696987c5-86bqj"] Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.429202 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89d4f1a0-0e10-49e6-98bc-43920e03caba" path="/var/lib/kubelet/pods/89d4f1a0-0e10-49e6-98bc-43920e03caba/volumes" Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.432296 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90e44514-0ddc-4151-ad00-cf458d5adf9e" path="/var/lib/kubelet/pods/90e44514-0ddc-4151-ad00-cf458d5adf9e/volumes" Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.432759 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64696987c5-86bqj"] Mar 20 07:10:56 crc kubenswrapper[5136]: I0320 07:10:56.405807 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1dd8ad22-4946-4d2c-b2cb-a38f42166c88" path="/var/lib/kubelet/pods/1dd8ad22-4946-4d2c-b2cb-a38f42166c88/volumes" Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.218769 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a","Type":"ContainerStarted","Data":"7f81f78f97fc5d48f48b6354b794c050f707e5b35fc6d46c7df2de9e4878960b"} Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.220774 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" event={"ID":"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50","Type":"ContainerStarted","Data":"73f6a2a35fb846e8690cd4c8aca3693f1ecdc892a70193be9111a28cf3b52bf3"} Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.221718 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.226729 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"23c10323-3c49-4f00-8bf7-319e6f5834d0","Type":"ContainerStarted","Data":"1fc02239dd53f1d667b671e4bd013df8e870f1112190dc7d2376ed6447168080"} Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.227855 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8b1461d1-f963-40b0-8cad-a5b2735eedcc","Type":"ContainerStarted","Data":"c02c0e7e0e6b0a33a002d424a3ac60fcdca9be308ea7764e0da41ae85bb3639a"} Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.230868 5136 generic.go:334] "Generic (PLEG): container finished" podID="de68a814-1b9a-4aad-9841-790f24b79e9e" containerID="34ee2cccbe30631969d3aa93a1b8264849d8d5334e0c97572f21e0a6e95e8e26" exitCode=0 Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.230932 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" 
event={"ID":"de68a814-1b9a-4aad-9841-790f24b79e9e","Type":"ContainerDied","Data":"34ee2cccbe30631969d3aa93a1b8264849d8d5334e0c97572f21e0a6e95e8e26"} Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.236989 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vr74x" event={"ID":"0ede60bf-5bc5-4267-9849-9389df070048","Type":"ContainerStarted","Data":"c83952221ac9ae15d237b01aa417d2a8651bd6786c0034250cebe0e17be31690"} Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.239454 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"960739f0-c4a5-49c6-8e2a-9452815cf1a9","Type":"ContainerStarted","Data":"184e304c1ac08ec0deea0a800adeaeabbaf3a333a8f4d43b893a721e45afd9b7"} Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.239535 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.240053 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" podStartSLOduration=10.332007374 podStartE2EDuration="11.240034565s" podCreationTimestamp="2026-03-20 07:10:51 +0000 UTC" firstStartedPulling="2026-03-20 07:10:52.112386751 +0000 UTC m=+1284.371697902" lastFinishedPulling="2026-03-20 07:10:53.020413942 +0000 UTC m=+1285.279725093" observedRunningTime="2026-03-20 07:11:02.23557288 +0000 UTC m=+1294.494884031" watchObservedRunningTime="2026-03-20 07:11:02.240034565 +0000 UTC m=+1294.499345716" Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.241776 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cf624d46-ce35-4e7f-b463-4b0eba006ded","Type":"ContainerStarted","Data":"0a42176f6839fd2b1fa46f8a90c2d73b4c4eaa11385cb9c81bf9e24e01ecf323"} Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.241995 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/kube-state-metrics-0" Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.245207 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ldp4w" event={"ID":"5f48f721-42c9-4f2b-a461-2ad47a1dea3d","Type":"ContainerStarted","Data":"251425a84f071e1549cd0fc562a37265667b19ac3157615c6641ae85a12bec56"} Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.248063 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"210df7e5-1603-40ec-bfa4-7b85525823b3","Type":"ContainerStarted","Data":"efcc419ead7f776e9a762552e20519a145846b32963d5cf946e1216d1b57e9d3"} Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.250315 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gnwt6" event={"ID":"04ee32c0-35eb-488d-b166-0ad8a8d09f48","Type":"ContainerStarted","Data":"a111f866aa708f4a724ab9b641db43d52d756bb1dc91884a9311a1d65141faff"} Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.250792 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-gnwt6" Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.331510 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=13.313012844 podStartE2EDuration="27.33148832s" podCreationTimestamp="2026-03-20 07:10:35 +0000 UTC" firstStartedPulling="2026-03-20 07:10:47.659095023 +0000 UTC m=+1279.918406174" lastFinishedPulling="2026-03-20 07:11:01.677570509 +0000 UTC m=+1293.936881650" observedRunningTime="2026-03-20 07:11:02.313041711 +0000 UTC m=+1294.572352852" watchObservedRunningTime="2026-03-20 07:11:02.33148832 +0000 UTC m=+1294.590799471" Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.336807 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-gnwt6" podStartSLOduration=10.088360372 podStartE2EDuration="23.336790842s" 
podCreationTimestamp="2026-03-20 07:10:39 +0000 UTC" firstStartedPulling="2026-03-20 07:10:47.718090143 +0000 UTC m=+1279.977401294" lastFinishedPulling="2026-03-20 07:11:00.966520583 +0000 UTC m=+1293.225831764" observedRunningTime="2026-03-20 07:11:02.325506559 +0000 UTC m=+1294.584817710" watchObservedRunningTime="2026-03-20 07:11:02.336790842 +0000 UTC m=+1294.596101993" Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.344275 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=15.151644978 podStartE2EDuration="29.344262178s" podCreationTimestamp="2026-03-20 07:10:33 +0000 UTC" firstStartedPulling="2026-03-20 07:10:46.075282105 +0000 UTC m=+1278.334593276" lastFinishedPulling="2026-03-20 07:11:00.267899325 +0000 UTC m=+1292.527210476" observedRunningTime="2026-03-20 07:11:02.341798343 +0000 UTC m=+1294.601109494" watchObservedRunningTime="2026-03-20 07:11:02.344262178 +0000 UTC m=+1294.603573329" Mar 20 07:11:03 crc kubenswrapper[5136]: I0320 07:11:03.263749 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a","Type":"ContainerStarted","Data":"55e70c80be714d08791bbb875a2885eb056808546361bafa1ce59b4a2b4afd94"} Mar 20 07:11:03 crc kubenswrapper[5136]: I0320 07:11:03.267423 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8b1461d1-f963-40b0-8cad-a5b2735eedcc","Type":"ContainerStarted","Data":"ad85344499cd2c34ea152d61f6efd8d5a2edf8814c85572a74d76108d11d3655"} Mar 20 07:11:03 crc kubenswrapper[5136]: I0320 07:11:03.272342 5136 generic.go:334] "Generic (PLEG): container finished" podID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerID="251425a84f071e1549cd0fc562a37265667b19ac3157615c6641ae85a12bec56" exitCode=0 Mar 20 07:11:03 crc kubenswrapper[5136]: I0320 07:11:03.272414 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-ldp4w" event={"ID":"5f48f721-42c9-4f2b-a461-2ad47a1dea3d","Type":"ContainerDied","Data":"251425a84f071e1549cd0fc562a37265667b19ac3157615c6641ae85a12bec56"} Mar 20 07:11:03 crc kubenswrapper[5136]: I0320 07:11:03.276127 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" event={"ID":"de68a814-1b9a-4aad-9841-790f24b79e9e","Type":"ContainerStarted","Data":"bd48417c8a8842903b86c0b0297625af601775c012346c6e4a42ced3c9d81a5c"} Mar 20 07:11:03 crc kubenswrapper[5136]: I0320 07:11:03.276173 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:11:03 crc kubenswrapper[5136]: I0320 07:11:03.316174 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=10.099144412 podStartE2EDuration="23.316148489s" podCreationTimestamp="2026-03-20 07:10:40 +0000 UTC" firstStartedPulling="2026-03-20 07:10:47.971200163 +0000 UTC m=+1280.230511314" lastFinishedPulling="2026-03-20 07:11:01.18820423 +0000 UTC m=+1293.447515391" observedRunningTime="2026-03-20 07:11:03.294227724 +0000 UTC m=+1295.553538915" watchObservedRunningTime="2026-03-20 07:11:03.316148489 +0000 UTC m=+1295.575459660" Mar 20 07:11:03 crc kubenswrapper[5136]: I0320 07:11:03.346359 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-vr74x" podStartSLOduration=2.702363294 podStartE2EDuration="12.346338234s" podCreationTimestamp="2026-03-20 07:10:51 +0000 UTC" firstStartedPulling="2026-03-20 07:10:51.992498102 +0000 UTC m=+1284.251809253" lastFinishedPulling="2026-03-20 07:11:01.636473032 +0000 UTC m=+1293.895784193" observedRunningTime="2026-03-20 07:11:03.327834913 +0000 UTC m=+1295.587146074" watchObservedRunningTime="2026-03-20 07:11:03.346338234 +0000 UTC m=+1295.605649405" Mar 20 07:11:03 crc kubenswrapper[5136]: I0320 07:11:03.357938 5136 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" podStartSLOduration=11.036382686 podStartE2EDuration="12.357912496s" podCreationTimestamp="2026-03-20 07:10:51 +0000 UTC" firstStartedPulling="2026-03-20 07:10:52.317738041 +0000 UTC m=+1284.577049192" lastFinishedPulling="2026-03-20 07:10:53.639267851 +0000 UTC m=+1285.898579002" observedRunningTime="2026-03-20 07:11:03.350709827 +0000 UTC m=+1295.610020998" watchObservedRunningTime="2026-03-20 07:11:03.357912496 +0000 UTC m=+1295.617223657" Mar 20 07:11:03 crc kubenswrapper[5136]: I0320 07:11:03.383978 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=12.629790513 podStartE2EDuration="25.383961735s" podCreationTimestamp="2026-03-20 07:10:38 +0000 UTC" firstStartedPulling="2026-03-20 07:10:48.433068708 +0000 UTC m=+1280.692379859" lastFinishedPulling="2026-03-20 07:11:01.18723993 +0000 UTC m=+1293.446551081" observedRunningTime="2026-03-20 07:11:03.382194772 +0000 UTC m=+1295.641505963" watchObservedRunningTime="2026-03-20 07:11:03.383961735 +0000 UTC m=+1295.643272896" Mar 20 07:11:03 crc kubenswrapper[5136]: I0320 07:11:03.688142 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 20 07:11:04 crc kubenswrapper[5136]: I0320 07:11:04.286240 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ldp4w" event={"ID":"5f48f721-42c9-4f2b-a461-2ad47a1dea3d","Type":"ContainerStarted","Data":"f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c"} Mar 20 07:11:04 crc kubenswrapper[5136]: I0320 07:11:04.288362 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ldp4w" event={"ID":"5f48f721-42c9-4f2b-a461-2ad47a1dea3d","Type":"ContainerStarted","Data":"5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58"} Mar 20 07:11:04 crc 
kubenswrapper[5136]: I0320 07:11:04.325487 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-ldp4w" podStartSLOduration=13.486604965 podStartE2EDuration="25.325460094s" podCreationTimestamp="2026-03-20 07:10:39 +0000 UTC" firstStartedPulling="2026-03-20 07:10:48.92353708 +0000 UTC m=+1281.182848241" lastFinishedPulling="2026-03-20 07:11:00.762392229 +0000 UTC m=+1293.021703370" observedRunningTime="2026-03-20 07:11:04.308086306 +0000 UTC m=+1296.567397467" watchObservedRunningTime="2026-03-20 07:11:04.325460094 +0000 UTC m=+1296.584771245" Mar 20 07:11:04 crc kubenswrapper[5136]: I0320 07:11:04.689756 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 20 07:11:04 crc kubenswrapper[5136]: I0320 07:11:04.777595 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:11:04 crc kubenswrapper[5136]: I0320 07:11:04.777650 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:11:05 crc kubenswrapper[5136]: I0320 07:11:05.981004 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 20 07:11:06 crc kubenswrapper[5136]: I0320 07:11:06.046917 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 20 07:11:06 crc kubenswrapper[5136]: I0320 07:11:06.312316 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 20 07:11:06 crc kubenswrapper[5136]: I0320 07:11:06.356378 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 20 07:11:06 crc kubenswrapper[5136]: I0320 07:11:06.725918 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 20 07:11:07 crc 
kubenswrapper[5136]: I0320 07:11:07.318291 5136 generic.go:334] "Generic (PLEG): container finished" podID="23c10323-3c49-4f00-8bf7-319e6f5834d0" containerID="1fc02239dd53f1d667b671e4bd013df8e870f1112190dc7d2376ed6447168080" exitCode=0 Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.318351 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"23c10323-3c49-4f00-8bf7-319e6f5834d0","Type":"ContainerDied","Data":"1fc02239dd53f1d667b671e4bd013df8e870f1112190dc7d2376ed6447168080"} Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.320977 5136 generic.go:334] "Generic (PLEG): container finished" podID="210df7e5-1603-40ec-bfa4-7b85525823b3" containerID="efcc419ead7f776e9a762552e20519a145846b32963d5cf946e1216d1b57e9d3" exitCode=0 Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.321002 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"210df7e5-1603-40ec-bfa4-7b85525823b3","Type":"ContainerDied","Data":"efcc419ead7f776e9a762552e20519a145846b32963d5cf946e1216d1b57e9d3"} Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.409526 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.556626 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.562108 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.565113 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.569704 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.587153 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.587339 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-bcbdk" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.587365 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.594680 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7acbc76f-ff83-451e-826f-5fd1f977f74f-config\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.594739 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.594761 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " 
pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.594859 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7acbc76f-ff83-451e-826f-5fd1f977f74f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.594928 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.594975 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdvcz\" (UniqueName: \"kubernetes.io/projected/7acbc76f-ff83-451e-826f-5fd1f977f74f-kube-api-access-jdvcz\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.594995 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7acbc76f-ff83-451e-826f-5fd1f977f74f-scripts\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.697129 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7acbc76f-ff83-451e-826f-5fd1f977f74f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.697207 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.697244 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdvcz\" (UniqueName: \"kubernetes.io/projected/7acbc76f-ff83-451e-826f-5fd1f977f74f-kube-api-access-jdvcz\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.697268 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7acbc76f-ff83-451e-826f-5fd1f977f74f-scripts\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.697323 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7acbc76f-ff83-451e-826f-5fd1f977f74f-config\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.697342 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.697360 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.699663 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7acbc76f-ff83-451e-826f-5fd1f977f74f-scripts\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.699726 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7acbc76f-ff83-451e-826f-5fd1f977f74f-config\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.700902 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7acbc76f-ff83-451e-826f-5fd1f977f74f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.712795 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.713295 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.713722 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.716926 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdvcz\" (UniqueName: \"kubernetes.io/projected/7acbc76f-ff83-451e-826f-5fd1f977f74f-kube-api-access-jdvcz\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.906472 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 20 07:11:08 crc kubenswrapper[5136]: I0320 07:11:08.151714 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 20 07:11:08 crc kubenswrapper[5136]: W0320 07:11:08.158284 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7acbc76f_ff83_451e_826f_5fd1f977f74f.slice/crio-f77fa3b0a75190383cf99cb089377cd7d03639ec4bda09b8550cc55a30016174 WatchSource:0}: Error finding container f77fa3b0a75190383cf99cb089377cd7d03639ec4bda09b8550cc55a30016174: Status 404 returned error can't find the container with id f77fa3b0a75190383cf99cb089377cd7d03639ec4bda09b8550cc55a30016174 Mar 20 07:11:08 crc kubenswrapper[5136]: I0320 07:11:08.330202 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"210df7e5-1603-40ec-bfa4-7b85525823b3","Type":"ContainerStarted","Data":"2f108d33fa61f242b6db0d1497e869fa649b09a841ba1ec8a5200036f1da6f44"} Mar 20 07:11:08 crc kubenswrapper[5136]: I0320 07:11:08.331469 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"7acbc76f-ff83-451e-826f-5fd1f977f74f","Type":"ContainerStarted","Data":"f77fa3b0a75190383cf99cb089377cd7d03639ec4bda09b8550cc55a30016174"} Mar 20 07:11:08 crc kubenswrapper[5136]: I0320 07:11:08.333559 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"23c10323-3c49-4f00-8bf7-319e6f5834d0","Type":"ContainerStarted","Data":"bb7d36a7a8740f26ea095cd64110aa6bbc822a4f8057c379b5b6f3c659fbfbb2"} Mar 20 07:11:08 crc kubenswrapper[5136]: I0320 07:11:08.353760 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=23.952991222 podStartE2EDuration="37.353739624s" podCreationTimestamp="2026-03-20 07:10:31 +0000 UTC" firstStartedPulling="2026-03-20 07:10:47.717429173 +0000 UTC m=+1279.976740324" lastFinishedPulling="2026-03-20 07:11:01.118177575 +0000 UTC m=+1293.377488726" observedRunningTime="2026-03-20 07:11:08.350063644 +0000 UTC m=+1300.609374815" watchObservedRunningTime="2026-03-20 07:11:08.353739624 +0000 UTC m=+1300.613050775" Mar 20 07:11:08 crc kubenswrapper[5136]: I0320 07:11:08.373976 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=24.251966962 podStartE2EDuration="38.373960728s" podCreationTimestamp="2026-03-20 07:10:30 +0000 UTC" firstStartedPulling="2026-03-20 07:10:47.420215615 +0000 UTC m=+1279.679526766" lastFinishedPulling="2026-03-20 07:11:01.542209371 +0000 UTC m=+1293.801520532" observedRunningTime="2026-03-20 07:11:08.368545304 +0000 UTC m=+1300.627856455" watchObservedRunningTime="2026-03-20 07:11:08.373960728 +0000 UTC m=+1300.633271879" Mar 20 07:11:08 crc kubenswrapper[5136]: I0320 07:11:08.686065 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 20 07:11:10 crc kubenswrapper[5136]: E0320 07:11:10.164827 5136 upgradeaware.go:427] Error proxying data from client to backend: 
readfrom tcp 38.102.83.163:42736->38.102.83.163:37797: write tcp 38.102.83.163:42736->38.102.83.163:37797: write: connection reset by peer Mar 20 07:11:10 crc kubenswrapper[5136]: I0320 07:11:10.353730 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7acbc76f-ff83-451e-826f-5fd1f977f74f","Type":"ContainerStarted","Data":"e19dbd1e39e5efc5a6a0b99e7790d9fb9e1136146e9bec8dcf45b6006114f4bf"} Mar 20 07:11:10 crc kubenswrapper[5136]: I0320 07:11:10.353990 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7acbc76f-ff83-451e-826f-5fd1f977f74f","Type":"ContainerStarted","Data":"91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242"} Mar 20 07:11:10 crc kubenswrapper[5136]: I0320 07:11:10.354182 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 20 07:11:10 crc kubenswrapper[5136]: I0320 07:11:10.371859 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.2593994840000002 podStartE2EDuration="3.37183822s" podCreationTimestamp="2026-03-20 07:11:07 +0000 UTC" firstStartedPulling="2026-03-20 07:11:08.161034137 +0000 UTC m=+1300.420345288" lastFinishedPulling="2026-03-20 07:11:09.273472863 +0000 UTC m=+1301.532784024" observedRunningTime="2026-03-20 07:11:10.370022065 +0000 UTC m=+1302.629333246" watchObservedRunningTime="2026-03-20 07:11:10.37183822 +0000 UTC m=+1302.631149371" Mar 20 07:11:11 crc kubenswrapper[5136]: I0320 07:11:11.675067 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:11:11 crc kubenswrapper[5136]: I0320 07:11:11.815749 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 20 07:11:11 crc kubenswrapper[5136]: I0320 07:11:11.816113 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/openstack-galera-0" Mar 20 07:11:11 crc kubenswrapper[5136]: I0320 07:11:11.892720 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:11:11 crc kubenswrapper[5136]: I0320 07:11:11.942627 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84d7bcdf99-wljsb"] Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.150305 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.367700 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" podUID="7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50" containerName="dnsmasq-dns" containerID="cri-o://73f6a2a35fb846e8690cd4c8aca3693f1ecdc892a70193be9111a28cf3b52bf3" gracePeriod=10 Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.436739 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.805264 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.892894 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-ovsdbserver-nb\") pod \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.893024 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-dns-svc\") pod \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.893068 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-config\") pod \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.893151 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th8ss\" (UniqueName: \"kubernetes.io/projected/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-kube-api-access-th8ss\") pod \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.901216 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-kube-api-access-th8ss" (OuterVolumeSpecName: "kube-api-access-th8ss") pod "7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50" (UID: "7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50"). InnerVolumeSpecName "kube-api-access-th8ss". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.934310 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-config" (OuterVolumeSpecName: "config") pod "7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50" (UID: "7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.943355 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50" (UID: "7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.950252 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50" (UID: "7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.995358 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.995393 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.995406 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.995418 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th8ss\" (UniqueName: \"kubernetes.io/projected/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-kube-api-access-th8ss\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.291986 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.292070 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.376429 5136 generic.go:334] "Generic (PLEG): container finished" podID="7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50" containerID="73f6a2a35fb846e8690cd4c8aca3693f1ecdc892a70193be9111a28cf3b52bf3" exitCode=0 Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.376476 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.376510 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" event={"ID":"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50","Type":"ContainerDied","Data":"73f6a2a35fb846e8690cd4c8aca3693f1ecdc892a70193be9111a28cf3b52bf3"} Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.376541 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" event={"ID":"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50","Type":"ContainerDied","Data":"feddb6f7ff0c0085762a83a31ade90d04bd037a88d8fdb5ca4054aa0e23e524e"} Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.376556 5136 scope.go:117] "RemoveContainer" containerID="73f6a2a35fb846e8690cd4c8aca3693f1ecdc892a70193be9111a28cf3b52bf3" Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.406619 5136 scope.go:117] "RemoveContainer" containerID="ef19880c1a92489e9859c2d59cb4fca431790d3ea28565e609120fa9d9f18175" Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.414791 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84d7bcdf99-wljsb"] Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.425246 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84d7bcdf99-wljsb"] Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.437696 5136 scope.go:117] "RemoveContainer" containerID="73f6a2a35fb846e8690cd4c8aca3693f1ecdc892a70193be9111a28cf3b52bf3" Mar 20 07:11:13 crc kubenswrapper[5136]: E0320 07:11:13.438412 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73f6a2a35fb846e8690cd4c8aca3693f1ecdc892a70193be9111a28cf3b52bf3\": container with ID starting with 73f6a2a35fb846e8690cd4c8aca3693f1ecdc892a70193be9111a28cf3b52bf3 not found: ID does not exist" 
containerID="73f6a2a35fb846e8690cd4c8aca3693f1ecdc892a70193be9111a28cf3b52bf3" Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.438459 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73f6a2a35fb846e8690cd4c8aca3693f1ecdc892a70193be9111a28cf3b52bf3"} err="failed to get container status \"73f6a2a35fb846e8690cd4c8aca3693f1ecdc892a70193be9111a28cf3b52bf3\": rpc error: code = NotFound desc = could not find container \"73f6a2a35fb846e8690cd4c8aca3693f1ecdc892a70193be9111a28cf3b52bf3\": container with ID starting with 73f6a2a35fb846e8690cd4c8aca3693f1ecdc892a70193be9111a28cf3b52bf3 not found: ID does not exist" Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.438484 5136 scope.go:117] "RemoveContainer" containerID="ef19880c1a92489e9859c2d59cb4fca431790d3ea28565e609120fa9d9f18175" Mar 20 07:11:13 crc kubenswrapper[5136]: E0320 07:11:13.438986 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef19880c1a92489e9859c2d59cb4fca431790d3ea28565e609120fa9d9f18175\": container with ID starting with ef19880c1a92489e9859c2d59cb4fca431790d3ea28565e609120fa9d9f18175 not found: ID does not exist" containerID="ef19880c1a92489e9859c2d59cb4fca431790d3ea28565e609120fa9d9f18175" Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.439029 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef19880c1a92489e9859c2d59cb4fca431790d3ea28565e609120fa9d9f18175"} err="failed to get container status \"ef19880c1a92489e9859c2d59cb4fca431790d3ea28565e609120fa9d9f18175\": rpc error: code = NotFound desc = could not find container \"ef19880c1a92489e9859c2d59cb4fca431790d3ea28565e609120fa9d9f18175\": container with ID starting with ef19880c1a92489e9859c2d59cb4fca431790d3ea28565e609120fa9d9f18175 not found: ID does not exist" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.407330 5136 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50" path="/var/lib/kubelet/pods/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50/volumes" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.699568 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e762-account-create-update-5vpcp"] Mar 20 07:11:14 crc kubenswrapper[5136]: E0320 07:11:14.700164 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50" containerName="dnsmasq-dns" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.700186 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50" containerName="dnsmasq-dns" Mar 20 07:11:14 crc kubenswrapper[5136]: E0320 07:11:14.700218 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50" containerName="init" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.700225 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50" containerName="init" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.700436 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50" containerName="dnsmasq-dns" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.701035 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e762-account-create-update-5vpcp" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.702916 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.721022 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e762-account-create-update-5vpcp"] Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.747574 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-kfc9f"] Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.748531 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kfc9f" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.757798 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-kfc9f"] Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.831862 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0954a67c-5522-4338-b9e6-fc1b35b48cdb-operator-scripts\") pod \"keystone-e762-account-create-update-5vpcp\" (UID: \"0954a67c-5522-4338-b9e6-fc1b35b48cdb\") " pod="openstack/keystone-e762-account-create-update-5vpcp" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.831921 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4ghp\" (UniqueName: \"kubernetes.io/projected/0954a67c-5522-4338-b9e6-fc1b35b48cdb-kube-api-access-z4ghp\") pod \"keystone-e762-account-create-update-5vpcp\" (UID: \"0954a67c-5522-4338-b9e6-fc1b35b48cdb\") " pod="openstack/keystone-e762-account-create-update-5vpcp" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.831962 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57zvb\" 
(UniqueName: \"kubernetes.io/projected/b4e39c5d-af98-44d6-a06d-f31555db758b-kube-api-access-57zvb\") pod \"keystone-db-create-kfc9f\" (UID: \"b4e39c5d-af98-44d6-a06d-f31555db758b\") " pod="openstack/keystone-db-create-kfc9f" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.832033 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4e39c5d-af98-44d6-a06d-f31555db758b-operator-scripts\") pod \"keystone-db-create-kfc9f\" (UID: \"b4e39c5d-af98-44d6-a06d-f31555db758b\") " pod="openstack/keystone-db-create-kfc9f" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.839576 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-bk75j"] Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.840413 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-bk75j" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.849772 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-bk75j"] Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.933315 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4ghp\" (UniqueName: \"kubernetes.io/projected/0954a67c-5522-4338-b9e6-fc1b35b48cdb-kube-api-access-z4ghp\") pod \"keystone-e762-account-create-update-5vpcp\" (UID: \"0954a67c-5522-4338-b9e6-fc1b35b48cdb\") " pod="openstack/keystone-e762-account-create-update-5vpcp" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.933385 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57zvb\" (UniqueName: \"kubernetes.io/projected/b4e39c5d-af98-44d6-a06d-f31555db758b-kube-api-access-57zvb\") pod \"keystone-db-create-kfc9f\" (UID: \"b4e39c5d-af98-44d6-a06d-f31555db758b\") " pod="openstack/keystone-db-create-kfc9f" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 
07:11:14.933449 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5jxw\" (UniqueName: \"kubernetes.io/projected/4a15871b-0fd2-4db9-a42a-8e822efa35fb-kube-api-access-r5jxw\") pod \"placement-db-create-bk75j\" (UID: \"4a15871b-0fd2-4db9-a42a-8e822efa35fb\") " pod="openstack/placement-db-create-bk75j" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.933490 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a15871b-0fd2-4db9-a42a-8e822efa35fb-operator-scripts\") pod \"placement-db-create-bk75j\" (UID: \"4a15871b-0fd2-4db9-a42a-8e822efa35fb\") " pod="openstack/placement-db-create-bk75j" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.933525 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4e39c5d-af98-44d6-a06d-f31555db758b-operator-scripts\") pod \"keystone-db-create-kfc9f\" (UID: \"b4e39c5d-af98-44d6-a06d-f31555db758b\") " pod="openstack/keystone-db-create-kfc9f" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.933581 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0954a67c-5522-4338-b9e6-fc1b35b48cdb-operator-scripts\") pod \"keystone-e762-account-create-update-5vpcp\" (UID: \"0954a67c-5522-4338-b9e6-fc1b35b48cdb\") " pod="openstack/keystone-e762-account-create-update-5vpcp" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.934528 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0954a67c-5522-4338-b9e6-fc1b35b48cdb-operator-scripts\") pod \"keystone-e762-account-create-update-5vpcp\" (UID: \"0954a67c-5522-4338-b9e6-fc1b35b48cdb\") " pod="openstack/keystone-e762-account-create-update-5vpcp" Mar 20 
07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.934538 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4e39c5d-af98-44d6-a06d-f31555db758b-operator-scripts\") pod \"keystone-db-create-kfc9f\" (UID: \"b4e39c5d-af98-44d6-a06d-f31555db758b\") " pod="openstack/keystone-db-create-kfc9f" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.951997 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-a0f6-account-create-update-c9hl7"] Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.952979 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a0f6-account-create-update-c9hl7" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.958034 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.958371 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57zvb\" (UniqueName: \"kubernetes.io/projected/b4e39c5d-af98-44d6-a06d-f31555db758b-kube-api-access-57zvb\") pod \"keystone-db-create-kfc9f\" (UID: \"b4e39c5d-af98-44d6-a06d-f31555db758b\") " pod="openstack/keystone-db-create-kfc9f" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.963406 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4ghp\" (UniqueName: \"kubernetes.io/projected/0954a67c-5522-4338-b9e6-fc1b35b48cdb-kube-api-access-z4ghp\") pod \"keystone-e762-account-create-update-5vpcp\" (UID: \"0954a67c-5522-4338-b9e6-fc1b35b48cdb\") " pod="openstack/keystone-e762-account-create-update-5vpcp" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.970032 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a0f6-account-create-update-c9hl7"] Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.019308 5136 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/keystone-e762-account-create-update-5vpcp" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.036666 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81055905-a498-49a7-917a-2032a292710e-operator-scripts\") pod \"placement-a0f6-account-create-update-c9hl7\" (UID: \"81055905-a498-49a7-917a-2032a292710e\") " pod="openstack/placement-a0f6-account-create-update-c9hl7" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.036781 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5jxw\" (UniqueName: \"kubernetes.io/projected/4a15871b-0fd2-4db9-a42a-8e822efa35fb-kube-api-access-r5jxw\") pod \"placement-db-create-bk75j\" (UID: \"4a15871b-0fd2-4db9-a42a-8e822efa35fb\") " pod="openstack/placement-db-create-bk75j" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.036894 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkkrz\" (UniqueName: \"kubernetes.io/projected/81055905-a498-49a7-917a-2032a292710e-kube-api-access-fkkrz\") pod \"placement-a0f6-account-create-update-c9hl7\" (UID: \"81055905-a498-49a7-917a-2032a292710e\") " pod="openstack/placement-a0f6-account-create-update-c9hl7" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.036990 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a15871b-0fd2-4db9-a42a-8e822efa35fb-operator-scripts\") pod \"placement-db-create-bk75j\" (UID: \"4a15871b-0fd2-4db9-a42a-8e822efa35fb\") " pod="openstack/placement-db-create-bk75j" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.048421 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4a15871b-0fd2-4db9-a42a-8e822efa35fb-operator-scripts\") pod \"placement-db-create-bk75j\" (UID: \"4a15871b-0fd2-4db9-a42a-8e822efa35fb\") " pod="openstack/placement-db-create-bk75j" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.062713 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5jxw\" (UniqueName: \"kubernetes.io/projected/4a15871b-0fd2-4db9-a42a-8e822efa35fb-kube-api-access-r5jxw\") pod \"placement-db-create-bk75j\" (UID: \"4a15871b-0fd2-4db9-a42a-8e822efa35fb\") " pod="openstack/placement-db-create-bk75j" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.064772 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kfc9f" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.138935 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81055905-a498-49a7-917a-2032a292710e-operator-scripts\") pod \"placement-a0f6-account-create-update-c9hl7\" (UID: \"81055905-a498-49a7-917a-2032a292710e\") " pod="openstack/placement-a0f6-account-create-update-c9hl7" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.138972 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkkrz\" (UniqueName: \"kubernetes.io/projected/81055905-a498-49a7-917a-2032a292710e-kube-api-access-fkkrz\") pod \"placement-a0f6-account-create-update-c9hl7\" (UID: \"81055905-a498-49a7-917a-2032a292710e\") " pod="openstack/placement-a0f6-account-create-update-c9hl7" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.140224 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81055905-a498-49a7-917a-2032a292710e-operator-scripts\") pod \"placement-a0f6-account-create-update-c9hl7\" (UID: \"81055905-a498-49a7-917a-2032a292710e\") " 
pod="openstack/placement-a0f6-account-create-update-c9hl7" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.153831 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-bk75j" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.158363 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkkrz\" (UniqueName: \"kubernetes.io/projected/81055905-a498-49a7-917a-2032a292710e-kube-api-access-fkkrz\") pod \"placement-a0f6-account-create-update-c9hl7\" (UID: \"81055905-a498-49a7-917a-2032a292710e\") " pod="openstack/placement-a0f6-account-create-update-c9hl7" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.327602 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a0f6-account-create-update-c9hl7" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.397210 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-kfc9f"] Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.477062 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e762-account-create-update-5vpcp"] Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.668465 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-bk75j"] Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.736113 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"] Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.737268 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.754088 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.780468 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"] Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.824162 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.824237 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.887514 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-ovsdbserver-nb\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.887902 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-ovsdbserver-sb\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" Mar 20 07:11:15 crc 
kubenswrapper[5136]: I0320 07:11:15.887956 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-config\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.888084 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wch7g\" (UniqueName: \"kubernetes.io/projected/d103abed-83b7-44e9-bc7f-786434426647-kube-api-access-wch7g\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.888167 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-dns-svc\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.899209 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a0f6-account-create-update-c9hl7"] Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.949094 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.999253 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-ovsdbserver-nb\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.999287 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-ovsdbserver-sb\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.999310 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-config\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.999355 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wch7g\" (UniqueName: \"kubernetes.io/projected/d103abed-83b7-44e9-bc7f-786434426647-kube-api-access-wch7g\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.999386 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-dns-svc\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.000171 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-dns-svc\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.000653 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-ovsdbserver-nb\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.001168 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-ovsdbserver-sb\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.001633 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-config\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.041135 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wch7g\" (UniqueName: \"kubernetes.io/projected/d103abed-83b7-44e9-bc7f-786434426647-kube-api-access-wch7g\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.066383 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.096012 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.408946 5136 generic.go:334] "Generic (PLEG): container finished" podID="b4e39c5d-af98-44d6-a06d-f31555db758b" containerID="933fdd395d96426dd2696ed053dd4cefada8c95df3be0a52f3cc68ad68f9aebb" exitCode=0 Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.408998 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kfc9f" event={"ID":"b4e39c5d-af98-44d6-a06d-f31555db758b","Type":"ContainerDied","Data":"933fdd395d96426dd2696ed053dd4cefada8c95df3be0a52f3cc68ad68f9aebb"} Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.409255 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kfc9f" event={"ID":"b4e39c5d-af98-44d6-a06d-f31555db758b","Type":"ContainerStarted","Data":"91a625bb9304a7f8b51faab35b5fec61175c95ec85e746ddaca0e50d60fc3071"} Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.411252 5136 generic.go:334] "Generic (PLEG): container finished" podID="0954a67c-5522-4338-b9e6-fc1b35b48cdb" containerID="7056c10c02d573c52be9cb6646cfd2016f281214c76d5613dade95a4d450b824" exitCode=0 Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.411330 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e762-account-create-update-5vpcp" event={"ID":"0954a67c-5522-4338-b9e6-fc1b35b48cdb","Type":"ContainerDied","Data":"7056c10c02d573c52be9cb6646cfd2016f281214c76d5613dade95a4d450b824"} Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.411362 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e762-account-create-update-5vpcp" event={"ID":"0954a67c-5522-4338-b9e6-fc1b35b48cdb","Type":"ContainerStarted","Data":"48512ac7078a277888ee056a62b4a6b84197b81b5d0d2cb3c783d1abfeaec964"} Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.417677 5136 generic.go:334] "Generic (PLEG): container finished" 
podID="4a15871b-0fd2-4db9-a42a-8e822efa35fb" containerID="f8f2b333bca19081fee1627c5e046485a6793b7781e892f02c6a8b08ca392e57" exitCode=0 Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.417722 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bk75j" event={"ID":"4a15871b-0fd2-4db9-a42a-8e822efa35fb","Type":"ContainerDied","Data":"f8f2b333bca19081fee1627c5e046485a6793b7781e892f02c6a8b08ca392e57"} Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.417768 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bk75j" event={"ID":"4a15871b-0fd2-4db9-a42a-8e822efa35fb","Type":"ContainerStarted","Data":"cec4a6950888591d2e5df3e8e1f2226263c066e2869ac680a5e656395d2183a3"} Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.419738 5136 generic.go:334] "Generic (PLEG): container finished" podID="81055905-a498-49a7-917a-2032a292710e" containerID="ca4d6aff6fa4147c69ade98576093b5726d3ffc5a53c4a7f48a1261885cf9eaf" exitCode=0 Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.419792 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a0f6-account-create-update-c9hl7" event={"ID":"81055905-a498-49a7-917a-2032a292710e","Type":"ContainerDied","Data":"ca4d6aff6fa4147c69ade98576093b5726d3ffc5a53c4a7f48a1261885cf9eaf"} Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.419816 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a0f6-account-create-update-c9hl7" event={"ID":"81055905-a498-49a7-917a-2032a292710e","Type":"ContainerStarted","Data":"b0ee3bca0a6e172f1f5ecea08b36c9a75b540d345edc1a90ca2b72e664d06260"} Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.550914 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"] Mar 20 07:11:16 crc kubenswrapper[5136]: W0320 07:11:16.553385 5136 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd103abed_83b7_44e9_bc7f_786434426647.slice/crio-5e9c9cbaa5f048deb6cd641b68b15e9585b27cf0df0654e28450ce6a42c152c0 WatchSource:0}: Error finding container 5e9c9cbaa5f048deb6cd641b68b15e9585b27cf0df0654e28450ce6a42c152c0: Status 404 returned error can't find the container with id 5e9c9cbaa5f048deb6cd641b68b15e9585b27cf0df0654e28450ce6a42c152c0 Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.941807 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.949393 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.951037 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-g9cz6" Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.951956 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.951993 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.952989 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.970207 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.015291 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77dwk\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-kube-api-access-77dwk\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 
07:11:17.015364 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/dd944fb6-1517-4f5b-b579-79d8f1f3da19-lock\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.015393 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.015426 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/dd944fb6-1517-4f5b-b579-79d8f1f3da19-cache\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.015452 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.015514 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd944fb6-1517-4f5b-b579-79d8f1f3da19-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.117411 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77dwk\" (UniqueName: 
\"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-kube-api-access-77dwk\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.117470 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/dd944fb6-1517-4f5b-b579-79d8f1f3da19-lock\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.117494 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.117521 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/dd944fb6-1517-4f5b-b579-79d8f1f3da19-cache\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.117542 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.117570 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd944fb6-1517-4f5b-b579-79d8f1f3da19-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 
07:11:17.118142 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/swift-storage-0" Mar 20 07:11:17 crc kubenswrapper[5136]: E0320 07:11:17.118150 5136 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 20 07:11:17 crc kubenswrapper[5136]: E0320 07:11:17.118202 5136 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 20 07:11:17 crc kubenswrapper[5136]: E0320 07:11:17.118289 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift podName:dd944fb6-1517-4f5b-b579-79d8f1f3da19 nodeName:}" failed. No retries permitted until 2026-03-20 07:11:17.618261307 +0000 UTC m=+1309.877572488 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift") pod "swift-storage-0" (UID: "dd944fb6-1517-4f5b-b579-79d8f1f3da19") : configmap "swift-ring-files" not found Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.118338 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/dd944fb6-1517-4f5b-b579-79d8f1f3da19-lock\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.118350 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/dd944fb6-1517-4f5b-b579-79d8f1f3da19-cache\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.131168 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd944fb6-1517-4f5b-b579-79d8f1f3da19-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.150193 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.154407 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77dwk\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-kube-api-access-77dwk\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:17 
crc kubenswrapper[5136]: I0320 07:11:17.428516 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-v7xvp"] Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.430109 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.433348 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.433349 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.433378 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.435215 5136 generic.go:334] "Generic (PLEG): container finished" podID="d103abed-83b7-44e9-bc7f-786434426647" containerID="533f92371dee2235f15d0d84ab9f13da275c7e919c6618b46cdf3ab8345571a9" exitCode=0 Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.436344 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" event={"ID":"d103abed-83b7-44e9-bc7f-786434426647","Type":"ContainerDied","Data":"533f92371dee2235f15d0d84ab9f13da275c7e919c6618b46cdf3ab8345571a9"} Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.436375 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" event={"ID":"d103abed-83b7-44e9-bc7f-786434426647","Type":"ContainerStarted","Data":"5e9c9cbaa5f048deb6cd641b68b15e9585b27cf0df0654e28450ce6a42c152c0"} Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.455506 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-v7xvp"] Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.543524 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-dispersionconf\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.543598 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-combined-ca-bundle\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.543665 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-etc-swift\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.543703 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-ring-data-devices\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.543741 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtknk\" (UniqueName: \"kubernetes.io/projected/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-kube-api-access-xtknk\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 
07:11:17.543777 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-swiftconf\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.543801 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-scripts\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.645707 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-scripts\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.645823 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-dispersionconf\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.645887 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-combined-ca-bundle\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.645941 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.645989 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-etc-swift\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.646040 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-ring-data-devices\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.646095 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtknk\" (UniqueName: \"kubernetes.io/projected/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-kube-api-access-xtknk\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.646146 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-swiftconf\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:17 crc kubenswrapper[5136]: E0320 07:11:17.646180 5136 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 20 07:11:17 crc 
kubenswrapper[5136]: E0320 07:11:17.646205 5136 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 20 07:11:17 crc kubenswrapper[5136]: E0320 07:11:17.646260 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift podName:dd944fb6-1517-4f5b-b579-79d8f1f3da19 nodeName:}" failed. No retries permitted until 2026-03-20 07:11:18.646241288 +0000 UTC m=+1310.905552439 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift") pod "swift-storage-0" (UID: "dd944fb6-1517-4f5b-b579-79d8f1f3da19") : configmap "swift-ring-files" not found Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.646470 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-etc-swift\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.646532 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-scripts\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.648388 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-ring-data-devices\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.649544 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-swiftconf\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.650065 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-dispersionconf\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.650388 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-combined-ca-bundle\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.662552 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtknk\" (UniqueName: \"kubernetes.io/projected/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-kube-api-access-xtknk\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.795779 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.812345 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-kfc9f" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.951669 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57zvb\" (UniqueName: \"kubernetes.io/projected/b4e39c5d-af98-44d6-a06d-f31555db758b-kube-api-access-57zvb\") pod \"b4e39c5d-af98-44d6-a06d-f31555db758b\" (UID: \"b4e39c5d-af98-44d6-a06d-f31555db758b\") " Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.952102 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4e39c5d-af98-44d6-a06d-f31555db758b-operator-scripts\") pod \"b4e39c5d-af98-44d6-a06d-f31555db758b\" (UID: \"b4e39c5d-af98-44d6-a06d-f31555db758b\") " Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.954487 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4e39c5d-af98-44d6-a06d-f31555db758b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b4e39c5d-af98-44d6-a06d-f31555db758b" (UID: "b4e39c5d-af98-44d6-a06d-f31555db758b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.991120 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4e39c5d-af98-44d6-a06d-f31555db758b-kube-api-access-57zvb" (OuterVolumeSpecName: "kube-api-access-57zvb") pod "b4e39c5d-af98-44d6-a06d-f31555db758b" (UID: "b4e39c5d-af98-44d6-a06d-f31555db758b"). InnerVolumeSpecName "kube-api-access-57zvb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.054511 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57zvb\" (UniqueName: \"kubernetes.io/projected/b4e39c5d-af98-44d6-a06d-f31555db758b-kube-api-access-57zvb\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.054543 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4e39c5d-af98-44d6-a06d-f31555db758b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.092683 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a0f6-account-create-update-c9hl7" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.099505 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-bk75j" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.104753 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e762-account-create-update-5vpcp" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.155614 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkkrz\" (UniqueName: \"kubernetes.io/projected/81055905-a498-49a7-917a-2032a292710e-kube-api-access-fkkrz\") pod \"81055905-a498-49a7-917a-2032a292710e\" (UID: \"81055905-a498-49a7-917a-2032a292710e\") " Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.155699 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0954a67c-5522-4338-b9e6-fc1b35b48cdb-operator-scripts\") pod \"0954a67c-5522-4338-b9e6-fc1b35b48cdb\" (UID: \"0954a67c-5522-4338-b9e6-fc1b35b48cdb\") " Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.155763 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81055905-a498-49a7-917a-2032a292710e-operator-scripts\") pod \"81055905-a498-49a7-917a-2032a292710e\" (UID: \"81055905-a498-49a7-917a-2032a292710e\") " Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.155812 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a15871b-0fd2-4db9-a42a-8e822efa35fb-operator-scripts\") pod \"4a15871b-0fd2-4db9-a42a-8e822efa35fb\" (UID: \"4a15871b-0fd2-4db9-a42a-8e822efa35fb\") " Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.155922 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4ghp\" (UniqueName: \"kubernetes.io/projected/0954a67c-5522-4338-b9e6-fc1b35b48cdb-kube-api-access-z4ghp\") pod \"0954a67c-5522-4338-b9e6-fc1b35b48cdb\" (UID: \"0954a67c-5522-4338-b9e6-fc1b35b48cdb\") " Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.155964 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-r5jxw\" (UniqueName: \"kubernetes.io/projected/4a15871b-0fd2-4db9-a42a-8e822efa35fb-kube-api-access-r5jxw\") pod \"4a15871b-0fd2-4db9-a42a-8e822efa35fb\" (UID: \"4a15871b-0fd2-4db9-a42a-8e822efa35fb\") " Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.156256 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0954a67c-5522-4338-b9e6-fc1b35b48cdb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0954a67c-5522-4338-b9e6-fc1b35b48cdb" (UID: "0954a67c-5522-4338-b9e6-fc1b35b48cdb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.156270 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81055905-a498-49a7-917a-2032a292710e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81055905-a498-49a7-917a-2032a292710e" (UID: "81055905-a498-49a7-917a-2032a292710e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.156358 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a15871b-0fd2-4db9-a42a-8e822efa35fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a15871b-0fd2-4db9-a42a-8e822efa35fb" (UID: "4a15871b-0fd2-4db9-a42a-8e822efa35fb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.156500 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0954a67c-5522-4338-b9e6-fc1b35b48cdb-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.156517 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81055905-a498-49a7-917a-2032a292710e-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.156527 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a15871b-0fd2-4db9-a42a-8e822efa35fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.159620 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81055905-a498-49a7-917a-2032a292710e-kube-api-access-fkkrz" (OuterVolumeSpecName: "kube-api-access-fkkrz") pod "81055905-a498-49a7-917a-2032a292710e" (UID: "81055905-a498-49a7-917a-2032a292710e"). InnerVolumeSpecName "kube-api-access-fkkrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.176114 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a15871b-0fd2-4db9-a42a-8e822efa35fb-kube-api-access-r5jxw" (OuterVolumeSpecName: "kube-api-access-r5jxw") pod "4a15871b-0fd2-4db9-a42a-8e822efa35fb" (UID: "4a15871b-0fd2-4db9-a42a-8e822efa35fb"). InnerVolumeSpecName "kube-api-access-r5jxw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.176303 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0954a67c-5522-4338-b9e6-fc1b35b48cdb-kube-api-access-z4ghp" (OuterVolumeSpecName: "kube-api-access-z4ghp") pod "0954a67c-5522-4338-b9e6-fc1b35b48cdb" (UID: "0954a67c-5522-4338-b9e6-fc1b35b48cdb"). InnerVolumeSpecName "kube-api-access-z4ghp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.258267 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkkrz\" (UniqueName: \"kubernetes.io/projected/81055905-a498-49a7-917a-2032a292710e-kube-api-access-fkkrz\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.258322 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4ghp\" (UniqueName: \"kubernetes.io/projected/0954a67c-5522-4338-b9e6-fc1b35b48cdb-kube-api-access-z4ghp\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.258332 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5jxw\" (UniqueName: \"kubernetes.io/projected/4a15871b-0fd2-4db9-a42a-8e822efa35fb-kube-api-access-r5jxw\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.305079 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-v7xvp"] Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.444117 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e762-account-create-update-5vpcp" event={"ID":"0954a67c-5522-4338-b9e6-fc1b35b48cdb","Type":"ContainerDied","Data":"48512ac7078a277888ee056a62b4a6b84197b81b5d0d2cb3c783d1abfeaec964"} Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.444157 5136 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="48512ac7078a277888ee056a62b4a6b84197b81b5d0d2cb3c783d1abfeaec964" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.444206 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e762-account-create-update-5vpcp" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.446373 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bk75j" event={"ID":"4a15871b-0fd2-4db9-a42a-8e822efa35fb","Type":"ContainerDied","Data":"cec4a6950888591d2e5df3e8e1f2226263c066e2869ac680a5e656395d2183a3"} Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.446793 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cec4a6950888591d2e5df3e8e1f2226263c066e2869ac680a5e656395d2183a3" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.446414 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-bk75j" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.448109 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-a0f6-account-create-update-c9hl7" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.448451 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a0f6-account-create-update-c9hl7" event={"ID":"81055905-a498-49a7-917a-2032a292710e","Type":"ContainerDied","Data":"b0ee3bca0a6e172f1f5ecea08b36c9a75b540d345edc1a90ca2b72e664d06260"} Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.448626 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0ee3bca0a6e172f1f5ecea08b36c9a75b540d345edc1a90ca2b72e664d06260" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.452926 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" event={"ID":"d103abed-83b7-44e9-bc7f-786434426647","Type":"ContainerStarted","Data":"604b652a792660f1238e2607b4242155d6fa3281d34ce55590b668cd26222f1b"} Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.454230 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.455912 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kfc9f" event={"ID":"b4e39c5d-af98-44d6-a06d-f31555db758b","Type":"ContainerDied","Data":"91a625bb9304a7f8b51faab35b5fec61175c95ec85e746ddaca0e50d60fc3071"} Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.455950 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91a625bb9304a7f8b51faab35b5fec61175c95ec85e746ddaca0e50d60fc3071" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.456040 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-kfc9f" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.457622 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v7xvp" event={"ID":"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60","Type":"ContainerStarted","Data":"158b8904c559e5367b1f3b8f9dd4746bcb9987780df1531c47db70ae775f7d6f"} Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.479992 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" podStartSLOduration=3.479967915 podStartE2EDuration="3.479967915s" podCreationTimestamp="2026-03-20 07:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:11:18.473789207 +0000 UTC m=+1310.733100368" watchObservedRunningTime="2026-03-20 07:11:18.479967915 +0000 UTC m=+1310.739279066" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.664780 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:18 crc kubenswrapper[5136]: E0320 07:11:18.665012 5136 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 20 07:11:18 crc kubenswrapper[5136]: E0320 07:11:18.665045 5136 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 20 07:11:18 crc kubenswrapper[5136]: E0320 07:11:18.665112 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift podName:dd944fb6-1517-4f5b-b579-79d8f1f3da19 nodeName:}" failed. 
No retries permitted until 2026-03-20 07:11:20.665089152 +0000 UTC m=+1312.924400313 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift") pod "swift-storage-0" (UID: "dd944fb6-1517-4f5b-b579-79d8f1f3da19") : configmap "swift-ring-files" not found Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.805095 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-c6tbf"] Mar 20 07:11:18 crc kubenswrapper[5136]: E0320 07:11:18.805501 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a15871b-0fd2-4db9-a42a-8e822efa35fb" containerName="mariadb-database-create" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.805520 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a15871b-0fd2-4db9-a42a-8e822efa35fb" containerName="mariadb-database-create" Mar 20 07:11:18 crc kubenswrapper[5136]: E0320 07:11:18.805535 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0954a67c-5522-4338-b9e6-fc1b35b48cdb" containerName="mariadb-account-create-update" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.805544 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0954a67c-5522-4338-b9e6-fc1b35b48cdb" containerName="mariadb-account-create-update" Mar 20 07:11:18 crc kubenswrapper[5136]: E0320 07:11:18.805555 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4e39c5d-af98-44d6-a06d-f31555db758b" containerName="mariadb-database-create" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.805562 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4e39c5d-af98-44d6-a06d-f31555db758b" containerName="mariadb-database-create" Mar 20 07:11:18 crc kubenswrapper[5136]: E0320 07:11:18.805577 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81055905-a498-49a7-917a-2032a292710e" containerName="mariadb-account-create-update" Mar 20 
07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.805590 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="81055905-a498-49a7-917a-2032a292710e" containerName="mariadb-account-create-update" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.805762 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4e39c5d-af98-44d6-a06d-f31555db758b" containerName="mariadb-database-create" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.805785 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="81055905-a498-49a7-917a-2032a292710e" containerName="mariadb-account-create-update" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.805805 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a15871b-0fd2-4db9-a42a-8e822efa35fb" containerName="mariadb-database-create" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.805843 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0954a67c-5522-4338-b9e6-fc1b35b48cdb" containerName="mariadb-account-create-update" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.806378 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-c6tbf" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.818553 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-c6tbf"] Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.868484 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzrhd\" (UniqueName: \"kubernetes.io/projected/744eb619-4231-474c-a8b2-a37ed7432086-kube-api-access-pzrhd\") pod \"glance-db-create-c6tbf\" (UID: \"744eb619-4231-474c-a8b2-a37ed7432086\") " pod="openstack/glance-db-create-c6tbf" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.868562 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/744eb619-4231-474c-a8b2-a37ed7432086-operator-scripts\") pod \"glance-db-create-c6tbf\" (UID: \"744eb619-4231-474c-a8b2-a37ed7432086\") " pod="openstack/glance-db-create-c6tbf" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.917613 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a033-account-create-update-ww8m7"] Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.918616 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a033-account-create-update-ww8m7" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.925235 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.936267 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a033-account-create-update-ww8m7"] Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.970115 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzrhd\" (UniqueName: \"kubernetes.io/projected/744eb619-4231-474c-a8b2-a37ed7432086-kube-api-access-pzrhd\") pod \"glance-db-create-c6tbf\" (UID: \"744eb619-4231-474c-a8b2-a37ed7432086\") " pod="openstack/glance-db-create-c6tbf" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.970192 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/744eb619-4231-474c-a8b2-a37ed7432086-operator-scripts\") pod \"glance-db-create-c6tbf\" (UID: \"744eb619-4231-474c-a8b2-a37ed7432086\") " pod="openstack/glance-db-create-c6tbf" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.970245 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52702304-46c3-4028-af56-60e936dea0a9-operator-scripts\") pod \"glance-a033-account-create-update-ww8m7\" (UID: \"52702304-46c3-4028-af56-60e936dea0a9\") " pod="openstack/glance-a033-account-create-update-ww8m7" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.970288 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gj8w\" (UniqueName: \"kubernetes.io/projected/52702304-46c3-4028-af56-60e936dea0a9-kube-api-access-4gj8w\") pod \"glance-a033-account-create-update-ww8m7\" (UID: 
\"52702304-46c3-4028-af56-60e936dea0a9\") " pod="openstack/glance-a033-account-create-update-ww8m7" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.971219 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/744eb619-4231-474c-a8b2-a37ed7432086-operator-scripts\") pod \"glance-db-create-c6tbf\" (UID: \"744eb619-4231-474c-a8b2-a37ed7432086\") " pod="openstack/glance-db-create-c6tbf" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.988365 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzrhd\" (UniqueName: \"kubernetes.io/projected/744eb619-4231-474c-a8b2-a37ed7432086-kube-api-access-pzrhd\") pod \"glance-db-create-c6tbf\" (UID: \"744eb619-4231-474c-a8b2-a37ed7432086\") " pod="openstack/glance-db-create-c6tbf" Mar 20 07:11:19 crc kubenswrapper[5136]: I0320 07:11:19.071981 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52702304-46c3-4028-af56-60e936dea0a9-operator-scripts\") pod \"glance-a033-account-create-update-ww8m7\" (UID: \"52702304-46c3-4028-af56-60e936dea0a9\") " pod="openstack/glance-a033-account-create-update-ww8m7" Mar 20 07:11:19 crc kubenswrapper[5136]: I0320 07:11:19.072045 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gj8w\" (UniqueName: \"kubernetes.io/projected/52702304-46c3-4028-af56-60e936dea0a9-kube-api-access-4gj8w\") pod \"glance-a033-account-create-update-ww8m7\" (UID: \"52702304-46c3-4028-af56-60e936dea0a9\") " pod="openstack/glance-a033-account-create-update-ww8m7" Mar 20 07:11:19 crc kubenswrapper[5136]: I0320 07:11:19.072591 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52702304-46c3-4028-af56-60e936dea0a9-operator-scripts\") pod \"glance-a033-account-create-update-ww8m7\" 
(UID: \"52702304-46c3-4028-af56-60e936dea0a9\") " pod="openstack/glance-a033-account-create-update-ww8m7" Mar 20 07:11:19 crc kubenswrapper[5136]: I0320 07:11:19.097535 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gj8w\" (UniqueName: \"kubernetes.io/projected/52702304-46c3-4028-af56-60e936dea0a9-kube-api-access-4gj8w\") pod \"glance-a033-account-create-update-ww8m7\" (UID: \"52702304-46c3-4028-af56-60e936dea0a9\") " pod="openstack/glance-a033-account-create-update-ww8m7" Mar 20 07:11:19 crc kubenswrapper[5136]: I0320 07:11:19.131004 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c6tbf" Mar 20 07:11:19 crc kubenswrapper[5136]: I0320 07:11:19.232278 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a033-account-create-update-ww8m7" Mar 20 07:11:19 crc kubenswrapper[5136]: I0320 07:11:19.597857 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-c6tbf"] Mar 20 07:11:19 crc kubenswrapper[5136]: I0320 07:11:19.729569 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a033-account-create-update-ww8m7"] Mar 20 07:11:19 crc kubenswrapper[5136]: W0320 07:11:19.732455 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52702304_46c3_4028_af56_60e936dea0a9.slice/crio-81e974406f4fb5acfe8a8d5dd616a66deefd845c9e0571fb5eba54c5e651e02f WatchSource:0}: Error finding container 81e974406f4fb5acfe8a8d5dd616a66deefd845c9e0571fb5eba54c5e651e02f: Status 404 returned error can't find the container with id 81e974406f4fb5acfe8a8d5dd616a66deefd845c9e0571fb5eba54c5e651e02f Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.476145 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-vrh5d"] Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.477466 
5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vrh5d" Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.480997 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.491908 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vrh5d"] Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.499713 5136 generic.go:334] "Generic (PLEG): container finished" podID="744eb619-4231-474c-a8b2-a37ed7432086" containerID="61abc8440208cd19caa61d866cd42cc249d0d527cfebb488be887ccce4bdea72" exitCode=0 Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.499771 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c6tbf" event={"ID":"744eb619-4231-474c-a8b2-a37ed7432086","Type":"ContainerDied","Data":"61abc8440208cd19caa61d866cd42cc249d0d527cfebb488be887ccce4bdea72"} Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.499804 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c6tbf" event={"ID":"744eb619-4231-474c-a8b2-a37ed7432086","Type":"ContainerStarted","Data":"f5b4c0167edf314cdc0984000087c4b3f5860a0e137fe9285c5122f8416a493d"} Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.504149 5136 generic.go:334] "Generic (PLEG): container finished" podID="52702304-46c3-4028-af56-60e936dea0a9" containerID="dc6f042f4a1f3f8ba50fa65cef930cd8040f1e880b0843b1b3beecf9065681fb" exitCode=0 Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.504680 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a033-account-create-update-ww8m7" event={"ID":"52702304-46c3-4028-af56-60e936dea0a9","Type":"ContainerDied","Data":"dc6f042f4a1f3f8ba50fa65cef930cd8040f1e880b0843b1b3beecf9065681fb"} Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.504700 5136 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a033-account-create-update-ww8m7" event={"ID":"52702304-46c3-4028-af56-60e936dea0a9","Type":"ContainerStarted","Data":"81e974406f4fb5acfe8a8d5dd616a66deefd845c9e0571fb5eba54c5e651e02f"} Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.599949 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6c9bf89-c898-469c-8a83-e1b945b234a6-operator-scripts\") pod \"root-account-create-update-vrh5d\" (UID: \"c6c9bf89-c898-469c-8a83-e1b945b234a6\") " pod="openstack/root-account-create-update-vrh5d" Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.600002 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5crp\" (UniqueName: \"kubernetes.io/projected/c6c9bf89-c898-469c-8a83-e1b945b234a6-kube-api-access-s5crp\") pod \"root-account-create-update-vrh5d\" (UID: \"c6c9bf89-c898-469c-8a83-e1b945b234a6\") " pod="openstack/root-account-create-update-vrh5d" Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.702316 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.702409 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6c9bf89-c898-469c-8a83-e1b945b234a6-operator-scripts\") pod \"root-account-create-update-vrh5d\" (UID: \"c6c9bf89-c898-469c-8a83-e1b945b234a6\") " pod="openstack/root-account-create-update-vrh5d" Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.702481 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-s5crp\" (UniqueName: \"kubernetes.io/projected/c6c9bf89-c898-469c-8a83-e1b945b234a6-kube-api-access-s5crp\") pod \"root-account-create-update-vrh5d\" (UID: \"c6c9bf89-c898-469c-8a83-e1b945b234a6\") " pod="openstack/root-account-create-update-vrh5d" Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.703289 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6c9bf89-c898-469c-8a83-e1b945b234a6-operator-scripts\") pod \"root-account-create-update-vrh5d\" (UID: \"c6c9bf89-c898-469c-8a83-e1b945b234a6\") " pod="openstack/root-account-create-update-vrh5d" Mar 20 07:11:20 crc kubenswrapper[5136]: E0320 07:11:20.703419 5136 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 20 07:11:20 crc kubenswrapper[5136]: E0320 07:11:20.703443 5136 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 20 07:11:20 crc kubenswrapper[5136]: E0320 07:11:20.703502 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift podName:dd944fb6-1517-4f5b-b579-79d8f1f3da19 nodeName:}" failed. No retries permitted until 2026-03-20 07:11:24.703486184 +0000 UTC m=+1316.962797335 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift") pod "swift-storage-0" (UID: "dd944fb6-1517-4f5b-b579-79d8f1f3da19") : configmap "swift-ring-files" not found Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.722877 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5crp\" (UniqueName: \"kubernetes.io/projected/c6c9bf89-c898-469c-8a83-e1b945b234a6-kube-api-access-s5crp\") pod \"root-account-create-update-vrh5d\" (UID: \"c6c9bf89-c898-469c-8a83-e1b945b234a6\") " pod="openstack/root-account-create-update-vrh5d" Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.802607 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vrh5d" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.478231 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a033-account-create-update-ww8m7" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.485015 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c6tbf" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.524669 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c6tbf" event={"ID":"744eb619-4231-474c-a8b2-a37ed7432086","Type":"ContainerDied","Data":"f5b4c0167edf314cdc0984000087c4b3f5860a0e137fe9285c5122f8416a493d"} Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.524730 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5b4c0167edf314cdc0984000087c4b3f5860a0e137fe9285c5122f8416a493d" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.524835 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-c6tbf" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.528767 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a033-account-create-update-ww8m7" event={"ID":"52702304-46c3-4028-af56-60e936dea0a9","Type":"ContainerDied","Data":"81e974406f4fb5acfe8a8d5dd616a66deefd845c9e0571fb5eba54c5e651e02f"} Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.528797 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81e974406f4fb5acfe8a8d5dd616a66deefd845c9e0571fb5eba54c5e651e02f" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.528948 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a033-account-create-update-ww8m7" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.533545 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/744eb619-4231-474c-a8b2-a37ed7432086-operator-scripts\") pod \"744eb619-4231-474c-a8b2-a37ed7432086\" (UID: \"744eb619-4231-474c-a8b2-a37ed7432086\") " Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.533672 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gj8w\" (UniqueName: \"kubernetes.io/projected/52702304-46c3-4028-af56-60e936dea0a9-kube-api-access-4gj8w\") pod \"52702304-46c3-4028-af56-60e936dea0a9\" (UID: \"52702304-46c3-4028-af56-60e936dea0a9\") " Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.533725 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzrhd\" (UniqueName: \"kubernetes.io/projected/744eb619-4231-474c-a8b2-a37ed7432086-kube-api-access-pzrhd\") pod \"744eb619-4231-474c-a8b2-a37ed7432086\" (UID: \"744eb619-4231-474c-a8b2-a37ed7432086\") " Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.533962 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52702304-46c3-4028-af56-60e936dea0a9-operator-scripts\") pod \"52702304-46c3-4028-af56-60e936dea0a9\" (UID: \"52702304-46c3-4028-af56-60e936dea0a9\") " Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.534741 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52702304-46c3-4028-af56-60e936dea0a9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "52702304-46c3-4028-af56-60e936dea0a9" (UID: "52702304-46c3-4028-af56-60e936dea0a9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.534855 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/744eb619-4231-474c-a8b2-a37ed7432086-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "744eb619-4231-474c-a8b2-a37ed7432086" (UID: "744eb619-4231-474c-a8b2-a37ed7432086"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.539487 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/744eb619-4231-474c-a8b2-a37ed7432086-kube-api-access-pzrhd" (OuterVolumeSpecName: "kube-api-access-pzrhd") pod "744eb619-4231-474c-a8b2-a37ed7432086" (UID: "744eb619-4231-474c-a8b2-a37ed7432086"). InnerVolumeSpecName "kube-api-access-pzrhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.555842 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52702304-46c3-4028-af56-60e936dea0a9-kube-api-access-4gj8w" (OuterVolumeSpecName: "kube-api-access-4gj8w") pod "52702304-46c3-4028-af56-60e936dea0a9" (UID: "52702304-46c3-4028-af56-60e936dea0a9"). 
InnerVolumeSpecName "kube-api-access-4gj8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.635933 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52702304-46c3-4028-af56-60e936dea0a9-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.635965 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/744eb619-4231-474c-a8b2-a37ed7432086-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.635975 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gj8w\" (UniqueName: \"kubernetes.io/projected/52702304-46c3-4028-af56-60e936dea0a9-kube-api-access-4gj8w\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.635986 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzrhd\" (UniqueName: \"kubernetes.io/projected/744eb619-4231-474c-a8b2-a37ed7432086-kube-api-access-pzrhd\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.719983 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vrh5d"] Mar 20 07:11:22 crc kubenswrapper[5136]: W0320 07:11:22.724860 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6c9bf89_c898_469c_8a83_e1b945b234a6.slice/crio-8d39b8c14523e074cbaf42b425bd1908d4407272212b321cef3de376df1682aa WatchSource:0}: Error finding container 8d39b8c14523e074cbaf42b425bd1908d4407272212b321cef3de376df1682aa: Status 404 returned error can't find the container with id 8d39b8c14523e074cbaf42b425bd1908d4407272212b321cef3de376df1682aa Mar 20 07:11:23 crc kubenswrapper[5136]: I0320 07:11:23.541544 5136 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v7xvp" event={"ID":"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60","Type":"ContainerStarted","Data":"df74ea59bf43247509097578b0b44714fcb954b2204d1d34decc8550e92f3f6e"} Mar 20 07:11:23 crc kubenswrapper[5136]: I0320 07:11:23.544602 5136 generic.go:334] "Generic (PLEG): container finished" podID="c6c9bf89-c898-469c-8a83-e1b945b234a6" containerID="2e4cee4a85209760afcb1fc4e1920e495e69a4a4c4fbdedacaa3ff6869eb619f" exitCode=0 Mar 20 07:11:23 crc kubenswrapper[5136]: I0320 07:11:23.544640 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vrh5d" event={"ID":"c6c9bf89-c898-469c-8a83-e1b945b234a6","Type":"ContainerDied","Data":"2e4cee4a85209760afcb1fc4e1920e495e69a4a4c4fbdedacaa3ff6869eb619f"} Mar 20 07:11:23 crc kubenswrapper[5136]: I0320 07:11:23.544661 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vrh5d" event={"ID":"c6c9bf89-c898-469c-8a83-e1b945b234a6","Type":"ContainerStarted","Data":"8d39b8c14523e074cbaf42b425bd1908d4407272212b321cef3de376df1682aa"} Mar 20 07:11:23 crc kubenswrapper[5136]: I0320 07:11:23.561935 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-v7xvp" podStartSLOduration=2.5444660260000003 podStartE2EDuration="6.561919168s" podCreationTimestamp="2026-03-20 07:11:17 +0000 UTC" firstStartedPulling="2026-03-20 07:11:18.317548837 +0000 UTC m=+1310.576859988" lastFinishedPulling="2026-03-20 07:11:22.335001979 +0000 UTC m=+1314.594313130" observedRunningTime="2026-03-20 07:11:23.555234985 +0000 UTC m=+1315.814546156" watchObservedRunningTime="2026-03-20 07:11:23.561919168 +0000 UTC m=+1315.821230309" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.059316 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-ldzkm"] Mar 20 07:11:24 crc kubenswrapper[5136]: E0320 07:11:24.059701 5136 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744eb619-4231-474c-a8b2-a37ed7432086" containerName="mariadb-database-create" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.059723 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="744eb619-4231-474c-a8b2-a37ed7432086" containerName="mariadb-database-create" Mar 20 07:11:24 crc kubenswrapper[5136]: E0320 07:11:24.059744 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52702304-46c3-4028-af56-60e936dea0a9" containerName="mariadb-account-create-update" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.059754 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="52702304-46c3-4028-af56-60e936dea0a9" containerName="mariadb-account-create-update" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.059970 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="744eb619-4231-474c-a8b2-a37ed7432086" containerName="mariadb-database-create" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.059999 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="52702304-46c3-4028-af56-60e936dea0a9" containerName="mariadb-account-create-update" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.060618 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.070170 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4q9lc" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.070501 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.071760 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-ldzkm"] Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.160417 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-config-data\") pod \"glance-db-sync-ldzkm\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.160464 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-combined-ca-bundle\") pod \"glance-db-sync-ldzkm\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.160561 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4cjp\" (UniqueName: \"kubernetes.io/projected/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-kube-api-access-n4cjp\") pod \"glance-db-sync-ldzkm\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.160723 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-db-sync-config-data\") pod \"glance-db-sync-ldzkm\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.262737 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-db-sync-config-data\") pod \"glance-db-sync-ldzkm\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.262809 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-config-data\") pod \"glance-db-sync-ldzkm\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.262855 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-combined-ca-bundle\") pod \"glance-db-sync-ldzkm\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.262926 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4cjp\" (UniqueName: \"kubernetes.io/projected/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-kube-api-access-n4cjp\") pod \"glance-db-sync-ldzkm\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.269117 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-db-sync-config-data\") pod \"glance-db-sync-ldzkm\" (UID: 
\"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.271288 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-combined-ca-bundle\") pod \"glance-db-sync-ldzkm\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.277145 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-config-data\") pod \"glance-db-sync-ldzkm\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.291944 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4cjp\" (UniqueName: \"kubernetes.io/projected/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-kube-api-access-n4cjp\") pod \"glance-db-sync-ldzkm\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.381180 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: E0320 07:11:24.508615 5136 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod261514f8_7734_423d_b15a_e83fdc2a85fd.slice/crio-3fb57e476621bbb7ed332df0284e4781d9823f68927c324a91e0d733e1eccb29.scope\": RecentStats: unable to find data in memory cache]" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.556282 5136 generic.go:334] "Generic (PLEG): container finished" podID="261514f8-7734-423d-b15a-e83fdc2a85fd" containerID="3fb57e476621bbb7ed332df0284e4781d9823f68927c324a91e0d733e1eccb29" exitCode=0 Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.557197 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"261514f8-7734-423d-b15a-e83fdc2a85fd","Type":"ContainerDied","Data":"3fb57e476621bbb7ed332df0284e4781d9823f68927c324a91e0d733e1eccb29"} Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.775446 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:24 crc kubenswrapper[5136]: E0320 07:11:24.775995 5136 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 20 07:11:24 crc kubenswrapper[5136]: E0320 07:11:24.776017 5136 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 20 07:11:24 crc kubenswrapper[5136]: E0320 07:11:24.776133 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift 
podName:dd944fb6-1517-4f5b-b579-79d8f1f3da19 nodeName:}" failed. No retries permitted until 2026-03-20 07:11:32.77611354 +0000 UTC m=+1325.035424691 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift") pod "swift-storage-0" (UID: "dd944fb6-1517-4f5b-b579-79d8f1f3da19") : configmap "swift-ring-files" not found Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.975559 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-ldzkm"] Mar 20 07:11:24 crc kubenswrapper[5136]: W0320 07:11:24.979588 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8c6efdb_3b8c_4123_bfb6_a67cd416fb18.slice/crio-aed70dbef306356adc19fcd32835aa191f676d0a2ff614fd22073d9b45e66eaf WatchSource:0}: Error finding container aed70dbef306356adc19fcd32835aa191f676d0a2ff614fd22073d9b45e66eaf: Status 404 returned error can't find the container with id aed70dbef306356adc19fcd32835aa191f676d0a2ff614fd22073d9b45e66eaf Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.980968 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vrh5d" Mar 20 07:11:25 crc kubenswrapper[5136]: I0320 07:11:25.082406 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5crp\" (UniqueName: \"kubernetes.io/projected/c6c9bf89-c898-469c-8a83-e1b945b234a6-kube-api-access-s5crp\") pod \"c6c9bf89-c898-469c-8a83-e1b945b234a6\" (UID: \"c6c9bf89-c898-469c-8a83-e1b945b234a6\") " Mar 20 07:11:25 crc kubenswrapper[5136]: I0320 07:11:25.082774 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6c9bf89-c898-469c-8a83-e1b945b234a6-operator-scripts\") pod \"c6c9bf89-c898-469c-8a83-e1b945b234a6\" (UID: \"c6c9bf89-c898-469c-8a83-e1b945b234a6\") " Mar 20 07:11:25 crc kubenswrapper[5136]: I0320 07:11:25.083214 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6c9bf89-c898-469c-8a83-e1b945b234a6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c6c9bf89-c898-469c-8a83-e1b945b234a6" (UID: "c6c9bf89-c898-469c-8a83-e1b945b234a6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:25 crc kubenswrapper[5136]: I0320 07:11:25.087863 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6c9bf89-c898-469c-8a83-e1b945b234a6-kube-api-access-s5crp" (OuterVolumeSpecName: "kube-api-access-s5crp") pod "c6c9bf89-c898-469c-8a83-e1b945b234a6" (UID: "c6c9bf89-c898-469c-8a83-e1b945b234a6"). InnerVolumeSpecName "kube-api-access-s5crp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:25 crc kubenswrapper[5136]: I0320 07:11:25.184175 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5crp\" (UniqueName: \"kubernetes.io/projected/c6c9bf89-c898-469c-8a83-e1b945b234a6-kube-api-access-s5crp\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:25 crc kubenswrapper[5136]: I0320 07:11:25.184210 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6c9bf89-c898-469c-8a83-e1b945b234a6-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:25 crc kubenswrapper[5136]: I0320 07:11:25.568787 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"261514f8-7734-423d-b15a-e83fdc2a85fd","Type":"ContainerStarted","Data":"b5366b3f1cab37ea1c4eff85c13a2c8a37fad4de97077b3f34ec7d7a1d871155"} Mar 20 07:11:25 crc kubenswrapper[5136]: I0320 07:11:25.569020 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 20 07:11:25 crc kubenswrapper[5136]: I0320 07:11:25.569848 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ldzkm" event={"ID":"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18","Type":"ContainerStarted","Data":"aed70dbef306356adc19fcd32835aa191f676d0a2ff614fd22073d9b45e66eaf"} Mar 20 07:11:25 crc kubenswrapper[5136]: I0320 07:11:25.571299 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vrh5d" event={"ID":"c6c9bf89-c898-469c-8a83-e1b945b234a6","Type":"ContainerDied","Data":"8d39b8c14523e074cbaf42b425bd1908d4407272212b321cef3de376df1682aa"} Mar 20 07:11:25 crc kubenswrapper[5136]: I0320 07:11:25.571316 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vrh5d" Mar 20 07:11:25 crc kubenswrapper[5136]: I0320 07:11:25.571327 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d39b8c14523e074cbaf42b425bd1908d4407272212b321cef3de376df1682aa" Mar 20 07:11:25 crc kubenswrapper[5136]: I0320 07:11:25.602071 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=40.636103932 podStartE2EDuration="57.602054562s" podCreationTimestamp="2026-03-20 07:10:28 +0000 UTC" firstStartedPulling="2026-03-20 07:10:30.222505112 +0000 UTC m=+1262.481816263" lastFinishedPulling="2026-03-20 07:10:47.188455742 +0000 UTC m=+1279.447766893" observedRunningTime="2026-03-20 07:11:25.595489503 +0000 UTC m=+1317.854800654" watchObservedRunningTime="2026-03-20 07:11:25.602054562 +0000 UTC m=+1317.861365713" Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.098739 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.165409 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f697c8bff-xcsxq"] Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.165961 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" podUID="de68a814-1b9a-4aad-9841-790f24b79e9e" containerName="dnsmasq-dns" containerID="cri-o://bd48417c8a8842903b86c0b0297625af601775c012346c6e4a42ced3c9d81a5c" gracePeriod=10 Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.583919 5136 generic.go:334] "Generic (PLEG): container finished" podID="de68a814-1b9a-4aad-9841-790f24b79e9e" containerID="bd48417c8a8842903b86c0b0297625af601775c012346c6e4a42ced3c9d81a5c" exitCode=0 Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.583978 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" event={"ID":"de68a814-1b9a-4aad-9841-790f24b79e9e","Type":"ContainerDied","Data":"bd48417c8a8842903b86c0b0297625af601775c012346c6e4a42ced3c9d81a5c"} Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.584003 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" event={"ID":"de68a814-1b9a-4aad-9841-790f24b79e9e","Type":"ContainerDied","Data":"2be92f8d39a9b4d63c74f7df0ae6d838690d70faf780735edb28289a75abf559"} Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.584016 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2be92f8d39a9b4d63c74f7df0ae6d838690d70faf780735edb28289a75abf559" Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.586365 5136 generic.go:334] "Generic (PLEG): container finished" podID="c355061d-c5fd-4655-aa7e-37b5a40a0400" containerID="746322d2db71676f9bd31243b4275dc45e3e2e6e4004a673722e12fe50a71b71" exitCode=0 Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.587062 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c355061d-c5fd-4655-aa7e-37b5a40a0400","Type":"ContainerDied","Data":"746322d2db71676f9bd31243b4275dc45e3e2e6e4004a673722e12fe50a71b71"} Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.741135 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.920267 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-config\") pod \"de68a814-1b9a-4aad-9841-790f24b79e9e\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.920345 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-ovsdbserver-sb\") pod \"de68a814-1b9a-4aad-9841-790f24b79e9e\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.920409 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-dns-svc\") pod \"de68a814-1b9a-4aad-9841-790f24b79e9e\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.920468 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-ovsdbserver-nb\") pod \"de68a814-1b9a-4aad-9841-790f24b79e9e\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.920566 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl9j4\" (UniqueName: \"kubernetes.io/projected/de68a814-1b9a-4aad-9841-790f24b79e9e-kube-api-access-gl9j4\") pod \"de68a814-1b9a-4aad-9841-790f24b79e9e\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.926477 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/de68a814-1b9a-4aad-9841-790f24b79e9e-kube-api-access-gl9j4" (OuterVolumeSpecName: "kube-api-access-gl9j4") pod "de68a814-1b9a-4aad-9841-790f24b79e9e" (UID: "de68a814-1b9a-4aad-9841-790f24b79e9e"). InnerVolumeSpecName "kube-api-access-gl9j4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.964494 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-config" (OuterVolumeSpecName: "config") pod "de68a814-1b9a-4aad-9841-790f24b79e9e" (UID: "de68a814-1b9a-4aad-9841-790f24b79e9e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.964961 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-vrh5d"] Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.968673 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "de68a814-1b9a-4aad-9841-790f24b79e9e" (UID: "de68a814-1b9a-4aad-9841-790f24b79e9e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.972179 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-vrh5d"] Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.972641 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "de68a814-1b9a-4aad-9841-790f24b79e9e" (UID: "de68a814-1b9a-4aad-9841-790f24b79e9e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.986404 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "de68a814-1b9a-4aad-9841-790f24b79e9e" (UID: "de68a814-1b9a-4aad-9841-790f24b79e9e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:27 crc kubenswrapper[5136]: I0320 07:11:27.022830 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:27 crc kubenswrapper[5136]: I0320 07:11:27.022935 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:27 crc kubenswrapper[5136]: I0320 07:11:27.022948 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:27 crc kubenswrapper[5136]: I0320 07:11:27.022963 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl9j4\" (UniqueName: \"kubernetes.io/projected/de68a814-1b9a-4aad-9841-790f24b79e9e-kube-api-access-gl9j4\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:27 crc kubenswrapper[5136]: I0320 07:11:27.022976 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:27 crc kubenswrapper[5136]: I0320 07:11:27.595032 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:11:27 crc kubenswrapper[5136]: I0320 07:11:27.599035 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c355061d-c5fd-4655-aa7e-37b5a40a0400","Type":"ContainerStarted","Data":"ff4b7aff265183005eb642c68e8667907af1fc5245ae33df31209cf856166357"} Mar 20 07:11:27 crc kubenswrapper[5136]: I0320 07:11:27.599315 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 20 07:11:27 crc kubenswrapper[5136]: I0320 07:11:27.631701 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=53.688036107 podStartE2EDuration="59.631686117s" podCreationTimestamp="2026-03-20 07:10:28 +0000 UTC" firstStartedPulling="2026-03-20 07:10:46.989919157 +0000 UTC m=+1279.249230328" lastFinishedPulling="2026-03-20 07:10:52.933569187 +0000 UTC m=+1285.192880338" observedRunningTime="2026-03-20 07:11:27.619348864 +0000 UTC m=+1319.878660025" watchObservedRunningTime="2026-03-20 07:11:27.631686117 +0000 UTC m=+1319.890997268" Mar 20 07:11:27 crc kubenswrapper[5136]: I0320 07:11:27.645052 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f697c8bff-xcsxq"] Mar 20 07:11:27 crc kubenswrapper[5136]: I0320 07:11:27.651411 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f697c8bff-xcsxq"] Mar 20 07:11:27 crc kubenswrapper[5136]: I0320 07:11:27.973781 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 20 07:11:28 crc kubenswrapper[5136]: I0320 07:11:28.408589 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6c9bf89-c898-469c-8a83-e1b945b234a6" path="/var/lib/kubelet/pods/c6c9bf89-c898-469c-8a83-e1b945b234a6/volumes" Mar 20 07:11:28 crc kubenswrapper[5136]: I0320 07:11:28.409327 5136 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de68a814-1b9a-4aad-9841-790f24b79e9e" path="/var/lib/kubelet/pods/de68a814-1b9a-4aad-9841-790f24b79e9e/volumes" Mar 20 07:11:29 crc kubenswrapper[5136]: I0320 07:11:29.613484 5136 generic.go:334] "Generic (PLEG): container finished" podID="abe527e6-fcfe-4955-a1c4-b2b63f1e3c60" containerID="df74ea59bf43247509097578b0b44714fcb954b2204d1d34decc8550e92f3f6e" exitCode=0 Mar 20 07:11:29 crc kubenswrapper[5136]: I0320 07:11:29.613561 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v7xvp" event={"ID":"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60","Type":"ContainerDied","Data":"df74ea59bf43247509097578b0b44714fcb954b2204d1d34decc8550e92f3f6e"} Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.495012 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-gpk5l"] Mar 20 07:11:30 crc kubenswrapper[5136]: E0320 07:11:30.495317 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c9bf89-c898-469c-8a83-e1b945b234a6" containerName="mariadb-account-create-update" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.495330 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c9bf89-c898-469c-8a83-e1b945b234a6" containerName="mariadb-account-create-update" Mar 20 07:11:30 crc kubenswrapper[5136]: E0320 07:11:30.495352 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de68a814-1b9a-4aad-9841-790f24b79e9e" containerName="init" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.495359 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="de68a814-1b9a-4aad-9841-790f24b79e9e" containerName="init" Mar 20 07:11:30 crc kubenswrapper[5136]: E0320 07:11:30.495368 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de68a814-1b9a-4aad-9841-790f24b79e9e" containerName="dnsmasq-dns" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.495374 5136 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="de68a814-1b9a-4aad-9841-790f24b79e9e" containerName="dnsmasq-dns" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.495525 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c9bf89-c898-469c-8a83-e1b945b234a6" containerName="mariadb-account-create-update" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.495539 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="de68a814-1b9a-4aad-9841-790f24b79e9e" containerName="dnsmasq-dns" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.496080 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gpk5l" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.501018 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.503884 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gpk5l"] Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.579937 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh9wn\" (UniqueName: \"kubernetes.io/projected/01c3454a-7ea9-4c46-9fc5-1cec3a2d445b-kube-api-access-gh9wn\") pod \"root-account-create-update-gpk5l\" (UID: \"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b\") " pod="openstack/root-account-create-update-gpk5l" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.579983 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01c3454a-7ea9-4c46-9fc5-1cec3a2d445b-operator-scripts\") pod \"root-account-create-update-gpk5l\" (UID: \"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b\") " pod="openstack/root-account-create-update-gpk5l" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.681772 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gh9wn\" (UniqueName: \"kubernetes.io/projected/01c3454a-7ea9-4c46-9fc5-1cec3a2d445b-kube-api-access-gh9wn\") pod \"root-account-create-update-gpk5l\" (UID: \"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b\") " pod="openstack/root-account-create-update-gpk5l" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.682082 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01c3454a-7ea9-4c46-9fc5-1cec3a2d445b-operator-scripts\") pod \"root-account-create-update-gpk5l\" (UID: \"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b\") " pod="openstack/root-account-create-update-gpk5l" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.682880 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01c3454a-7ea9-4c46-9fc5-1cec3a2d445b-operator-scripts\") pod \"root-account-create-update-gpk5l\" (UID: \"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b\") " pod="openstack/root-account-create-update-gpk5l" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.717092 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh9wn\" (UniqueName: \"kubernetes.io/projected/01c3454a-7ea9-4c46-9fc5-1cec3a2d445b-kube-api-access-gh9wn\") pod \"root-account-create-update-gpk5l\" (UID: \"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b\") " pod="openstack/root-account-create-update-gpk5l" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.845417 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gpk5l" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.982551 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.087520 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtknk\" (UniqueName: \"kubernetes.io/projected/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-kube-api-access-xtknk\") pod \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.087639 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-ring-data-devices\") pod \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.087676 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-combined-ca-bundle\") pod \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.087716 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-swiftconf\") pod \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.087749 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-scripts\") pod \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.087838 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dispersionconf\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-dispersionconf\") pod \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.087874 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-etc-swift\") pod \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.088443 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60" (UID: "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.089351 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60" (UID: "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.103746 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-kube-api-access-xtknk" (OuterVolumeSpecName: "kube-api-access-xtknk") pod "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60" (UID: "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60"). InnerVolumeSpecName "kube-api-access-xtknk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.108208 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60" (UID: "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.110267 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60" (UID: "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.113671 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60" (UID: "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.114251 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-scripts" (OuterVolumeSpecName: "scripts") pod "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60" (UID: "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.189704 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtknk\" (UniqueName: \"kubernetes.io/projected/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-kube-api-access-xtknk\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.189747 5136 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.189757 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.189766 5136 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.189774 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.189782 5136 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.189790 5136 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:31 crc kubenswrapper[5136]: W0320 07:11:31.288424 5136 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01c3454a_7ea9_4c46_9fc5_1cec3a2d445b.slice/crio-c4b51ad16741164e53f30bf9875833941ddae2a2b3716b28843403d7d414550f WatchSource:0}: Error finding container c4b51ad16741164e53f30bf9875833941ddae2a2b3716b28843403d7d414550f: Status 404 returned error can't find the container with id c4b51ad16741164e53f30bf9875833941ddae2a2b3716b28843403d7d414550f Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.289024 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gpk5l"] Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.630309 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gpk5l" event={"ID":"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b","Type":"ContainerStarted","Data":"c4b51ad16741164e53f30bf9875833941ddae2a2b3716b28843403d7d414550f"} Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.632940 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v7xvp" event={"ID":"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60","Type":"ContainerDied","Data":"158b8904c559e5367b1f3b8f9dd4746bcb9987780df1531c47db70ae775f7d6f"} Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.632963 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="158b8904c559e5367b1f3b8f9dd4746bcb9987780df1531c47db70ae775f7d6f" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.632998 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:32 crc kubenswrapper[5136]: I0320 07:11:32.643309 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gpk5l" event={"ID":"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b","Type":"ContainerStarted","Data":"27458c6d0396483e2bf32a7b77f963fd7b5299335805aa8a1978233da54516f3"} Mar 20 07:11:32 crc kubenswrapper[5136]: I0320 07:11:32.814436 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:32 crc kubenswrapper[5136]: I0320 07:11:32.823937 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:32 crc kubenswrapper[5136]: I0320 07:11:32.924079 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 20 07:11:33 crc kubenswrapper[5136]: I0320 07:11:33.652228 5136 generic.go:334] "Generic (PLEG): container finished" podID="01c3454a-7ea9-4c46-9fc5-1cec3a2d445b" containerID="27458c6d0396483e2bf32a7b77f963fd7b5299335805aa8a1978233da54516f3" exitCode=0 Mar 20 07:11:33 crc kubenswrapper[5136]: I0320 07:11:33.652271 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gpk5l" event={"ID":"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b","Type":"ContainerDied","Data":"27458c6d0396483e2bf32a7b77f963fd7b5299335805aa8a1978233da54516f3"} Mar 20 07:11:34 crc kubenswrapper[5136]: I0320 07:11:34.815482 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gnwt6" podUID="04ee32c0-35eb-488d-b166-0ad8a8d09f48" containerName="ovn-controller" probeResult="failure" output=< Mar 20 07:11:34 crc kubenswrapper[5136]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 20 07:11:34 crc kubenswrapper[5136]: > Mar 20 07:11:34 crc kubenswrapper[5136]: I0320 07:11:34.827257 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:11:34 crc kubenswrapper[5136]: I0320 07:11:34.828038 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.062321 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gnwt6-config-zcpwv"] Mar 20 07:11:35 crc kubenswrapper[5136]: E0320 07:11:35.063046 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abe527e6-fcfe-4955-a1c4-b2b63f1e3c60" containerName="swift-ring-rebalance" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.063062 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe527e6-fcfe-4955-a1c4-b2b63f1e3c60" containerName="swift-ring-rebalance" 
Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.063277 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="abe527e6-fcfe-4955-a1c4-b2b63f1e3c60" containerName="swift-ring-rebalance" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.064190 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.066627 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.075982 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gnwt6-config-zcpwv"] Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.151380 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-run-ovn\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.151437 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-run\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.151457 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-log-ovn\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc 
kubenswrapper[5136]: I0320 07:11:35.151486 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txpkv\" (UniqueName: \"kubernetes.io/projected/8e30801a-f333-4f24-b301-4e03b644b07b-kube-api-access-txpkv\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.151566 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8e30801a-f333-4f24-b301-4e03b644b07b-additional-scripts\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.151586 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e30801a-f333-4f24-b301-4e03b644b07b-scripts\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.253411 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8e30801a-f333-4f24-b301-4e03b644b07b-additional-scripts\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.253450 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e30801a-f333-4f24-b301-4e03b644b07b-scripts\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " 
pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.253472 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-run-ovn\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.253505 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-run\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.253521 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-log-ovn\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.253546 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txpkv\" (UniqueName: \"kubernetes.io/projected/8e30801a-f333-4f24-b301-4e03b644b07b-kube-api-access-txpkv\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.253837 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-run\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " 
pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.253839 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-run-ovn\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.253839 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-log-ovn\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.254618 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8e30801a-f333-4f24-b301-4e03b644b07b-additional-scripts\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.255471 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e30801a-f333-4f24-b301-4e03b644b07b-scripts\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.271023 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txpkv\" (UniqueName: \"kubernetes.io/projected/8e30801a-f333-4f24-b301-4e03b644b07b-kube-api-access-txpkv\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " 
pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.402143 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:39 crc kubenswrapper[5136]: I0320 07:11:39.742078 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 20 07:11:39 crc kubenswrapper[5136]: I0320 07:11:39.819502 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gnwt6" podUID="04ee32c0-35eb-488d-b166-0ad8a8d09f48" containerName="ovn-controller" probeResult="failure" output=< Mar 20 07:11:39 crc kubenswrapper[5136]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 20 07:11:39 crc kubenswrapper[5136]: > Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.036621 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-qrg9s"] Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.038263 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-qrg9s" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.050486 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qrg9s"] Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.137047 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f4b546d-a206-4e15-b21b-850ef44aac79-operator-scripts\") pod \"cinder-db-create-qrg9s\" (UID: \"4f4b546d-a206-4e15-b21b-850ef44aac79\") " pod="openstack/cinder-db-create-qrg9s" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.137186 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc486\" (UniqueName: \"kubernetes.io/projected/4f4b546d-a206-4e15-b21b-850ef44aac79-kube-api-access-wc486\") pod \"cinder-db-create-qrg9s\" (UID: \"4f4b546d-a206-4e15-b21b-850ef44aac79\") " pod="openstack/cinder-db-create-qrg9s" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.170076 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-fdc6-account-create-update-sfc2q"] Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.171477 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-fdc6-account-create-update-sfc2q" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.178145 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fdc6-account-create-update-sfc2q"] Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.201104 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.238762 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgvc4\" (UniqueName: \"kubernetes.io/projected/ccfe42cb-9794-449c-8ad8-54d68bf21607-kube-api-access-cgvc4\") pod \"cinder-fdc6-account-create-update-sfc2q\" (UID: \"ccfe42cb-9794-449c-8ad8-54d68bf21607\") " pod="openstack/cinder-fdc6-account-create-update-sfc2q" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.238852 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc486\" (UniqueName: \"kubernetes.io/projected/4f4b546d-a206-4e15-b21b-850ef44aac79-kube-api-access-wc486\") pod \"cinder-db-create-qrg9s\" (UID: \"4f4b546d-a206-4e15-b21b-850ef44aac79\") " pod="openstack/cinder-db-create-qrg9s" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.238928 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f4b546d-a206-4e15-b21b-850ef44aac79-operator-scripts\") pod \"cinder-db-create-qrg9s\" (UID: \"4f4b546d-a206-4e15-b21b-850ef44aac79\") " pod="openstack/cinder-db-create-qrg9s" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.238967 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccfe42cb-9794-449c-8ad8-54d68bf21607-operator-scripts\") pod \"cinder-fdc6-account-create-update-sfc2q\" (UID: 
\"ccfe42cb-9794-449c-8ad8-54d68bf21607\") " pod="openstack/cinder-fdc6-account-create-update-sfc2q" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.240009 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f4b546d-a206-4e15-b21b-850ef44aac79-operator-scripts\") pod \"cinder-db-create-qrg9s\" (UID: \"4f4b546d-a206-4e15-b21b-850ef44aac79\") " pod="openstack/cinder-db-create-qrg9s" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.276163 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc486\" (UniqueName: \"kubernetes.io/projected/4f4b546d-a206-4e15-b21b-850ef44aac79-kube-api-access-wc486\") pod \"cinder-db-create-qrg9s\" (UID: \"4f4b546d-a206-4e15-b21b-850ef44aac79\") " pod="openstack/cinder-db-create-qrg9s" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.319986 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.340936 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgvc4\" (UniqueName: \"kubernetes.io/projected/ccfe42cb-9794-449c-8ad8-54d68bf21607-kube-api-access-cgvc4\") pod \"cinder-fdc6-account-create-update-sfc2q\" (UID: \"ccfe42cb-9794-449c-8ad8-54d68bf21607\") " pod="openstack/cinder-fdc6-account-create-update-sfc2q" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.341085 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccfe42cb-9794-449c-8ad8-54d68bf21607-operator-scripts\") pod \"cinder-fdc6-account-create-update-sfc2q\" (UID: \"ccfe42cb-9794-449c-8ad8-54d68bf21607\") " pod="openstack/cinder-fdc6-account-create-update-sfc2q" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.341971 5136 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccfe42cb-9794-449c-8ad8-54d68bf21607-operator-scripts\") pod \"cinder-fdc6-account-create-update-sfc2q\" (UID: \"ccfe42cb-9794-449c-8ad8-54d68bf21607\") " pod="openstack/cinder-fdc6-account-create-update-sfc2q" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.351414 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-5429-account-create-update-kc9f7"] Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.352590 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5429-account-create-update-kc9f7" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.354721 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.372004 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-7vvbn"] Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.373340 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7vvbn" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.391197 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-qrg9s" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.397343 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgvc4\" (UniqueName: \"kubernetes.io/projected/ccfe42cb-9794-449c-8ad8-54d68bf21607-kube-api-access-cgvc4\") pod \"cinder-fdc6-account-create-update-sfc2q\" (UID: \"ccfe42cb-9794-449c-8ad8-54d68bf21607\") " pod="openstack/cinder-fdc6-account-create-update-sfc2q" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.434870 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5429-account-create-update-kc9f7"] Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.444252 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec1091b0-0c0e-40a9-9131-93d8e912d0af-operator-scripts\") pod \"barbican-5429-account-create-update-kc9f7\" (UID: \"ec1091b0-0c0e-40a9-9131-93d8e912d0af\") " pod="openstack/barbican-5429-account-create-update-kc9f7" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.444296 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81ba128a-ff3d-42a9-aa76-04e60b3a2cb5-operator-scripts\") pod \"barbican-db-create-7vvbn\" (UID: \"81ba128a-ff3d-42a9-aa76-04e60b3a2cb5\") " pod="openstack/barbican-db-create-7vvbn" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.444373 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9wbc\" (UniqueName: \"kubernetes.io/projected/ec1091b0-0c0e-40a9-9131-93d8e912d0af-kube-api-access-w9wbc\") pod \"barbican-5429-account-create-update-kc9f7\" (UID: \"ec1091b0-0c0e-40a9-9131-93d8e912d0af\") " pod="openstack/barbican-5429-account-create-update-kc9f7" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 
07:11:40.444537 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lvc8\" (UniqueName: \"kubernetes.io/projected/81ba128a-ff3d-42a9-aa76-04e60b3a2cb5-kube-api-access-2lvc8\") pod \"barbican-db-create-7vvbn\" (UID: \"81ba128a-ff3d-42a9-aa76-04e60b3a2cb5\") " pod="openstack/barbican-db-create-7vvbn" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.458095 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-7vvbn"] Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.507873 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-4vtvh"] Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.508976 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4vtvh" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.509329 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fdc6-account-create-update-sfc2q" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.545242 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-4vtvh"] Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.550296 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lvc8\" (UniqueName: \"kubernetes.io/projected/81ba128a-ff3d-42a9-aa76-04e60b3a2cb5-kube-api-access-2lvc8\") pod \"barbican-db-create-7vvbn\" (UID: \"81ba128a-ff3d-42a9-aa76-04e60b3a2cb5\") " pod="openstack/barbican-db-create-7vvbn" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.550398 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4svk\" (UniqueName: \"kubernetes.io/projected/52bcca3a-bd10-425e-bc7f-f78c8c4a0271-kube-api-access-v4svk\") pod \"neutron-db-create-4vtvh\" (UID: \"52bcca3a-bd10-425e-bc7f-f78c8c4a0271\") " 
pod="openstack/neutron-db-create-4vtvh" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.550436 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec1091b0-0c0e-40a9-9131-93d8e912d0af-operator-scripts\") pod \"barbican-5429-account-create-update-kc9f7\" (UID: \"ec1091b0-0c0e-40a9-9131-93d8e912d0af\") " pod="openstack/barbican-5429-account-create-update-kc9f7" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.550455 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81ba128a-ff3d-42a9-aa76-04e60b3a2cb5-operator-scripts\") pod \"barbican-db-create-7vvbn\" (UID: \"81ba128a-ff3d-42a9-aa76-04e60b3a2cb5\") " pod="openstack/barbican-db-create-7vvbn" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.550472 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52bcca3a-bd10-425e-bc7f-f78c8c4a0271-operator-scripts\") pod \"neutron-db-create-4vtvh\" (UID: \"52bcca3a-bd10-425e-bc7f-f78c8c4a0271\") " pod="openstack/neutron-db-create-4vtvh" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.550501 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9wbc\" (UniqueName: \"kubernetes.io/projected/ec1091b0-0c0e-40a9-9131-93d8e912d0af-kube-api-access-w9wbc\") pod \"barbican-5429-account-create-update-kc9f7\" (UID: \"ec1091b0-0c0e-40a9-9131-93d8e912d0af\") " pod="openstack/barbican-5429-account-create-update-kc9f7" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.551669 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec1091b0-0c0e-40a9-9131-93d8e912d0af-operator-scripts\") pod \"barbican-5429-account-create-update-kc9f7\" (UID: 
\"ec1091b0-0c0e-40a9-9131-93d8e912d0af\") " pod="openstack/barbican-5429-account-create-update-kc9f7" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.551873 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81ba128a-ff3d-42a9-aa76-04e60b3a2cb5-operator-scripts\") pod \"barbican-db-create-7vvbn\" (UID: \"81ba128a-ff3d-42a9-aa76-04e60b3a2cb5\") " pod="openstack/barbican-db-create-7vvbn" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.559889 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-vs5ks"] Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.572038 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vs5ks" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.576597 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-vs5ks"] Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.578527 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lvc8\" (UniqueName: \"kubernetes.io/projected/81ba128a-ff3d-42a9-aa76-04e60b3a2cb5-kube-api-access-2lvc8\") pod \"barbican-db-create-7vvbn\" (UID: \"81ba128a-ff3d-42a9-aa76-04e60b3a2cb5\") " pod="openstack/barbican-db-create-7vvbn" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.589575 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.593220 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6hlpf" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.593434 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.599112 5136 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"keystone-config-data" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.602493 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9wbc\" (UniqueName: \"kubernetes.io/projected/ec1091b0-0c0e-40a9-9131-93d8e912d0af-kube-api-access-w9wbc\") pod \"barbican-5429-account-create-update-kc9f7\" (UID: \"ec1091b0-0c0e-40a9-9131-93d8e912d0af\") " pod="openstack/barbican-5429-account-create-update-kc9f7" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.652642 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4svk\" (UniqueName: \"kubernetes.io/projected/52bcca3a-bd10-425e-bc7f-f78c8c4a0271-kube-api-access-v4svk\") pod \"neutron-db-create-4vtvh\" (UID: \"52bcca3a-bd10-425e-bc7f-f78c8c4a0271\") " pod="openstack/neutron-db-create-4vtvh" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.652727 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52bcca3a-bd10-425e-bc7f-f78c8c4a0271-operator-scripts\") pod \"neutron-db-create-4vtvh\" (UID: \"52bcca3a-bd10-425e-bc7f-f78c8c4a0271\") " pod="openstack/neutron-db-create-4vtvh" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.653801 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52bcca3a-bd10-425e-bc7f-f78c8c4a0271-operator-scripts\") pod \"neutron-db-create-4vtvh\" (UID: \"52bcca3a-bd10-425e-bc7f-f78c8c4a0271\") " pod="openstack/neutron-db-create-4vtvh" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.664634 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bc06-account-create-update-lm56h"] Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.666091 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-bc06-account-create-update-lm56h" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.668011 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5429-account-create-update-kc9f7" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.675185 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.676630 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4svk\" (UniqueName: \"kubernetes.io/projected/52bcca3a-bd10-425e-bc7f-f78c8c4a0271-kube-api-access-v4svk\") pod \"neutron-db-create-4vtvh\" (UID: \"52bcca3a-bd10-425e-bc7f-f78c8c4a0271\") " pod="openstack/neutron-db-create-4vtvh" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.687952 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7vvbn" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.705490 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bc06-account-create-update-lm56h"] Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.753529 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e7cfea-b971-447e-a166-20b4827ce7dc-config-data\") pod \"keystone-db-sync-vs5ks\" (UID: \"c7e7cfea-b971-447e-a166-20b4827ce7dc\") " pod="openstack/keystone-db-sync-vs5ks" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.753589 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh2l2\" (UniqueName: \"kubernetes.io/projected/c7e7cfea-b971-447e-a166-20b4827ce7dc-kube-api-access-mh2l2\") pod \"keystone-db-sync-vs5ks\" (UID: \"c7e7cfea-b971-447e-a166-20b4827ce7dc\") " pod="openstack/keystone-db-sync-vs5ks" Mar 20 07:11:40 crc 
kubenswrapper[5136]: I0320 07:11:40.753726 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e7cfea-b971-447e-a166-20b4827ce7dc-combined-ca-bundle\") pod \"keystone-db-sync-vs5ks\" (UID: \"c7e7cfea-b971-447e-a166-20b4827ce7dc\") " pod="openstack/keystone-db-sync-vs5ks" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.834125 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4vtvh" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.855259 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2b269d7-6c83-46fd-b85c-5d9dba5ccbda-operator-scripts\") pod \"neutron-bc06-account-create-update-lm56h\" (UID: \"d2b269d7-6c83-46fd-b85c-5d9dba5ccbda\") " pod="openstack/neutron-bc06-account-create-update-lm56h" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.855317 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e7cfea-b971-447e-a166-20b4827ce7dc-config-data\") pod \"keystone-db-sync-vs5ks\" (UID: \"c7e7cfea-b971-447e-a166-20b4827ce7dc\") " pod="openstack/keystone-db-sync-vs5ks" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.855351 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh2l2\" (UniqueName: \"kubernetes.io/projected/c7e7cfea-b971-447e-a166-20b4827ce7dc-kube-api-access-mh2l2\") pod \"keystone-db-sync-vs5ks\" (UID: \"c7e7cfea-b971-447e-a166-20b4827ce7dc\") " pod="openstack/keystone-db-sync-vs5ks" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.855728 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdwtq\" (UniqueName: 
\"kubernetes.io/projected/d2b269d7-6c83-46fd-b85c-5d9dba5ccbda-kube-api-access-pdwtq\") pod \"neutron-bc06-account-create-update-lm56h\" (UID: \"d2b269d7-6c83-46fd-b85c-5d9dba5ccbda\") " pod="openstack/neutron-bc06-account-create-update-lm56h" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.855797 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e7cfea-b971-447e-a166-20b4827ce7dc-combined-ca-bundle\") pod \"keystone-db-sync-vs5ks\" (UID: \"c7e7cfea-b971-447e-a166-20b4827ce7dc\") " pod="openstack/keystone-db-sync-vs5ks" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.858983 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e7cfea-b971-447e-a166-20b4827ce7dc-combined-ca-bundle\") pod \"keystone-db-sync-vs5ks\" (UID: \"c7e7cfea-b971-447e-a166-20b4827ce7dc\") " pod="openstack/keystone-db-sync-vs5ks" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.859115 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e7cfea-b971-447e-a166-20b4827ce7dc-config-data\") pod \"keystone-db-sync-vs5ks\" (UID: \"c7e7cfea-b971-447e-a166-20b4827ce7dc\") " pod="openstack/keystone-db-sync-vs5ks" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.877459 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh2l2\" (UniqueName: \"kubernetes.io/projected/c7e7cfea-b971-447e-a166-20b4827ce7dc-kube-api-access-mh2l2\") pod \"keystone-db-sync-vs5ks\" (UID: \"c7e7cfea-b971-447e-a166-20b4827ce7dc\") " pod="openstack/keystone-db-sync-vs5ks" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.957778 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdwtq\" (UniqueName: 
\"kubernetes.io/projected/d2b269d7-6c83-46fd-b85c-5d9dba5ccbda-kube-api-access-pdwtq\") pod \"neutron-bc06-account-create-update-lm56h\" (UID: \"d2b269d7-6c83-46fd-b85c-5d9dba5ccbda\") " pod="openstack/neutron-bc06-account-create-update-lm56h" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.957869 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2b269d7-6c83-46fd-b85c-5d9dba5ccbda-operator-scripts\") pod \"neutron-bc06-account-create-update-lm56h\" (UID: \"d2b269d7-6c83-46fd-b85c-5d9dba5ccbda\") " pod="openstack/neutron-bc06-account-create-update-lm56h" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.958492 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2b269d7-6c83-46fd-b85c-5d9dba5ccbda-operator-scripts\") pod \"neutron-bc06-account-create-update-lm56h\" (UID: \"d2b269d7-6c83-46fd-b85c-5d9dba5ccbda\") " pod="openstack/neutron-bc06-account-create-update-lm56h" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.972579 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdwtq\" (UniqueName: \"kubernetes.io/projected/d2b269d7-6c83-46fd-b85c-5d9dba5ccbda-kube-api-access-pdwtq\") pod \"neutron-bc06-account-create-update-lm56h\" (UID: \"d2b269d7-6c83-46fd-b85c-5d9dba5ccbda\") " pod="openstack/neutron-bc06-account-create-update-lm56h" Mar 20 07:11:41 crc kubenswrapper[5136]: I0320 07:11:41.020927 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vs5ks" Mar 20 07:11:41 crc kubenswrapper[5136]: I0320 07:11:41.035769 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-bc06-account-create-update-lm56h" Mar 20 07:11:42 crc kubenswrapper[5136]: E0320 07:11:42.030596 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api@sha256:dae5e39780d5a15eed030c7009f8e5317139d447558ac83f038497be594be120" Mar 20 07:11:42 crc kubenswrapper[5136]: E0320 07:11:42.031275 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:dae5e39780d5a15eed030c7009f8e5317139d447558ac83f038497be594be120,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4cjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log
,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-ldzkm_openstack(a8c6efdb-3b8c-4123-bfb6-a67cd416fb18): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 20 07:11:42 crc kubenswrapper[5136]: E0320 07:11:42.033414 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-ldzkm" podUID="a8c6efdb-3b8c-4123-bfb6-a67cd416fb18" Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.114875 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gpk5l" Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.280450 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gh9wn\" (UniqueName: \"kubernetes.io/projected/01c3454a-7ea9-4c46-9fc5-1cec3a2d445b-kube-api-access-gh9wn\") pod \"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b\" (UID: \"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b\") " Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.280765 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01c3454a-7ea9-4c46-9fc5-1cec3a2d445b-operator-scripts\") pod \"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b\" (UID: \"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b\") " Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.283120 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01c3454a-7ea9-4c46-9fc5-1cec3a2d445b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "01c3454a-7ea9-4c46-9fc5-1cec3a2d445b" (UID: "01c3454a-7ea9-4c46-9fc5-1cec3a2d445b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.293901 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01c3454a-7ea9-4c46-9fc5-1cec3a2d445b-kube-api-access-gh9wn" (OuterVolumeSpecName: "kube-api-access-gh9wn") pod "01c3454a-7ea9-4c46-9fc5-1cec3a2d445b" (UID: "01c3454a-7ea9-4c46-9fc5-1cec3a2d445b"). InnerVolumeSpecName "kube-api-access-gh9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.383301 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gh9wn\" (UniqueName: \"kubernetes.io/projected/01c3454a-7ea9-4c46-9fc5-1cec3a2d445b-kube-api-access-gh9wn\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.383333 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01c3454a-7ea9-4c46-9fc5-1cec3a2d445b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.542226 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bc06-account-create-update-lm56h"] Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.735736 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bc06-account-create-update-lm56h" event={"ID":"d2b269d7-6c83-46fd-b85c-5d9dba5ccbda","Type":"ContainerStarted","Data":"0fb532a79a9aa102a7a434267fea66c47410f2caf36ee8ef0fa620b6493b6b37"} Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.738092 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gpk5l" Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.738541 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gpk5l" event={"ID":"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b","Type":"ContainerDied","Data":"c4b51ad16741164e53f30bf9875833941ddae2a2b3716b28843403d7d414550f"} Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.738592 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4b51ad16741164e53f30bf9875833941ddae2a2b3716b28843403d7d414550f" Mar 20 07:11:42 crc kubenswrapper[5136]: E0320 07:11:42.739396 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api@sha256:dae5e39780d5a15eed030c7009f8e5317139d447558ac83f038497be594be120\\\"\"" pod="openstack/glance-db-sync-ldzkm" podUID="a8c6efdb-3b8c-4123-bfb6-a67cd416fb18" Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.825663 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5429-account-create-update-kc9f7"] Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.836905 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-4vtvh"] Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.958775 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 20 07:11:42 crc kubenswrapper[5136]: W0320 07:11:42.963474 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd944fb6_1517_4f5b_b579_79d8f1f3da19.slice/crio-d8cd982e91f64705da20c6a48fa3020dac8ffb0c31aec91bfc9c77ff27912742 WatchSource:0}: Error finding container d8cd982e91f64705da20c6a48fa3020dac8ffb0c31aec91bfc9c77ff27912742: Status 404 returned error can't find the 
container with id d8cd982e91f64705da20c6a48fa3020dac8ffb0c31aec91bfc9c77ff27912742 Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.005477 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fdc6-account-create-update-sfc2q"] Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.019612 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qrg9s"] Mar 20 07:11:43 crc kubenswrapper[5136]: W0320 07:11:43.027069 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f4b546d_a206_4e15_b21b_850ef44aac79.slice/crio-377ab0d8abc56dfea3857798a60589d54b8af4b0a3e0ecfc809ded51a854707d WatchSource:0}: Error finding container 377ab0d8abc56dfea3857798a60589d54b8af4b0a3e0ecfc809ded51a854707d: Status 404 returned error can't find the container with id 377ab0d8abc56dfea3857798a60589d54b8af4b0a3e0ecfc809ded51a854707d Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.038755 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-vs5ks"] Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.048784 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-7vvbn"] Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.064406 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gnwt6-config-zcpwv"] Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.748276 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vs5ks" event={"ID":"c7e7cfea-b971-447e-a166-20b4827ce7dc","Type":"ContainerStarted","Data":"30980e34c4254b1c4f948141d08320685947985bfdf2f9e08996624f149427ee"} Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.750742 5136 generic.go:334] "Generic (PLEG): container finished" podID="81ba128a-ff3d-42a9-aa76-04e60b3a2cb5" containerID="5e947f339491ac05ba12abc9cb95630dcf48840148917141c549dbda5ca4a25f" 
exitCode=0 Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.750806 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7vvbn" event={"ID":"81ba128a-ff3d-42a9-aa76-04e60b3a2cb5","Type":"ContainerDied","Data":"5e947f339491ac05ba12abc9cb95630dcf48840148917141c549dbda5ca4a25f"} Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.750887 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7vvbn" event={"ID":"81ba128a-ff3d-42a9-aa76-04e60b3a2cb5","Type":"ContainerStarted","Data":"84bc3a1e112c37cd1f67e75a52abb8a000d51e43387e883009d8819dae89b9de"} Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.756433 5136 generic.go:334] "Generic (PLEG): container finished" podID="d2b269d7-6c83-46fd-b85c-5d9dba5ccbda" containerID="0ed02eb432d6f42e0d9bf84365b12025d2b0ecfccb688b075f04ab7b6e93a89d" exitCode=0 Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.756524 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bc06-account-create-update-lm56h" event={"ID":"d2b269d7-6c83-46fd-b85c-5d9dba5ccbda","Type":"ContainerDied","Data":"0ed02eb432d6f42e0d9bf84365b12025d2b0ecfccb688b075f04ab7b6e93a89d"} Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.758235 5136 generic.go:334] "Generic (PLEG): container finished" podID="ccfe42cb-9794-449c-8ad8-54d68bf21607" containerID="32a4b8b42d71b772e9ef90a830d8bb2691b008e79e6ac5eedc1a261ab6fb23b2" exitCode=0 Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.758299 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fdc6-account-create-update-sfc2q" event={"ID":"ccfe42cb-9794-449c-8ad8-54d68bf21607","Type":"ContainerDied","Data":"32a4b8b42d71b772e9ef90a830d8bb2691b008e79e6ac5eedc1a261ab6fb23b2"} Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.758323 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fdc6-account-create-update-sfc2q" 
event={"ID":"ccfe42cb-9794-449c-8ad8-54d68bf21607","Type":"ContainerStarted","Data":"e8a7f3cb69a5975bc30fc2f03d890a9002e156e4337032cea99e0a3317da2e4f"} Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.784181 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"d8cd982e91f64705da20c6a48fa3020dac8ffb0c31aec91bfc9c77ff27912742"} Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.789181 5136 generic.go:334] "Generic (PLEG): container finished" podID="52bcca3a-bd10-425e-bc7f-f78c8c4a0271" containerID="52c9595f9d03cfa1e4df7232d34e2bf01954bbb2d3d7f55b6c4baddaa2f4853a" exitCode=0 Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.789569 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4vtvh" event={"ID":"52bcca3a-bd10-425e-bc7f-f78c8c4a0271","Type":"ContainerDied","Data":"52c9595f9d03cfa1e4df7232d34e2bf01954bbb2d3d7f55b6c4baddaa2f4853a"} Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.789669 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4vtvh" event={"ID":"52bcca3a-bd10-425e-bc7f-f78c8c4a0271","Type":"ContainerStarted","Data":"d9fac24447701126e9a237d1bf5d69bcfb5c81fad7661f19664ae4562c54f208"} Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.796021 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gnwt6-config-zcpwv" event={"ID":"8e30801a-f333-4f24-b301-4e03b644b07b","Type":"ContainerStarted","Data":"152cfd50a682e083fe5fcb83f9e826724106ecbcb51dbd391cff3907a957fa98"} Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.796075 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gnwt6-config-zcpwv" event={"ID":"8e30801a-f333-4f24-b301-4e03b644b07b","Type":"ContainerStarted","Data":"7c314f2f80dc7cc4e88b76ae835a24e87396cc9afa36bc502acc09f41465ff1c"} Mar 20 07:11:43 crc 
kubenswrapper[5136]: I0320 07:11:43.799148 5136 generic.go:334] "Generic (PLEG): container finished" podID="ec1091b0-0c0e-40a9-9131-93d8e912d0af" containerID="47ae9136918142f0659195583b1d45f1b8d098ff54fd4db577e632c9d504d4ec" exitCode=0 Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.799218 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5429-account-create-update-kc9f7" event={"ID":"ec1091b0-0c0e-40a9-9131-93d8e912d0af","Type":"ContainerDied","Data":"47ae9136918142f0659195583b1d45f1b8d098ff54fd4db577e632c9d504d4ec"} Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.799250 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5429-account-create-update-kc9f7" event={"ID":"ec1091b0-0c0e-40a9-9131-93d8e912d0af","Type":"ContainerStarted","Data":"1d74ef03c10f467ffa66ef9e94663926540a5700594d348d3814bd28d77786fa"} Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.801767 5136 generic.go:334] "Generic (PLEG): container finished" podID="4f4b546d-a206-4e15-b21b-850ef44aac79" containerID="685537caeff80758998e736f40d87da6358ae395ce8425cb44887ce77751a0c9" exitCode=0 Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.801858 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qrg9s" event={"ID":"4f4b546d-a206-4e15-b21b-850ef44aac79","Type":"ContainerDied","Data":"685537caeff80758998e736f40d87da6358ae395ce8425cb44887ce77751a0c9"} Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.801894 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qrg9s" event={"ID":"4f4b546d-a206-4e15-b21b-850ef44aac79","Type":"ContainerStarted","Data":"377ab0d8abc56dfea3857798a60589d54b8af4b0a3e0ecfc809ded51a854707d"} Mar 20 07:11:44 crc kubenswrapper[5136]: I0320 07:11:44.807665 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-gnwt6" Mar 20 07:11:44 crc kubenswrapper[5136]: I0320 07:11:44.815648 
5136 generic.go:334] "Generic (PLEG): container finished" podID="8e30801a-f333-4f24-b301-4e03b644b07b" containerID="152cfd50a682e083fe5fcb83f9e826724106ecbcb51dbd391cff3907a957fa98" exitCode=0 Mar 20 07:11:44 crc kubenswrapper[5136]: I0320 07:11:44.815709 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gnwt6-config-zcpwv" event={"ID":"8e30801a-f333-4f24-b301-4e03b644b07b","Type":"ContainerDied","Data":"152cfd50a682e083fe5fcb83f9e826724106ecbcb51dbd391cff3907a957fa98"} Mar 20 07:11:44 crc kubenswrapper[5136]: I0320 07:11:44.818605 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"ece61fe1418a6ebf3f8719105d88f8a8d234c50f91addf3cf7652758e2238539"} Mar 20 07:11:44 crc kubenswrapper[5136]: I0320 07:11:44.818636 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"2a0075a45d95589caf98b737d491a438c7cb43e5b9c36c7d07ea1aebe37bd4aa"} Mar 20 07:11:44 crc kubenswrapper[5136]: I0320 07:11:44.818649 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"83949dae8602e569fb150aeab8e9d2eb9613a5621a13c04058efa4f9dd9f300d"} Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.165003 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5429-account-create-update-kc9f7" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.351379 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec1091b0-0c0e-40a9-9131-93d8e912d0af-operator-scripts\") pod \"ec1091b0-0c0e-40a9-9131-93d8e912d0af\" (UID: \"ec1091b0-0c0e-40a9-9131-93d8e912d0af\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.351466 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9wbc\" (UniqueName: \"kubernetes.io/projected/ec1091b0-0c0e-40a9-9131-93d8e912d0af-kube-api-access-w9wbc\") pod \"ec1091b0-0c0e-40a9-9131-93d8e912d0af\" (UID: \"ec1091b0-0c0e-40a9-9131-93d8e912d0af\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.360008 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec1091b0-0c0e-40a9-9131-93d8e912d0af-kube-api-access-w9wbc" (OuterVolumeSpecName: "kube-api-access-w9wbc") pod "ec1091b0-0c0e-40a9-9131-93d8e912d0af" (UID: "ec1091b0-0c0e-40a9-9131-93d8e912d0af"). InnerVolumeSpecName "kube-api-access-w9wbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.365193 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec1091b0-0c0e-40a9-9131-93d8e912d0af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec1091b0-0c0e-40a9-9131-93d8e912d0af" (UID: "ec1091b0-0c0e-40a9-9131-93d8e912d0af"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.453710 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec1091b0-0c0e-40a9-9131-93d8e912d0af-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.453748 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9wbc\" (UniqueName: \"kubernetes.io/projected/ec1091b0-0c0e-40a9-9131-93d8e912d0af-kube-api-access-w9wbc\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.492997 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bc06-account-create-update-lm56h" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.493345 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.499583 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7vvbn" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.511682 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qrg9s" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.516769 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-4vtvh" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.657391 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-log-ovn\") pod \"8e30801a-f333-4f24-b301-4e03b644b07b\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.657453 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8e30801a-f333-4f24-b301-4e03b644b07b-additional-scripts\") pod \"8e30801a-f333-4f24-b301-4e03b644b07b\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.657462 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "8e30801a-f333-4f24-b301-4e03b644b07b" (UID: "8e30801a-f333-4f24-b301-4e03b644b07b"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.657479 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdwtq\" (UniqueName: \"kubernetes.io/projected/d2b269d7-6c83-46fd-b85c-5d9dba5ccbda-kube-api-access-pdwtq\") pod \"d2b269d7-6c83-46fd-b85c-5d9dba5ccbda\" (UID: \"d2b269d7-6c83-46fd-b85c-5d9dba5ccbda\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.657581 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52bcca3a-bd10-425e-bc7f-f78c8c4a0271-operator-scripts\") pod \"52bcca3a-bd10-425e-bc7f-f78c8c4a0271\" (UID: \"52bcca3a-bd10-425e-bc7f-f78c8c4a0271\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.658116 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e30801a-f333-4f24-b301-4e03b644b07b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "8e30801a-f333-4f24-b301-4e03b644b07b" (UID: "8e30801a-f333-4f24-b301-4e03b644b07b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.658133 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52bcca3a-bd10-425e-bc7f-f78c8c4a0271-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "52bcca3a-bd10-425e-bc7f-f78c8c4a0271" (UID: "52bcca3a-bd10-425e-bc7f-f78c8c4a0271"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.658173 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e30801a-f333-4f24-b301-4e03b644b07b-scripts\") pod \"8e30801a-f333-4f24-b301-4e03b644b07b\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.658239 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txpkv\" (UniqueName: \"kubernetes.io/projected/8e30801a-f333-4f24-b301-4e03b644b07b-kube-api-access-txpkv\") pod \"8e30801a-f333-4f24-b301-4e03b644b07b\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.658271 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lvc8\" (UniqueName: \"kubernetes.io/projected/81ba128a-ff3d-42a9-aa76-04e60b3a2cb5-kube-api-access-2lvc8\") pod \"81ba128a-ff3d-42a9-aa76-04e60b3a2cb5\" (UID: \"81ba128a-ff3d-42a9-aa76-04e60b3a2cb5\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.658322 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-run\") pod \"8e30801a-f333-4f24-b301-4e03b644b07b\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.658374 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f4b546d-a206-4e15-b21b-850ef44aac79-operator-scripts\") pod \"4f4b546d-a206-4e15-b21b-850ef44aac79\" (UID: \"4f4b546d-a206-4e15-b21b-850ef44aac79\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.658407 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-run-ovn\") pod \"8e30801a-f333-4f24-b301-4e03b644b07b\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.658427 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc486\" (UniqueName: \"kubernetes.io/projected/4f4b546d-a206-4e15-b21b-850ef44aac79-kube-api-access-wc486\") pod \"4f4b546d-a206-4e15-b21b-850ef44aac79\" (UID: \"4f4b546d-a206-4e15-b21b-850ef44aac79\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.658460 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2b269d7-6c83-46fd-b85c-5d9dba5ccbda-operator-scripts\") pod \"d2b269d7-6c83-46fd-b85c-5d9dba5ccbda\" (UID: \"d2b269d7-6c83-46fd-b85c-5d9dba5ccbda\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.658488 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81ba128a-ff3d-42a9-aa76-04e60b3a2cb5-operator-scripts\") pod \"81ba128a-ff3d-42a9-aa76-04e60b3a2cb5\" (UID: \"81ba128a-ff3d-42a9-aa76-04e60b3a2cb5\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.658527 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4svk\" (UniqueName: \"kubernetes.io/projected/52bcca3a-bd10-425e-bc7f-f78c8c4a0271-kube-api-access-v4svk\") pod \"52bcca3a-bd10-425e-bc7f-f78c8c4a0271\" (UID: \"52bcca3a-bd10-425e-bc7f-f78c8c4a0271\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.658685 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "8e30801a-f333-4f24-b301-4e03b644b07b" (UID: "8e30801a-f333-4f24-b301-4e03b644b07b"). 
InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.659017 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-run" (OuterVolumeSpecName: "var-run") pod "8e30801a-f333-4f24-b301-4e03b644b07b" (UID: "8e30801a-f333-4f24-b301-4e03b644b07b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.659175 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2b269d7-6c83-46fd-b85c-5d9dba5ccbda-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d2b269d7-6c83-46fd-b85c-5d9dba5ccbda" (UID: "d2b269d7-6c83-46fd-b85c-5d9dba5ccbda"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.659463 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f4b546d-a206-4e15-b21b-850ef44aac79-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4f4b546d-a206-4e15-b21b-850ef44aac79" (UID: "4f4b546d-a206-4e15-b21b-850ef44aac79"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.659484 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fdc6-account-create-update-sfc2q" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.659516 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ba128a-ff3d-42a9-aa76-04e60b3a2cb5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81ba128a-ff3d-42a9-aa76-04e60b3a2cb5" (UID: "81ba128a-ff3d-42a9-aa76-04e60b3a2cb5"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.659914 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e30801a-f333-4f24-b301-4e03b644b07b-scripts" (OuterVolumeSpecName: "scripts") pod "8e30801a-f333-4f24-b301-4e03b644b07b" (UID: "8e30801a-f333-4f24-b301-4e03b644b07b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.660416 5136 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.660438 5136 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8e30801a-f333-4f24-b301-4e03b644b07b-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.660449 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52bcca3a-bd10-425e-bc7f-f78c8c4a0271-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.660458 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e30801a-f333-4f24-b301-4e03b644b07b-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.660466 5136 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-run\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.660473 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4f4b546d-a206-4e15-b21b-850ef44aac79-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.660481 5136 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.660489 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2b269d7-6c83-46fd-b85c-5d9dba5ccbda-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.660497 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81ba128a-ff3d-42a9-aa76-04e60b3a2cb5-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.662021 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2b269d7-6c83-46fd-b85c-5d9dba5ccbda-kube-api-access-pdwtq" (OuterVolumeSpecName: "kube-api-access-pdwtq") pod "d2b269d7-6c83-46fd-b85c-5d9dba5ccbda" (UID: "d2b269d7-6c83-46fd-b85c-5d9dba5ccbda"). InnerVolumeSpecName "kube-api-access-pdwtq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.662387 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f4b546d-a206-4e15-b21b-850ef44aac79-kube-api-access-wc486" (OuterVolumeSpecName: "kube-api-access-wc486") pod "4f4b546d-a206-4e15-b21b-850ef44aac79" (UID: "4f4b546d-a206-4e15-b21b-850ef44aac79"). InnerVolumeSpecName "kube-api-access-wc486". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.662914 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ba128a-ff3d-42a9-aa76-04e60b3a2cb5-kube-api-access-2lvc8" (OuterVolumeSpecName: "kube-api-access-2lvc8") pod "81ba128a-ff3d-42a9-aa76-04e60b3a2cb5" (UID: "81ba128a-ff3d-42a9-aa76-04e60b3a2cb5"). InnerVolumeSpecName "kube-api-access-2lvc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.663701 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e30801a-f333-4f24-b301-4e03b644b07b-kube-api-access-txpkv" (OuterVolumeSpecName: "kube-api-access-txpkv") pod "8e30801a-f333-4f24-b301-4e03b644b07b" (UID: "8e30801a-f333-4f24-b301-4e03b644b07b"). InnerVolumeSpecName "kube-api-access-txpkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.663967 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52bcca3a-bd10-425e-bc7f-f78c8c4a0271-kube-api-access-v4svk" (OuterVolumeSpecName: "kube-api-access-v4svk") pod "52bcca3a-bd10-425e-bc7f-f78c8c4a0271" (UID: "52bcca3a-bd10-425e-bc7f-f78c8c4a0271"). InnerVolumeSpecName "kube-api-access-v4svk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.761543 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccfe42cb-9794-449c-8ad8-54d68bf21607-operator-scripts\") pod \"ccfe42cb-9794-449c-8ad8-54d68bf21607\" (UID: \"ccfe42cb-9794-449c-8ad8-54d68bf21607\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.761916 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgvc4\" (UniqueName: \"kubernetes.io/projected/ccfe42cb-9794-449c-8ad8-54d68bf21607-kube-api-access-cgvc4\") pod \"ccfe42cb-9794-449c-8ad8-54d68bf21607\" (UID: \"ccfe42cb-9794-449c-8ad8-54d68bf21607\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.761988 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccfe42cb-9794-449c-8ad8-54d68bf21607-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ccfe42cb-9794-449c-8ad8-54d68bf21607" (UID: "ccfe42cb-9794-449c-8ad8-54d68bf21607"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.762931 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccfe42cb-9794-449c-8ad8-54d68bf21607-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.763038 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txpkv\" (UniqueName: \"kubernetes.io/projected/8e30801a-f333-4f24-b301-4e03b644b07b-kube-api-access-txpkv\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.763115 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lvc8\" (UniqueName: \"kubernetes.io/projected/81ba128a-ff3d-42a9-aa76-04e60b3a2cb5-kube-api-access-2lvc8\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.763201 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc486\" (UniqueName: \"kubernetes.io/projected/4f4b546d-a206-4e15-b21b-850ef44aac79-kube-api-access-wc486\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.763306 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4svk\" (UniqueName: \"kubernetes.io/projected/52bcca3a-bd10-425e-bc7f-f78c8c4a0271-kube-api-access-v4svk\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.763421 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdwtq\" (UniqueName: \"kubernetes.io/projected/d2b269d7-6c83-46fd-b85c-5d9dba5ccbda-kube-api-access-pdwtq\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.764472 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccfe42cb-9794-449c-8ad8-54d68bf21607-kube-api-access-cgvc4" (OuterVolumeSpecName: 
"kube-api-access-cgvc4") pod "ccfe42cb-9794-449c-8ad8-54d68bf21607" (UID: "ccfe42cb-9794-449c-8ad8-54d68bf21607"). InnerVolumeSpecName "kube-api-access-cgvc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.825251 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.825300 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.835166 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-bc06-account-create-update-lm56h" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.835168 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bc06-account-create-update-lm56h" event={"ID":"d2b269d7-6c83-46fd-b85c-5d9dba5ccbda","Type":"ContainerDied","Data":"0fb532a79a9aa102a7a434267fea66c47410f2caf36ee8ef0fa620b6493b6b37"} Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.835303 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fb532a79a9aa102a7a434267fea66c47410f2caf36ee8ef0fa620b6493b6b37" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.837404 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fdc6-account-create-update-sfc2q" event={"ID":"ccfe42cb-9794-449c-8ad8-54d68bf21607","Type":"ContainerDied","Data":"e8a7f3cb69a5975bc30fc2f03d890a9002e156e4337032cea99e0a3317da2e4f"} Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.837423 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8a7f3cb69a5975bc30fc2f03d890a9002e156e4337032cea99e0a3317da2e4f" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.837463 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-fdc6-account-create-update-sfc2q" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.838661 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7vvbn" event={"ID":"81ba128a-ff3d-42a9-aa76-04e60b3a2cb5","Type":"ContainerDied","Data":"84bc3a1e112c37cd1f67e75a52abb8a000d51e43387e883009d8819dae89b9de"} Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.838678 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84bc3a1e112c37cd1f67e75a52abb8a000d51e43387e883009d8819dae89b9de" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.838748 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7vvbn" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.840489 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4vtvh" event={"ID":"52bcca3a-bd10-425e-bc7f-f78c8c4a0271","Type":"ContainerDied","Data":"d9fac24447701126e9a237d1bf5d69bcfb5c81fad7661f19664ae4562c54f208"} Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.840505 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9fac24447701126e9a237d1bf5d69bcfb5c81fad7661f19664ae4562c54f208" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.840520 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-4vtvh" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.841936 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gnwt6-config-zcpwv" event={"ID":"8e30801a-f333-4f24-b301-4e03b644b07b","Type":"ContainerDied","Data":"7c314f2f80dc7cc4e88b76ae835a24e87396cc9afa36bc502acc09f41465ff1c"} Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.841955 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c314f2f80dc7cc4e88b76ae835a24e87396cc9afa36bc502acc09f41465ff1c" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.842043 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.850419 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5429-account-create-update-kc9f7" event={"ID":"ec1091b0-0c0e-40a9-9131-93d8e912d0af","Type":"ContainerDied","Data":"1d74ef03c10f467ffa66ef9e94663926540a5700594d348d3814bd28d77786fa"} Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.850456 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5429-account-create-update-kc9f7" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.850467 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d74ef03c10f467ffa66ef9e94663926540a5700594d348d3814bd28d77786fa" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.851488 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qrg9s" event={"ID":"4f4b546d-a206-4e15-b21b-850ef44aac79","Type":"ContainerDied","Data":"377ab0d8abc56dfea3857798a60589d54b8af4b0a3e0ecfc809ded51a854707d"} Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.851514 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="377ab0d8abc56dfea3857798a60589d54b8af4b0a3e0ecfc809ded51a854707d" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.851592 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qrg9s" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.854528 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"c9f8c6f25f2067d893987b366765d078076a7f16dd40a56558cdc7ca6436f346"} Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.864717 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgvc4\" (UniqueName: \"kubernetes.io/projected/ccfe42cb-9794-449c-8ad8-54d68bf21607-kube-api-access-cgvc4\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:46 crc kubenswrapper[5136]: I0320 07:11:46.625067 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gnwt6-config-zcpwv"] Mar 20 07:11:46 crc kubenswrapper[5136]: I0320 07:11:46.632923 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-gnwt6-config-zcpwv"] Mar 20 07:11:46 crc kubenswrapper[5136]: I0320 07:11:46.969329 5136 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-gpk5l"] Mar 20 07:11:46 crc kubenswrapper[5136]: I0320 07:11:46.979206 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-gpk5l"] Mar 20 07:11:48 crc kubenswrapper[5136]: I0320 07:11:48.408386 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01c3454a-7ea9-4c46-9fc5-1cec3a2d445b" path="/var/lib/kubelet/pods/01c3454a-7ea9-4c46-9fc5-1cec3a2d445b/volumes" Mar 20 07:11:48 crc kubenswrapper[5136]: I0320 07:11:48.409332 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e30801a-f333-4f24-b301-4e03b644b07b" path="/var/lib/kubelet/pods/8e30801a-f333-4f24-b301-4e03b644b07b/volumes" Mar 20 07:11:50 crc kubenswrapper[5136]: I0320 07:11:50.916628 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"e8ad73e1a0dc1afcb0ef9f187b10091e6f56e3fa6f0fda3383c1a9da5f7aff67"} Mar 20 07:11:50 crc kubenswrapper[5136]: I0320 07:11:50.917007 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"cb5954574bc442ce46a1c3ab46d37e5df7b891264fc62e27a9808855f5912da6"} Mar 20 07:11:50 crc kubenswrapper[5136]: I0320 07:11:50.917024 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"8dbe0818cd9fc5f247918e143232a8d1b8d59f91b26bb44407c84cb8783ff2a3"} Mar 20 07:11:50 crc kubenswrapper[5136]: I0320 07:11:50.917037 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"2781e040da54a91c2e8f03b5afbcb3150c3df4ebd12d977152902fa44468e5f3"} Mar 20 07:11:50 crc 
kubenswrapper[5136]: I0320 07:11:50.919709 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vs5ks" event={"ID":"c7e7cfea-b971-447e-a166-20b4827ce7dc","Type":"ContainerStarted","Data":"ca34610c300fb63b0b8b7fa75b8b5e36ec0f7e9d15dbda229381348a1e3e55be"} Mar 20 07:11:50 crc kubenswrapper[5136]: I0320 07:11:50.944539 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-vs5ks" podStartSLOduration=4.032194541 podStartE2EDuration="10.944518433s" podCreationTimestamp="2026-03-20 07:11:40 +0000 UTC" firstStartedPulling="2026-03-20 07:11:43.044420769 +0000 UTC m=+1335.303731920" lastFinishedPulling="2026-03-20 07:11:49.956744661 +0000 UTC m=+1342.216055812" observedRunningTime="2026-03-20 07:11:50.935225651 +0000 UTC m=+1343.194536802" watchObservedRunningTime="2026-03-20 07:11:50.944518433 +0000 UTC m=+1343.203829604" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.032854 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-b5fwk"] Mar 20 07:11:52 crc kubenswrapper[5136]: E0320 07:11:52.033512 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec1091b0-0c0e-40a9-9131-93d8e912d0af" containerName="mariadb-account-create-update" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033528 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec1091b0-0c0e-40a9-9131-93d8e912d0af" containerName="mariadb-account-create-update" Mar 20 07:11:52 crc kubenswrapper[5136]: E0320 07:11:52.033547 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52bcca3a-bd10-425e-bc7f-f78c8c4a0271" containerName="mariadb-database-create" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033555 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="52bcca3a-bd10-425e-bc7f-f78c8c4a0271" containerName="mariadb-database-create" Mar 20 07:11:52 crc kubenswrapper[5136]: E0320 07:11:52.033572 5136 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e30801a-f333-4f24-b301-4e03b644b07b" containerName="ovn-config" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033579 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e30801a-f333-4f24-b301-4e03b644b07b" containerName="ovn-config" Mar 20 07:11:52 crc kubenswrapper[5136]: E0320 07:11:52.033591 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c3454a-7ea9-4c46-9fc5-1cec3a2d445b" containerName="mariadb-account-create-update" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033598 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c3454a-7ea9-4c46-9fc5-1cec3a2d445b" containerName="mariadb-account-create-update" Mar 20 07:11:52 crc kubenswrapper[5136]: E0320 07:11:52.033610 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f4b546d-a206-4e15-b21b-850ef44aac79" containerName="mariadb-database-create" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033617 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f4b546d-a206-4e15-b21b-850ef44aac79" containerName="mariadb-database-create" Mar 20 07:11:52 crc kubenswrapper[5136]: E0320 07:11:52.033632 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ba128a-ff3d-42a9-aa76-04e60b3a2cb5" containerName="mariadb-database-create" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033640 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ba128a-ff3d-42a9-aa76-04e60b3a2cb5" containerName="mariadb-database-create" Mar 20 07:11:52 crc kubenswrapper[5136]: E0320 07:11:52.033663 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccfe42cb-9794-449c-8ad8-54d68bf21607" containerName="mariadb-account-create-update" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033670 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccfe42cb-9794-449c-8ad8-54d68bf21607" containerName="mariadb-account-create-update" Mar 
20 07:11:52 crc kubenswrapper[5136]: E0320 07:11:52.033683 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2b269d7-6c83-46fd-b85c-5d9dba5ccbda" containerName="mariadb-account-create-update" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033690 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2b269d7-6c83-46fd-b85c-5d9dba5ccbda" containerName="mariadb-account-create-update" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033879 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ba128a-ff3d-42a9-aa76-04e60b3a2cb5" containerName="mariadb-database-create" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033895 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e30801a-f333-4f24-b301-4e03b644b07b" containerName="ovn-config" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033910 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec1091b0-0c0e-40a9-9131-93d8e912d0af" containerName="mariadb-account-create-update" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033920 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="52bcca3a-bd10-425e-bc7f-f78c8c4a0271" containerName="mariadb-database-create" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033931 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccfe42cb-9794-449c-8ad8-54d68bf21607" containerName="mariadb-account-create-update" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033942 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2b269d7-6c83-46fd-b85c-5d9dba5ccbda" containerName="mariadb-account-create-update" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033957 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f4b546d-a206-4e15-b21b-850ef44aac79" containerName="mariadb-database-create" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033967 5136 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="01c3454a-7ea9-4c46-9fc5-1cec3a2d445b" containerName="mariadb-account-create-update" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.034563 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-b5fwk" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.040138 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.048595 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-b5fwk"] Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.171584 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9bjh\" (UniqueName: \"kubernetes.io/projected/c44ee109-b721-41c2-bc45-8c6097d31402-kube-api-access-b9bjh\") pod \"root-account-create-update-b5fwk\" (UID: \"c44ee109-b721-41c2-bc45-8c6097d31402\") " pod="openstack/root-account-create-update-b5fwk" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.171653 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c44ee109-b721-41c2-bc45-8c6097d31402-operator-scripts\") pod \"root-account-create-update-b5fwk\" (UID: \"c44ee109-b721-41c2-bc45-8c6097d31402\") " pod="openstack/root-account-create-update-b5fwk" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.272950 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9bjh\" (UniqueName: \"kubernetes.io/projected/c44ee109-b721-41c2-bc45-8c6097d31402-kube-api-access-b9bjh\") pod \"root-account-create-update-b5fwk\" (UID: \"c44ee109-b721-41c2-bc45-8c6097d31402\") " pod="openstack/root-account-create-update-b5fwk" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.273016 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c44ee109-b721-41c2-bc45-8c6097d31402-operator-scripts\") pod \"root-account-create-update-b5fwk\" (UID: \"c44ee109-b721-41c2-bc45-8c6097d31402\") " pod="openstack/root-account-create-update-b5fwk" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.273938 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c44ee109-b721-41c2-bc45-8c6097d31402-operator-scripts\") pod \"root-account-create-update-b5fwk\" (UID: \"c44ee109-b721-41c2-bc45-8c6097d31402\") " pod="openstack/root-account-create-update-b5fwk" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.292550 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9bjh\" (UniqueName: \"kubernetes.io/projected/c44ee109-b721-41c2-bc45-8c6097d31402-kube-api-access-b9bjh\") pod \"root-account-create-update-b5fwk\" (UID: \"c44ee109-b721-41c2-bc45-8c6097d31402\") " pod="openstack/root-account-create-update-b5fwk" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.368333 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-b5fwk" Mar 20 07:11:53 crc kubenswrapper[5136]: I0320 07:11:53.088162 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-b5fwk"] Mar 20 07:11:53 crc kubenswrapper[5136]: W0320 07:11:53.097664 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc44ee109_b721_41c2_bc45_8c6097d31402.slice/crio-893ce5f8e661407b1b1e15c6feb88baeefabeadae73be42bde85ab395691ed3d WatchSource:0}: Error finding container 893ce5f8e661407b1b1e15c6feb88baeefabeadae73be42bde85ab395691ed3d: Status 404 returned error can't find the container with id 893ce5f8e661407b1b1e15c6feb88baeefabeadae73be42bde85ab395691ed3d Mar 20 07:11:53 crc kubenswrapper[5136]: I0320 07:11:53.949887 5136 generic.go:334] "Generic (PLEG): container finished" podID="c7e7cfea-b971-447e-a166-20b4827ce7dc" containerID="ca34610c300fb63b0b8b7fa75b8b5e36ec0f7e9d15dbda229381348a1e3e55be" exitCode=0 Mar 20 07:11:53 crc kubenswrapper[5136]: I0320 07:11:53.950100 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vs5ks" event={"ID":"c7e7cfea-b971-447e-a166-20b4827ce7dc","Type":"ContainerDied","Data":"ca34610c300fb63b0b8b7fa75b8b5e36ec0f7e9d15dbda229381348a1e3e55be"} Mar 20 07:11:53 crc kubenswrapper[5136]: I0320 07:11:53.951915 5136 generic.go:334] "Generic (PLEG): container finished" podID="c44ee109-b721-41c2-bc45-8c6097d31402" containerID="80afd4ebec7d57a2a5f4e5804fe0cafa6290530e8266af5fe943abb82f8b0a3e" exitCode=0 Mar 20 07:11:53 crc kubenswrapper[5136]: I0320 07:11:53.951977 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-b5fwk" event={"ID":"c44ee109-b721-41c2-bc45-8c6097d31402","Type":"ContainerDied","Data":"80afd4ebec7d57a2a5f4e5804fe0cafa6290530e8266af5fe943abb82f8b0a3e"} Mar 20 07:11:53 crc kubenswrapper[5136]: I0320 07:11:53.952000 5136 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-b5fwk" event={"ID":"c44ee109-b721-41c2-bc45-8c6097d31402","Type":"ContainerStarted","Data":"893ce5f8e661407b1b1e15c6feb88baeefabeadae73be42bde85ab395691ed3d"} Mar 20 07:11:53 crc kubenswrapper[5136]: I0320 07:11:53.963505 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"32404a19502351fbc3ac4a39615a6d3615a32ac961dda04a10694ca4139bf949"} Mar 20 07:11:53 crc kubenswrapper[5136]: I0320 07:11:53.963539 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"34db5f0f26eb4ddebb1606a5b4d86660e0c87c646601cd77837816ebb46b09a0"} Mar 20 07:11:53 crc kubenswrapper[5136]: I0320 07:11:53.963551 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"81b02283172458a2bdc8e165bbbbeb7ea96a03cbbe39b13a588b28c223890c2c"} Mar 20 07:11:53 crc kubenswrapper[5136]: I0320 07:11:53.963563 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"1b16b20dbd8d4539f67032ae3d013ca240799b991d0700b6f5acd362ea9ea532"} Mar 20 07:11:53 crc kubenswrapper[5136]: I0320 07:11:53.963577 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"9683c79e385aab29220bb43ba7f0e910b9a5950fe8b31997f17e20626e70a786"} Mar 20 07:11:53 crc kubenswrapper[5136]: I0320 07:11:53.963592 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"09326f1cbbdb488bd8aa1fff37c4c2948117647badc40e244ff6f4905e759d43"} Mar 20 07:11:54 crc kubenswrapper[5136]: I0320 07:11:54.992135 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"f70c7805bed8025d369139af1fcef9a0696163fd95d836d75eb51737f5c49f2a"} Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.309522 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=30.522408749 podStartE2EDuration="40.309506201s" podCreationTimestamp="2026-03-20 07:11:15 +0000 UTC" firstStartedPulling="2026-03-20 07:11:42.969291889 +0000 UTC m=+1335.228603040" lastFinishedPulling="2026-03-20 07:11:52.756389341 +0000 UTC m=+1345.015700492" observedRunningTime="2026-03-20 07:11:55.046883252 +0000 UTC m=+1347.306194423" watchObservedRunningTime="2026-03-20 07:11:55.309506201 +0000 UTC m=+1347.568817352" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.318317 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cb65b4b5-wv44d"] Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.319627 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.324188 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.350688 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb65b4b5-wv44d"] Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.367248 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-b5fwk" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.383586 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vs5ks" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.433505 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ch26\" (UniqueName: \"kubernetes.io/projected/d3270013-a1f2-43bd-8f40-38b10b4253a1-kube-api-access-2ch26\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.433549 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-dns-svc\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.433662 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-ovsdbserver-nb\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.433679 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-config\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.433697 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-ovsdbserver-sb\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.433801 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-dns-swift-storage-0\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.535526 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e7cfea-b971-447e-a166-20b4827ce7dc-combined-ca-bundle\") pod \"c7e7cfea-b971-447e-a166-20b4827ce7dc\" (UID: \"c7e7cfea-b971-447e-a166-20b4827ce7dc\") " Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.535831 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c44ee109-b721-41c2-bc45-8c6097d31402-operator-scripts\") pod \"c44ee109-b721-41c2-bc45-8c6097d31402\" (UID: \"c44ee109-b721-41c2-bc45-8c6097d31402\") " Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.535910 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9bjh\" (UniqueName: \"kubernetes.io/projected/c44ee109-b721-41c2-bc45-8c6097d31402-kube-api-access-b9bjh\") pod \"c44ee109-b721-41c2-bc45-8c6097d31402\" (UID: \"c44ee109-b721-41c2-bc45-8c6097d31402\") " Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.535940 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh2l2\" (UniqueName: 
\"kubernetes.io/projected/c7e7cfea-b971-447e-a166-20b4827ce7dc-kube-api-access-mh2l2\") pod \"c7e7cfea-b971-447e-a166-20b4827ce7dc\" (UID: \"c7e7cfea-b971-447e-a166-20b4827ce7dc\") " Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.535959 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e7cfea-b971-447e-a166-20b4827ce7dc-config-data\") pod \"c7e7cfea-b971-447e-a166-20b4827ce7dc\" (UID: \"c7e7cfea-b971-447e-a166-20b4827ce7dc\") " Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.536126 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ch26\" (UniqueName: \"kubernetes.io/projected/d3270013-a1f2-43bd-8f40-38b10b4253a1-kube-api-access-2ch26\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.536157 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-dns-svc\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.536273 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-ovsdbserver-nb\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.536288 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-config\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: 
\"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.536306 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-ovsdbserver-sb\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.536335 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-dns-swift-storage-0\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.537264 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-dns-swift-storage-0\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.538689 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c44ee109-b721-41c2-bc45-8c6097d31402-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c44ee109-b721-41c2-bc45-8c6097d31402" (UID: "c44ee109-b721-41c2-bc45-8c6097d31402"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.544503 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c44ee109-b721-41c2-bc45-8c6097d31402-kube-api-access-b9bjh" (OuterVolumeSpecName: "kube-api-access-b9bjh") pod "c44ee109-b721-41c2-bc45-8c6097d31402" (UID: "c44ee109-b721-41c2-bc45-8c6097d31402"). InnerVolumeSpecName "kube-api-access-b9bjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.544973 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-ovsdbserver-nb\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.545515 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-dns-svc\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.545806 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-ovsdbserver-sb\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.546346 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-config\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" 
Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.563022 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7e7cfea-b971-447e-a166-20b4827ce7dc-kube-api-access-mh2l2" (OuterVolumeSpecName: "kube-api-access-mh2l2") pod "c7e7cfea-b971-447e-a166-20b4827ce7dc" (UID: "c7e7cfea-b971-447e-a166-20b4827ce7dc"). InnerVolumeSpecName "kube-api-access-mh2l2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.582960 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e7cfea-b971-447e-a166-20b4827ce7dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7e7cfea-b971-447e-a166-20b4827ce7dc" (UID: "c7e7cfea-b971-447e-a166-20b4827ce7dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.584996 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ch26\" (UniqueName: \"kubernetes.io/projected/d3270013-a1f2-43bd-8f40-38b10b4253a1-kube-api-access-2ch26\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.625002 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e7cfea-b971-447e-a166-20b4827ce7dc-config-data" (OuterVolumeSpecName: "config-data") pod "c7e7cfea-b971-447e-a166-20b4827ce7dc" (UID: "c7e7cfea-b971-447e-a166-20b4827ce7dc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.637844 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9bjh\" (UniqueName: \"kubernetes.io/projected/c44ee109-b721-41c2-bc45-8c6097d31402-kube-api-access-b9bjh\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.638015 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh2l2\" (UniqueName: \"kubernetes.io/projected/c7e7cfea-b971-447e-a166-20b4827ce7dc-kube-api-access-mh2l2\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.638070 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e7cfea-b971-447e-a166-20b4827ce7dc-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.638117 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e7cfea-b971-447e-a166-20b4827ce7dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.638195 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c44ee109-b721-41c2-bc45-8c6097d31402-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.684210 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.011605 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vs5ks" event={"ID":"c7e7cfea-b971-447e-a166-20b4827ce7dc","Type":"ContainerDied","Data":"30980e34c4254b1c4f948141d08320685947985bfdf2f9e08996624f149427ee"} Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.011908 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30980e34c4254b1c4f948141d08320685947985bfdf2f9e08996624f149427ee" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.011618 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vs5ks" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.013451 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-b5fwk" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.014924 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-b5fwk" event={"ID":"c44ee109-b721-41c2-bc45-8c6097d31402","Type":"ContainerDied","Data":"893ce5f8e661407b1b1e15c6feb88baeefabeadae73be42bde85ab395691ed3d"} Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.014955 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="893ce5f8e661407b1b1e15c6feb88baeefabeadae73be42bde85ab395691ed3d" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.144124 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb65b4b5-wv44d"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.158484 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cb65b4b5-wv44d"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.202782 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-fvfgp"] Mar 20 
07:11:56 crc kubenswrapper[5136]: E0320 07:11:56.203572 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44ee109-b721-41c2-bc45-8c6097d31402" containerName="mariadb-account-create-update" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.203637 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44ee109-b721-41c2-bc45-8c6097d31402" containerName="mariadb-account-create-update" Mar 20 07:11:56 crc kubenswrapper[5136]: E0320 07:11:56.203707 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e7cfea-b971-447e-a166-20b4827ce7dc" containerName="keystone-db-sync" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.203771 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e7cfea-b971-447e-a166-20b4827ce7dc" containerName="keystone-db-sync" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.203979 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c44ee109-b721-41c2-bc45-8c6097d31402" containerName="mariadb-account-create-update" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.204047 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e7cfea-b971-447e-a166-20b4827ce7dc" containerName="keystone-db-sync" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.204594 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.208214 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.208547 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.208782 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.209015 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.209143 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6hlpf" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.216065 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ff8446d97-wzbhw"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.217335 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.223643 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fvfgp"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.240541 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ff8446d97-wzbhw"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.355797 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-config-data\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.355867 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-combined-ca-bundle\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.355891 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-ovsdbserver-nb\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.355908 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-ovsdbserver-sb\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " 
pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.355975 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6k46\" (UniqueName: \"kubernetes.io/projected/ceda50a9-97c2-4310-b7ab-444024c33a87-kube-api-access-z6k46\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.355989 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jgz5\" (UniqueName: \"kubernetes.io/projected/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-kube-api-access-2jgz5\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.356008 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-fernet-keys\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.356029 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-scripts\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.356062 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-dns-svc\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: 
\"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.356082 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-credential-keys\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.356101 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-dns-swift-storage-0\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.356116 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-config\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.458499 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-llt2h"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.459552 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.461248 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-llt2h"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.461397 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-credential-keys\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.461430 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-dns-swift-storage-0\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.461451 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-config\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.461470 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-config-data\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.461495 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-combined-ca-bundle\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.461514 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-ovsdbserver-nb\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.461531 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-ovsdbserver-sb\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.461592 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6k46\" (UniqueName: \"kubernetes.io/projected/ceda50a9-97c2-4310-b7ab-444024c33a87-kube-api-access-z6k46\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.461606 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jgz5\" (UniqueName: \"kubernetes.io/projected/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-kube-api-access-2jgz5\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.461624 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-fernet-keys\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.461645 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-scripts\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.461681 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-dns-svc\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.463175 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-dns-svc\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.463903 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-config\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.464627 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-ovsdbserver-sb\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: 
\"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.464913 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-ovsdbserver-nb\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.471850 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ps866" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.472320 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.472652 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.473524 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-dns-swift-storage-0\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.480756 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-credential-keys\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.489531 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-combined-ca-bundle\") pod 
\"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.489784 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-scripts\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.489985 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-config-data\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.516606 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-fernet-keys\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.523209 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-kxk7p"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.538078 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-kxk7p" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.538715 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jgz5\" (UniqueName: \"kubernetes.io/projected/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-kube-api-access-2jgz5\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.542793 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6k46\" (UniqueName: \"kubernetes.io/projected/ceda50a9-97c2-4310-b7ab-444024c33a87-kube-api-access-z6k46\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.544753 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.545230 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-scqxf" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.547014 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.573320 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-scripts\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.573702 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-db-sync-config-data\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.573764 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4skhx\" (UniqueName: \"kubernetes.io/projected/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-kube-api-access-4skhx\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.575777 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.576796 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-etc-machine-id\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.577124 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-combined-ca-bundle\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.577251 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-config-data\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: 
I0320 07:11:56.587173 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-n6cqg"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.588846 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.597285 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.597865 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4t29g" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.598087 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.617151 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.675934 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kxk7p"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688242 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-scripts\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688289 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-config\") pod \"neutron-db-sync-kxk7p\" (UID: \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\") " pod="openstack/neutron-db-sync-kxk7p" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688326 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp59n\" (UniqueName: \"kubernetes.io/projected/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-kube-api-access-fp59n\") pod \"neutron-db-sync-kxk7p\" (UID: \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\") " pod="openstack/neutron-db-sync-kxk7p" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688392 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-combined-ca-bundle\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688437 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61300b5b-7c36-4857-a0bf-631bf3cbb001-logs\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688458 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt5bh\" (UniqueName: \"kubernetes.io/projected/61300b5b-7c36-4857-a0bf-631bf3cbb001-kube-api-access-kt5bh\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688489 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-config-data\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688524 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-combined-ca-bundle\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688543 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-config-data\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688561 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-scripts\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688589 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-db-sync-config-data\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688633 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4skhx\" (UniqueName: \"kubernetes.io/projected/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-kube-api-access-4skhx\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688654 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-combined-ca-bundle\") pod \"neutron-db-sync-kxk7p\" (UID: \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\") " pod="openstack/neutron-db-sync-kxk7p" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688674 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-etc-machine-id\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688739 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-etc-machine-id\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.692355 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-db-sync-config-data\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.698194 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-scripts\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.698942 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-config-data\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " 
pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.699156 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-combined-ca-bundle\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.707088 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-jv7f9"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.708424 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jv7f9" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.712511 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.712770 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tz5pc" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.720511 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ff8446d97-wzbhw"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.724588 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4skhx\" (UniqueName: \"kubernetes.io/projected/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-kube-api-access-4skhx\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.727608 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-jv7f9"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.739004 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-n6cqg"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 
07:11:56.766095 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7ff6d84665-6bvps"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.768188 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff6d84665-6bvps"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.768323 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.778172 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.780100 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.781692 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.782033 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.790246 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-combined-ca-bundle\") pod \"neutron-db-sync-kxk7p\" (UID: \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\") " pod="openstack/neutron-db-sync-kxk7p" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.790297 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s9rd\" (UniqueName: \"kubernetes.io/projected/16f28a76-f7a5-4980-a693-7bd078f3c128-kube-api-access-6s9rd\") pod \"barbican-db-sync-jv7f9\" (UID: \"16f28a76-f7a5-4980-a693-7bd078f3c128\") " pod="openstack/barbican-db-sync-jv7f9" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.790352 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-config\") pod \"neutron-db-sync-kxk7p\" (UID: \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\") " pod="openstack/neutron-db-sync-kxk7p" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.790370 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-scripts\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.790391 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp59n\" (UniqueName: \"kubernetes.io/projected/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-kube-api-access-fp59n\") pod \"neutron-db-sync-kxk7p\" (UID: \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\") " pod="openstack/neutron-db-sync-kxk7p" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.790427 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61300b5b-7c36-4857-a0bf-631bf3cbb001-logs\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.790448 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt5bh\" (UniqueName: \"kubernetes.io/projected/61300b5b-7c36-4857-a0bf-631bf3cbb001-kube-api-access-kt5bh\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.790492 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/16f28a76-f7a5-4980-a693-7bd078f3c128-combined-ca-bundle\") pod \"barbican-db-sync-jv7f9\" (UID: \"16f28a76-f7a5-4980-a693-7bd078f3c128\") " pod="openstack/barbican-db-sync-jv7f9" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.790520 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-combined-ca-bundle\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.790540 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-config-data\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.790567 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/16f28a76-f7a5-4980-a693-7bd078f3c128-db-sync-config-data\") pod \"barbican-db-sync-jv7f9\" (UID: \"16f28a76-f7a5-4980-a693-7bd078f3c128\") " pod="openstack/barbican-db-sync-jv7f9" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.794033 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.799597 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-config\") pod \"neutron-db-sync-kxk7p\" (UID: \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\") " pod="openstack/neutron-db-sync-kxk7p" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.801168 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-combined-ca-bundle\") pod \"neutron-db-sync-kxk7p\" (UID: \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\") " pod="openstack/neutron-db-sync-kxk7p" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.803073 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61300b5b-7c36-4857-a0bf-631bf3cbb001-logs\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.807600 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-combined-ca-bundle\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.807685 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-scripts\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.808086 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-config-data\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.836608 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt5bh\" (UniqueName: \"kubernetes.io/projected/61300b5b-7c36-4857-a0bf-631bf3cbb001-kube-api-access-kt5bh\") pod \"placement-db-sync-n6cqg\" (UID: 
\"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.837029 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp59n\" (UniqueName: \"kubernetes.io/projected/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-kube-api-access-fp59n\") pod \"neutron-db-sync-kxk7p\" (UID: \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\") " pod="openstack/neutron-db-sync-kxk7p" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.893374 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/16f28a76-f7a5-4980-a693-7bd078f3c128-db-sync-config-data\") pod \"barbican-db-sync-jv7f9\" (UID: \"16f28a76-f7a5-4980-a693-7bd078f3c128\") " pod="openstack/barbican-db-sync-jv7f9" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.893635 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.893657 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-scripts\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.893671 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" 
Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.893686 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff72278d-b5e7-427b-8581-52ff89c57176-run-httpd\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.893703 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-dns-svc\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.893722 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff72278d-b5e7-427b-8581-52ff89c57176-log-httpd\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.893749 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.893782 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s9rd\" (UniqueName: \"kubernetes.io/projected/16f28a76-f7a5-4980-a693-7bd078f3c128-kube-api-access-6s9rd\") pod \"barbican-db-sync-jv7f9\" (UID: \"16f28a76-f7a5-4980-a693-7bd078f3c128\") " pod="openstack/barbican-db-sync-jv7f9" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.893798 
5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7dlp\" (UniqueName: \"kubernetes.io/projected/98a77e70-cc82-4a51-8475-d003a0ccf43e-kube-api-access-n7dlp\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.893855 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mmcd\" (UniqueName: \"kubernetes.io/projected/ff72278d-b5e7-427b-8581-52ff89c57176-kube-api-access-6mmcd\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.894203 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f28a76-f7a5-4980-a693-7bd078f3c128-combined-ca-bundle\") pod \"barbican-db-sync-jv7f9\" (UID: \"16f28a76-f7a5-4980-a693-7bd078f3c128\") " pod="openstack/barbican-db-sync-jv7f9" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.894253 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.894295 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-config\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.894311 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.894357 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-config-data\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.905440 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.911741 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/16f28a76-f7a5-4980-a693-7bd078f3c128-db-sync-config-data\") pod \"barbican-db-sync-jv7f9\" (UID: \"16f28a76-f7a5-4980-a693-7bd078f3c128\") " pod="openstack/barbican-db-sync-jv7f9" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.920201 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f28a76-f7a5-4980-a693-7bd078f3c128-combined-ca-bundle\") pod \"barbican-db-sync-jv7f9\" (UID: \"16f28a76-f7a5-4980-a693-7bd078f3c128\") " pod="openstack/barbican-db-sync-jv7f9" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.925715 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s9rd\" (UniqueName: \"kubernetes.io/projected/16f28a76-f7a5-4980-a693-7bd078f3c128-kube-api-access-6s9rd\") pod \"barbican-db-sync-jv7f9\" (UID: \"16f28a76-f7a5-4980-a693-7bd078f3c128\") " pod="openstack/barbican-db-sync-jv7f9" Mar 20 
07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.935676 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kxk7p" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.950207 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.996519 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.996564 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.996585 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-config\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.996601 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-config-data\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.996648 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.996674 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.996691 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-scripts\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.996707 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff72278d-b5e7-427b-8581-52ff89c57176-run-httpd\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.996721 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-dns-svc\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.996739 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff72278d-b5e7-427b-8581-52ff89c57176-log-httpd\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:56 crc 
kubenswrapper[5136]: I0320 07:11:56.996769 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.996801 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7dlp\" (UniqueName: \"kubernetes.io/projected/98a77e70-cc82-4a51-8475-d003a0ccf43e-kube-api-access-n7dlp\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.996872 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mmcd\" (UniqueName: \"kubernetes.io/projected/ff72278d-b5e7-427b-8581-52ff89c57176-kube-api-access-6mmcd\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.001017 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.002297 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.002905 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-config\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.004295 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff72278d-b5e7-427b-8581-52ff89c57176-run-httpd\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.004834 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.006210 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff72278d-b5e7-427b-8581-52ff89c57176-log-httpd\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.006951 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-dns-svc\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.007334 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: 
\"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.008009 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-config-data\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.011392 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-scripts\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.021256 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.027965 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7dlp\" (UniqueName: \"kubernetes.io/projected/98a77e70-cc82-4a51-8475-d003a0ccf43e-kube-api-access-n7dlp\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.028391 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mmcd\" (UniqueName: \"kubernetes.io/projected/ff72278d-b5e7-427b-8581-52ff89c57176-kube-api-access-6mmcd\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.051569 5136 generic.go:334] "Generic (PLEG): container finished" 
podID="d3270013-a1f2-43bd-8f40-38b10b4253a1" containerID="bf2834c66b5c522715063b4ba5f30618173a273c2bbf5480ab6f45f292898e0c" exitCode=0 Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.051669 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" event={"ID":"d3270013-a1f2-43bd-8f40-38b10b4253a1","Type":"ContainerDied","Data":"bf2834c66b5c522715063b4ba5f30618173a273c2bbf5480ab6f45f292898e0c"} Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.051696 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" event={"ID":"d3270013-a1f2-43bd-8f40-38b10b4253a1","Type":"ContainerStarted","Data":"446c6c3ae62c18c1fedd61d3a75b7402468673bf89bbfafebbd571740ddaa669"} Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.056347 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jv7f9" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.063896 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ldzkm" event={"ID":"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18","Type":"ContainerStarted","Data":"dd50ea3e8d708d6e3b7b256a2ea07c9211cc4921a494661346061b12daf9a3f3"} Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.106475 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-ldzkm" podStartSLOduration=2.216441305 podStartE2EDuration="33.106455596s" podCreationTimestamp="2026-03-20 07:11:24 +0000 UTC" firstStartedPulling="2026-03-20 07:11:24.981163222 +0000 UTC m=+1317.240474373" lastFinishedPulling="2026-03-20 07:11:55.871177513 +0000 UTC m=+1348.130488664" observedRunningTime="2026-03-20 07:11:57.097664979 +0000 UTC m=+1349.356976130" watchObservedRunningTime="2026-03-20 07:11:57.106455596 +0000 UTC m=+1349.365766747" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.154304 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.165309 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.319203 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fvfgp"] Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.501517 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ff8446d97-wzbhw"] Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.706893 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kxk7p"] Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.724628 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.735867 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-n6cqg"] Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.760148 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-llt2h"] Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.834547 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ch26\" (UniqueName: \"kubernetes.io/projected/d3270013-a1f2-43bd-8f40-38b10b4253a1-kube-api-access-2ch26\") pod \"d3270013-a1f2-43bd-8f40-38b10b4253a1\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.834695 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-ovsdbserver-sb\") pod \"d3270013-a1f2-43bd-8f40-38b10b4253a1\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 
07:11:57.834757 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-dns-svc\") pod \"d3270013-a1f2-43bd-8f40-38b10b4253a1\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.834845 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-ovsdbserver-nb\") pod \"d3270013-a1f2-43bd-8f40-38b10b4253a1\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.834886 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-config\") pod \"d3270013-a1f2-43bd-8f40-38b10b4253a1\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.834900 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-dns-swift-storage-0\") pod \"d3270013-a1f2-43bd-8f40-38b10b4253a1\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.842344 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3270013-a1f2-43bd-8f40-38b10b4253a1-kube-api-access-2ch26" (OuterVolumeSpecName: "kube-api-access-2ch26") pod "d3270013-a1f2-43bd-8f40-38b10b4253a1" (UID: "d3270013-a1f2-43bd-8f40-38b10b4253a1"). InnerVolumeSpecName "kube-api-access-2ch26". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.868201 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d3270013-a1f2-43bd-8f40-38b10b4253a1" (UID: "d3270013-a1f2-43bd-8f40-38b10b4253a1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.872361 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d3270013-a1f2-43bd-8f40-38b10b4253a1" (UID: "d3270013-a1f2-43bd-8f40-38b10b4253a1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.926854 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d3270013-a1f2-43bd-8f40-38b10b4253a1" (UID: "d3270013-a1f2-43bd-8f40-38b10b4253a1"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.937009 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ch26\" (UniqueName: \"kubernetes.io/projected/d3270013-a1f2-43bd-8f40-38b10b4253a1-kube-api-access-2ch26\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.937043 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.937053 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.937060 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.961317 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d3270013-a1f2-43bd-8f40-38b10b4253a1" (UID: "d3270013-a1f2-43bd-8f40-38b10b4253a1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.980553 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-config" (OuterVolumeSpecName: "config") pod "d3270013-a1f2-43bd-8f40-38b10b4253a1" (UID: "d3270013-a1f2-43bd-8f40-38b10b4253a1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.007628 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.043751 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.043787 5136 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.089511 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kxk7p" event={"ID":"4f5241dc-9fdc-4e75-9924-fb00a2e6119d","Type":"ContainerStarted","Data":"22bd79d8d32272633a42a92ee1e9e96d3d3259073a33ca0ea587ca787429e836"} Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.089558 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kxk7p" event={"ID":"4f5241dc-9fdc-4e75-9924-fb00a2e6119d","Type":"ContainerStarted","Data":"9c68bfb878007ca6b62fba813ddcba80cdee73ef5b85374234305710364f4b28"} Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.106842 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fvfgp" event={"ID":"eb6007d0-4c13-43d0-b1b9-9e452fa9357f","Type":"ContainerStarted","Data":"624feab47793180e2f843804104146a4e3de4528636c1ebc9f47f7172993b072"} Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.106895 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fvfgp" event={"ID":"eb6007d0-4c13-43d0-b1b9-9e452fa9357f","Type":"ContainerStarted","Data":"61d7579d5cf960da949624b7fe0fdc10cd43078fd38353bfd5da73acb2c3a781"} Mar 20 07:11:58 crc 
kubenswrapper[5136]: I0320 07:11:58.120643 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n6cqg" event={"ID":"61300b5b-7c36-4857-a0bf-631bf3cbb001","Type":"ContainerStarted","Data":"b5e8ad0f3ddd8fc1ff41409f21a655282a69b3d531e79a60604f206a303c07a7"} Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.122362 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff72278d-b5e7-427b-8581-52ff89c57176","Type":"ContainerStarted","Data":"b70abbe701b5afa37deb9280d8bab4f32e4ab209764879ee00b0808064143809"} Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.123785 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-llt2h" event={"ID":"2fc03366-82a1-4e30-a7e8-a06e16a8a14f","Type":"ContainerStarted","Data":"5200ac7ba7db438f0d107516ee128664a59b5fde08b29a5ce98e40e534824c47"} Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.125871 5136 generic.go:334] "Generic (PLEG): container finished" podID="ceda50a9-97c2-4310-b7ab-444024c33a87" containerID="e4b8e8eedb7b9d25ee4c3ed5740071573702f9421fc7bc697e1b313ba902496c" exitCode=0 Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.125960 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" event={"ID":"ceda50a9-97c2-4310-b7ab-444024c33a87","Type":"ContainerDied","Data":"e4b8e8eedb7b9d25ee4c3ed5740071573702f9421fc7bc697e1b313ba902496c"} Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.125992 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" event={"ID":"ceda50a9-97c2-4310-b7ab-444024c33a87","Type":"ContainerStarted","Data":"46bdbf8830a8054f8b2b9f249b53f0ef2256c1a4057d143ebe31897d3c9eecc6"} Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.134785 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-kxk7p" podStartSLOduration=2.134763168 
podStartE2EDuration="2.134763168s" podCreationTimestamp="2026-03-20 07:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:11:58.113184253 +0000 UTC m=+1350.372495404" watchObservedRunningTime="2026-03-20 07:11:58.134763168 +0000 UTC m=+1350.394074319" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.141462 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-fvfgp" podStartSLOduration=2.14144124 podStartE2EDuration="2.14144124s" podCreationTimestamp="2026-03-20 07:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:11:58.128830558 +0000 UTC m=+1350.388141699" watchObservedRunningTime="2026-03-20 07:11:58.14144124 +0000 UTC m=+1350.400752381" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.146741 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" event={"ID":"d3270013-a1f2-43bd-8f40-38b10b4253a1","Type":"ContainerDied","Data":"446c6c3ae62c18c1fedd61d3a75b7402468673bf89bbfafebbd571740ddaa669"} Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.146795 5136 scope.go:117] "RemoveContainer" containerID="bf2834c66b5c522715063b4ba5f30618173a273c2bbf5480ab6f45f292898e0c" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.146987 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.196739 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-jv7f9"] Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.247878 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cb65b4b5-wv44d"] Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.294224 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cb65b4b5-wv44d"] Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.344242 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff6d84665-6bvps"] Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.466274 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3270013-a1f2-43bd-8f40-38b10b4253a1" path="/var/lib/kubelet/pods/d3270013-a1f2-43bd-8f40-38b10b4253a1/volumes" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.655073 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.661158 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.763599 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-dns-swift-storage-0\") pod \"ceda50a9-97c2-4310-b7ab-444024c33a87\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.763660 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-dns-svc\") pod \"ceda50a9-97c2-4310-b7ab-444024c33a87\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.763742 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-config\") pod \"ceda50a9-97c2-4310-b7ab-444024c33a87\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.764936 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-ovsdbserver-nb\") pod \"ceda50a9-97c2-4310-b7ab-444024c33a87\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.765009 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-ovsdbserver-sb\") pod \"ceda50a9-97c2-4310-b7ab-444024c33a87\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " Mar 20 07:11:58 crc 
kubenswrapper[5136]: I0320 07:11:58.765057 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6k46\" (UniqueName: \"kubernetes.io/projected/ceda50a9-97c2-4310-b7ab-444024c33a87-kube-api-access-z6k46\") pod \"ceda50a9-97c2-4310-b7ab-444024c33a87\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.772914 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceda50a9-97c2-4310-b7ab-444024c33a87-kube-api-access-z6k46" (OuterVolumeSpecName: "kube-api-access-z6k46") pod "ceda50a9-97c2-4310-b7ab-444024c33a87" (UID: "ceda50a9-97c2-4310-b7ab-444024c33a87"). InnerVolumeSpecName "kube-api-access-z6k46". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.797547 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-config" (OuterVolumeSpecName: "config") pod "ceda50a9-97c2-4310-b7ab-444024c33a87" (UID: "ceda50a9-97c2-4310-b7ab-444024c33a87"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.799509 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ceda50a9-97c2-4310-b7ab-444024c33a87" (UID: "ceda50a9-97c2-4310-b7ab-444024c33a87"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.799646 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ceda50a9-97c2-4310-b7ab-444024c33a87" (UID: "ceda50a9-97c2-4310-b7ab-444024c33a87"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.800596 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ceda50a9-97c2-4310-b7ab-444024c33a87" (UID: "ceda50a9-97c2-4310-b7ab-444024c33a87"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.809770 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ceda50a9-97c2-4310-b7ab-444024c33a87" (UID: "ceda50a9-97c2-4310-b7ab-444024c33a87"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.867193 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.867226 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.867236 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6k46\" (UniqueName: \"kubernetes.io/projected/ceda50a9-97c2-4310-b7ab-444024c33a87-kube-api-access-z6k46\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.867246 5136 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.867255 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.867264 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:59 crc kubenswrapper[5136]: I0320 07:11:59.158034 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jv7f9" event={"ID":"16f28a76-f7a5-4980-a693-7bd078f3c128","Type":"ContainerStarted","Data":"a6c0c7cbe316e747e497558676af55967b6ed940767c0667d59d4da80f64920a"} Mar 20 07:11:59 crc 
kubenswrapper[5136]: I0320 07:11:59.161266 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:59 crc kubenswrapper[5136]: I0320 07:11:59.161264 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" event={"ID":"ceda50a9-97c2-4310-b7ab-444024c33a87","Type":"ContainerDied","Data":"46bdbf8830a8054f8b2b9f249b53f0ef2256c1a4057d143ebe31897d3c9eecc6"} Mar 20 07:11:59 crc kubenswrapper[5136]: I0320 07:11:59.161308 5136 scope.go:117] "RemoveContainer" containerID="e4b8e8eedb7b9d25ee4c3ed5740071573702f9421fc7bc697e1b313ba902496c" Mar 20 07:11:59 crc kubenswrapper[5136]: I0320 07:11:59.168544 5136 generic.go:334] "Generic (PLEG): container finished" podID="98a77e70-cc82-4a51-8475-d003a0ccf43e" containerID="d619f45ef012eb3e909a4c91d853e16f0aac41aa3f7c34e99d85f79a8050ee1a" exitCode=0 Mar 20 07:11:59 crc kubenswrapper[5136]: I0320 07:11:59.168589 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" event={"ID":"98a77e70-cc82-4a51-8475-d003a0ccf43e","Type":"ContainerDied","Data":"d619f45ef012eb3e909a4c91d853e16f0aac41aa3f7c34e99d85f79a8050ee1a"} Mar 20 07:11:59 crc kubenswrapper[5136]: I0320 07:11:59.168632 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" event={"ID":"98a77e70-cc82-4a51-8475-d003a0ccf43e","Type":"ContainerStarted","Data":"b87a47428636e1da6de88425ef519d1313ec7d6e857d9dadf3b1f683fef7c84b"} Mar 20 07:11:59 crc kubenswrapper[5136]: I0320 07:11:59.252576 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ff8446d97-wzbhw"] Mar 20 07:11:59 crc kubenswrapper[5136]: I0320 07:11:59.265229 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ff8446d97-wzbhw"] Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.143726 5136 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29566512-lrvjf"] Mar 20 07:12:00 crc kubenswrapper[5136]: E0320 07:12:00.144390 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3270013-a1f2-43bd-8f40-38b10b4253a1" containerName="init" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.144407 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3270013-a1f2-43bd-8f40-38b10b4253a1" containerName="init" Mar 20 07:12:00 crc kubenswrapper[5136]: E0320 07:12:00.144418 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceda50a9-97c2-4310-b7ab-444024c33a87" containerName="init" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.144425 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceda50a9-97c2-4310-b7ab-444024c33a87" containerName="init" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.144602 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceda50a9-97c2-4310-b7ab-444024c33a87" containerName="init" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.144624 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3270013-a1f2-43bd-8f40-38b10b4253a1" containerName="init" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.145162 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566512-lrvjf" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.158617 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.158691 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.158762 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.198320 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566512-lrvjf"] Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.201306 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" event={"ID":"98a77e70-cc82-4a51-8475-d003a0ccf43e","Type":"ContainerStarted","Data":"60532e8d3b2de1260b02b04d62ab8b4d0eed7842744135d5425179cf256cd7d4"} Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.202883 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.227919 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" podStartSLOduration=4.22788727 podStartE2EDuration="4.22788727s" podCreationTimestamp="2026-03-20 07:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:00.223052133 +0000 UTC m=+1352.482363304" watchObservedRunningTime="2026-03-20 07:12:00.22788727 +0000 UTC m=+1352.487198421" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.298205 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-xg925\" (UniqueName: \"kubernetes.io/projected/4ace6934-986e-463e-8e10-ea2d38d8657b-kube-api-access-xg925\") pod \"auto-csr-approver-29566512-lrvjf\" (UID: \"4ace6934-986e-463e-8e10-ea2d38d8657b\") " pod="openshift-infra/auto-csr-approver-29566512-lrvjf" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.400046 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg925\" (UniqueName: \"kubernetes.io/projected/4ace6934-986e-463e-8e10-ea2d38d8657b-kube-api-access-xg925\") pod \"auto-csr-approver-29566512-lrvjf\" (UID: \"4ace6934-986e-463e-8e10-ea2d38d8657b\") " pod="openshift-infra/auto-csr-approver-29566512-lrvjf" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.430135 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg925\" (UniqueName: \"kubernetes.io/projected/4ace6934-986e-463e-8e10-ea2d38d8657b-kube-api-access-xg925\") pod \"auto-csr-approver-29566512-lrvjf\" (UID: \"4ace6934-986e-463e-8e10-ea2d38d8657b\") " pod="openshift-infra/auto-csr-approver-29566512-lrvjf" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.440424 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceda50a9-97c2-4310-b7ab-444024c33a87" path="/var/lib/kubelet/pods/ceda50a9-97c2-4310-b7ab-444024c33a87/volumes" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.469927 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566512-lrvjf" Mar 20 07:12:02 crc kubenswrapper[5136]: I0320 07:12:02.225061 5136 generic.go:334] "Generic (PLEG): container finished" podID="eb6007d0-4c13-43d0-b1b9-9e452fa9357f" containerID="624feab47793180e2f843804104146a4e3de4528636c1ebc9f47f7172993b072" exitCode=0 Mar 20 07:12:02 crc kubenswrapper[5136]: I0320 07:12:02.225342 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fvfgp" event={"ID":"eb6007d0-4c13-43d0-b1b9-9e452fa9357f","Type":"ContainerDied","Data":"624feab47793180e2f843804104146a4e3de4528636c1ebc9f47f7172993b072"} Mar 20 07:12:03 crc kubenswrapper[5136]: I0320 07:12:03.108282 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566512-lrvjf"] Mar 20 07:12:07 crc kubenswrapper[5136]: I0320 07:12:07.156981 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:12:07 crc kubenswrapper[5136]: I0320 07:12:07.215722 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"] Mar 20 07:12:07 crc kubenswrapper[5136]: I0320 07:12:07.216142 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" podUID="d103abed-83b7-44e9-bc7f-786434426647" containerName="dnsmasq-dns" containerID="cri-o://604b652a792660f1238e2607b4242155d6fa3281d34ce55590b668cd26222f1b" gracePeriod=10 Mar 20 07:12:07 crc kubenswrapper[5136]: W0320 07:12:07.811517 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ace6934_986e_463e_8e10_ea2d38d8657b.slice/crio-568d79cc1dd46d8965f516658166d9e10f03484afb5a53024438fcc72337da1f WatchSource:0}: Error finding container 568d79cc1dd46d8965f516658166d9e10f03484afb5a53024438fcc72337da1f: Status 404 returned error can't find the container with id 
568d79cc1dd46d8965f516658166d9e10f03484afb5a53024438fcc72337da1f Mar 20 07:12:07 crc kubenswrapper[5136]: I0320 07:12:07.928107 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.044102 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-combined-ca-bundle\") pod \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.044161 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-config-data\") pod \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.044329 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jgz5\" (UniqueName: \"kubernetes.io/projected/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-kube-api-access-2jgz5\") pod \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.044359 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-credential-keys\") pod \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.044380 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-fernet-keys\") pod \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\" (UID: 
\"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.044417 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-scripts\") pod \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.051090 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-kube-api-access-2jgz5" (OuterVolumeSpecName: "kube-api-access-2jgz5") pod "eb6007d0-4c13-43d0-b1b9-9e452fa9357f" (UID: "eb6007d0-4c13-43d0-b1b9-9e452fa9357f"). InnerVolumeSpecName "kube-api-access-2jgz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.064990 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "eb6007d0-4c13-43d0-b1b9-9e452fa9357f" (UID: "eb6007d0-4c13-43d0-b1b9-9e452fa9357f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.065037 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-scripts" (OuterVolumeSpecName: "scripts") pod "eb6007d0-4c13-43d0-b1b9-9e452fa9357f" (UID: "eb6007d0-4c13-43d0-b1b9-9e452fa9357f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.066280 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "eb6007d0-4c13-43d0-b1b9-9e452fa9357f" (UID: "eb6007d0-4c13-43d0-b1b9-9e452fa9357f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.075773 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb6007d0-4c13-43d0-b1b9-9e452fa9357f" (UID: "eb6007d0-4c13-43d0-b1b9-9e452fa9357f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.090959 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-config-data" (OuterVolumeSpecName: "config-data") pod "eb6007d0-4c13-43d0-b1b9-9e452fa9357f" (UID: "eb6007d0-4c13-43d0-b1b9-9e452fa9357f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.146190 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.146226 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.146238 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.146247 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jgz5\" (UniqueName: \"kubernetes.io/projected/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-kube-api-access-2jgz5\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.146255 5136 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.146263 5136 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.289957 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fvfgp" event={"ID":"eb6007d0-4c13-43d0-b1b9-9e452fa9357f","Type":"ContainerDied","Data":"61d7579d5cf960da949624b7fe0fdc10cd43078fd38353bfd5da73acb2c3a781"} Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 
07:12:08.289999 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61d7579d5cf960da949624b7fe0fdc10cd43078fd38353bfd5da73acb2c3a781" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.290061 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.293766 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566512-lrvjf" event={"ID":"4ace6934-986e-463e-8e10-ea2d38d8657b","Type":"ContainerStarted","Data":"568d79cc1dd46d8965f516658166d9e10f03484afb5a53024438fcc72337da1f"} Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.296012 5136 generic.go:334] "Generic (PLEG): container finished" podID="d103abed-83b7-44e9-bc7f-786434426647" containerID="604b652a792660f1238e2607b4242155d6fa3281d34ce55590b668cd26222f1b" exitCode=0 Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.296041 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" event={"ID":"d103abed-83b7-44e9-bc7f-786434426647","Type":"ContainerDied","Data":"604b652a792660f1238e2607b4242155d6fa3281d34ce55590b668cd26222f1b"} Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.015963 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-fvfgp"] Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.023281 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-fvfgp"] Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.116184 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xztql"] Mar 20 07:12:09 crc kubenswrapper[5136]: E0320 07:12:09.116558 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6007d0-4c13-43d0-b1b9-9e452fa9357f" containerName="keystone-bootstrap" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.116578 
5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6007d0-4c13-43d0-b1b9-9e452fa9357f" containerName="keystone-bootstrap" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.116746 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb6007d0-4c13-43d0-b1b9-9e452fa9357f" containerName="keystone-bootstrap" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.117286 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.118880 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.120126 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.121102 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6hlpf" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.121189 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.121286 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.128115 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xztql"] Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.161222 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-scripts\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.161302 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-combined-ca-bundle\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.161343 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-fernet-keys\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.161404 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-credential-keys\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.161458 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-config-data\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.161480 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdg86\" (UniqueName: \"kubernetes.io/projected/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-kube-api-access-wdg86\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.262221 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-scripts\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.262287 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-combined-ca-bundle\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.262334 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-fernet-keys\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.262372 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-credential-keys\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.262424 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-config-data\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.262447 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdg86\" (UniqueName: 
\"kubernetes.io/projected/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-kube-api-access-wdg86\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.267856 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-combined-ca-bundle\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.268079 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-fernet-keys\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.268603 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-credential-keys\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.269550 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-scripts\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.270173 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-config-data\") pod \"keystone-bootstrap-xztql\" (UID: 
\"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.286577 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdg86\" (UniqueName: \"kubernetes.io/projected/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-kube-api-access-wdg86\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.469084 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:10 crc kubenswrapper[5136]: I0320 07:12:10.430467 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb6007d0-4c13-43d0-b1b9-9e452fa9357f" path="/var/lib/kubelet/pods/eb6007d0-4c13-43d0-b1b9-9e452fa9357f/volumes" Mar 20 07:12:11 crc kubenswrapper[5136]: E0320 07:12:11.853997 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:1240a45aec9c3e1599be762c5565556560849b49fd39c7283b8e5519dcaa501a" Mar 20 07:12:11 crc kubenswrapper[5136]: E0320 07:12:11.854187 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:1240a45aec9c3e1599be762c5565556560849b49fd39c7283b8e5519dcaa501a,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6s9rd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-jv7f9_openstack(16f28a76-f7a5-4980-a693-7bd078f3c128): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 20 07:12:11 crc kubenswrapper[5136]: E0320 07:12:11.855407 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-jv7f9" 
podUID="16f28a76-f7a5-4980-a693-7bd078f3c128" Mar 20 07:12:12 crc kubenswrapper[5136]: E0320 07:12:12.327645 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:1240a45aec9c3e1599be762c5565556560849b49fd39c7283b8e5519dcaa501a\\\"\"" pod="openstack/barbican-db-sync-jv7f9" podUID="16f28a76-f7a5-4980-a693-7bd078f3c128" Mar 20 07:12:15 crc kubenswrapper[5136]: E0320 07:12:15.621279 5136 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8c6efdb_3b8c_4123_bfb6_a67cd416fb18.slice/crio-dd50ea3e8d708d6e3b7b256a2ea07c9211cc4921a494661346061b12daf9a3f3.scope\": RecentStats: unable to find data in memory cache]" Mar 20 07:12:15 crc kubenswrapper[5136]: I0320 07:12:15.821772 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:12:15 crc kubenswrapper[5136]: I0320 07:12:15.821851 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:12:15 crc kubenswrapper[5136]: I0320 07:12:15.821898 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 07:12:15 crc kubenswrapper[5136]: I0320 07:12:15.822519 5136 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e88f4329620c5c7ec6c41ba99712e43215e37853afedf89b0a54491b5a4bfe4f"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 07:12:15 crc kubenswrapper[5136]: I0320 07:12:15.822572 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://e88f4329620c5c7ec6c41ba99712e43215e37853afedf89b0a54491b5a4bfe4f" gracePeriod=600 Mar 20 07:12:16 crc kubenswrapper[5136]: I0320 07:12:16.097491 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" podUID="d103abed-83b7-44e9-bc7f-786434426647" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.121:5353: i/o timeout" Mar 20 07:12:16 crc kubenswrapper[5136]: I0320 07:12:16.363082 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="e88f4329620c5c7ec6c41ba99712e43215e37853afedf89b0a54491b5a4bfe4f" exitCode=0 Mar 20 07:12:16 crc kubenswrapper[5136]: I0320 07:12:16.363159 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"e88f4329620c5c7ec6c41ba99712e43215e37853afedf89b0a54491b5a4bfe4f"} Mar 20 07:12:16 crc kubenswrapper[5136]: I0320 07:12:16.363468 5136 scope.go:117] "RemoveContainer" containerID="f8e515aa640e8c2897bc9d76b24ec080a3948c8f2224026c8645b6359dd2670f" Mar 20 07:12:16 crc kubenswrapper[5136]: I0320 07:12:16.365001 5136 generic.go:334] "Generic (PLEG): container finished" podID="a8c6efdb-3b8c-4123-bfb6-a67cd416fb18" 
containerID="dd50ea3e8d708d6e3b7b256a2ea07c9211cc4921a494661346061b12daf9a3f3" exitCode=0 Mar 20 07:12:16 crc kubenswrapper[5136]: I0320 07:12:16.365023 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ldzkm" event={"ID":"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18","Type":"ContainerDied","Data":"dd50ea3e8d708d6e3b7b256a2ea07c9211cc4921a494661346061b12daf9a3f3"} Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.653417 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ldzkm" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.659962 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.764843 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wch7g\" (UniqueName: \"kubernetes.io/projected/d103abed-83b7-44e9-bc7f-786434426647-kube-api-access-wch7g\") pod \"d103abed-83b7-44e9-bc7f-786434426647\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.764910 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-combined-ca-bundle\") pod \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.764937 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-config\") pod \"d103abed-83b7-44e9-bc7f-786434426647\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.764957 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-config-data\") pod \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.764974 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-dns-svc\") pod \"d103abed-83b7-44e9-bc7f-786434426647\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.765002 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4cjp\" (UniqueName: \"kubernetes.io/projected/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-kube-api-access-n4cjp\") pod \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.765117 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-ovsdbserver-nb\") pod \"d103abed-83b7-44e9-bc7f-786434426647\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.765174 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-db-sync-config-data\") pod \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.765192 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-ovsdbserver-sb\") pod \"d103abed-83b7-44e9-bc7f-786434426647\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " Mar 20 07:12:20 crc 
kubenswrapper[5136]: I0320 07:12:20.769005 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a8c6efdb-3b8c-4123-bfb6-a67cd416fb18" (UID: "a8c6efdb-3b8c-4123-bfb6-a67cd416fb18"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.769102 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d103abed-83b7-44e9-bc7f-786434426647-kube-api-access-wch7g" (OuterVolumeSpecName: "kube-api-access-wch7g") pod "d103abed-83b7-44e9-bc7f-786434426647" (UID: "d103abed-83b7-44e9-bc7f-786434426647"). InnerVolumeSpecName "kube-api-access-wch7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.786134 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-kube-api-access-n4cjp" (OuterVolumeSpecName: "kube-api-access-n4cjp") pod "a8c6efdb-3b8c-4123-bfb6-a67cd416fb18" (UID: "a8c6efdb-3b8c-4123-bfb6-a67cd416fb18"). InnerVolumeSpecName "kube-api-access-n4cjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.808565 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d103abed-83b7-44e9-bc7f-786434426647" (UID: "d103abed-83b7-44e9-bc7f-786434426647"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.816577 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-config" (OuterVolumeSpecName: "config") pod "d103abed-83b7-44e9-bc7f-786434426647" (UID: "d103abed-83b7-44e9-bc7f-786434426647"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.821174 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d103abed-83b7-44e9-bc7f-786434426647" (UID: "d103abed-83b7-44e9-bc7f-786434426647"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.821259 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8c6efdb-3b8c-4123-bfb6-a67cd416fb18" (UID: "a8c6efdb-3b8c-4123-bfb6-a67cd416fb18"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.826845 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d103abed-83b7-44e9-bc7f-786434426647" (UID: "d103abed-83b7-44e9-bc7f-786434426647"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.836082 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-config-data" (OuterVolumeSpecName: "config-data") pod "a8c6efdb-3b8c-4123-bfb6-a67cd416fb18" (UID: "a8c6efdb-3b8c-4123-bfb6-a67cd416fb18"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.866870 5136 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.866900 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.866911 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wch7g\" (UniqueName: \"kubernetes.io/projected/d103abed-83b7-44e9-bc7f-786434426647-kube-api-access-wch7g\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.866921 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.866931 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.866940 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.866949 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.866957 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4cjp\" (UniqueName: \"kubernetes.io/projected/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-kube-api-access-n4cjp\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.866964 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:21 crc kubenswrapper[5136]: I0320 07:12:21.098116 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" podUID="d103abed-83b7-44e9-bc7f-786434426647" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.121:5353: i/o timeout" Mar 20 07:12:21 crc kubenswrapper[5136]: I0320 07:12:21.423699 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" event={"ID":"d103abed-83b7-44e9-bc7f-786434426647","Type":"ContainerDied","Data":"5e9c9cbaa5f048deb6cd641b68b15e9585b27cf0df0654e28450ce6a42c152c0"} Mar 20 07:12:21 crc kubenswrapper[5136]: I0320 07:12:21.423723 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" Mar 20 07:12:21 crc kubenswrapper[5136]: I0320 07:12:21.425374 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ldzkm" event={"ID":"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18","Type":"ContainerDied","Data":"aed70dbef306356adc19fcd32835aa191f676d0a2ff614fd22073d9b45e66eaf"} Mar 20 07:12:21 crc kubenswrapper[5136]: I0320 07:12:21.425416 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aed70dbef306356adc19fcd32835aa191f676d0a2ff614fd22073d9b45e66eaf" Mar 20 07:12:21 crc kubenswrapper[5136]: I0320 07:12:21.425412 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ldzkm" Mar 20 07:12:21 crc kubenswrapper[5136]: I0320 07:12:21.471324 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"] Mar 20 07:12:21 crc kubenswrapper[5136]: I0320 07:12:21.477848 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"] Mar 20 07:12:21 crc kubenswrapper[5136]: I0320 07:12:21.936646 5136 scope.go:117] "RemoveContainer" containerID="604b652a792660f1238e2607b4242155d6fa3281d34ce55590b668cd26222f1b" Mar 20 07:12:21 crc kubenswrapper[5136]: E0320 07:12:21.962065 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:574a17f0877c175128a764f2b37fc02456649c8514689125718ce6ca974bfb6b" Mar 20 07:12:21 crc kubenswrapper[5136]: E0320 07:12:21.962276 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:574a17f0877c175128a764f2b37fc02456649c8514689125718ce6ca974bfb6b,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4skhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-llt2h_openstack(2fc03366-82a1-4e30-a7e8-a06e16a8a14f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 20 07:12:21 crc kubenswrapper[5136]: E0320 07:12:21.963561 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-llt2h" podUID="2fc03366-82a1-4e30-a7e8-a06e16a8a14f" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.089629 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d7fb48775-2qcl2"] Mar 20 07:12:22 crc kubenswrapper[5136]: E0320 07:12:22.089967 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8c6efdb-3b8c-4123-bfb6-a67cd416fb18" containerName="glance-db-sync" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.089981 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8c6efdb-3b8c-4123-bfb6-a67cd416fb18" containerName="glance-db-sync" Mar 20 07:12:22 crc kubenswrapper[5136]: E0320 07:12:22.089993 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d103abed-83b7-44e9-bc7f-786434426647" containerName="dnsmasq-dns" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.090000 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d103abed-83b7-44e9-bc7f-786434426647" containerName="dnsmasq-dns" Mar 20 07:12:22 crc kubenswrapper[5136]: E0320 07:12:22.090009 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d103abed-83b7-44e9-bc7f-786434426647" containerName="init" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.090015 5136 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d103abed-83b7-44e9-bc7f-786434426647" containerName="init" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.102684 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d103abed-83b7-44e9-bc7f-786434426647" containerName="dnsmasq-dns" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.102724 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8c6efdb-3b8c-4123-bfb6-a67cd416fb18" containerName="glance-db-sync" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.115627 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.094332 5136 scope.go:117] "RemoveContainer" containerID="533f92371dee2235f15d0d84ab9f13da275c7e919c6618b46cdf3ab8345571a9" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.155075 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7fb48775-2qcl2"] Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.293882 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd8c4\" (UniqueName: \"kubernetes.io/projected/ec92c94f-350b-410f-af36-f232e43c51bc-kube-api-access-vd8c4\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.294266 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-dns-swift-storage-0\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.294306 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-ovsdbserver-sb\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.294364 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-dns-svc\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.294391 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-ovsdbserver-nb\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.294416 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-config\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.397008 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-ovsdbserver-sb\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.397077 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-dns-svc\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.397098 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-ovsdbserver-nb\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.397115 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-config\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.397180 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd8c4\" (UniqueName: \"kubernetes.io/projected/ec92c94f-350b-410f-af36-f232e43c51bc-kube-api-access-vd8c4\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.397217 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-dns-swift-storage-0\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.398090 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-dns-swift-storage-0\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.398573 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-ovsdbserver-sb\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.399087 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-dns-svc\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.399556 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-ovsdbserver-nb\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.400241 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-config\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.423339 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd8c4\" (UniqueName: \"kubernetes.io/projected/ec92c94f-350b-410f-af36-f232e43c51bc-kube-api-access-vd8c4\") pod 
\"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.424317 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d103abed-83b7-44e9-bc7f-786434426647" path="/var/lib/kubelet/pods/d103abed-83b7-44e9-bc7f-786434426647/volumes" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.437187 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n6cqg" event={"ID":"61300b5b-7c36-4857-a0bf-631bf3cbb001","Type":"ContainerStarted","Data":"031d15e3c6d48fb60bf7992b603ae52f0ad57d2692789532d8ff3e43150b8a62"} Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.449760 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff72278d-b5e7-427b-8581-52ff89c57176","Type":"ContainerStarted","Data":"7336863799fdb9fab29de80cd8bd1d394cbcbcf4fd209c912a6795c7665f330a"} Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.466200 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-n6cqg" podStartSLOduration=3.659201902 podStartE2EDuration="26.466148146s" podCreationTimestamp="2026-03-20 07:11:56 +0000 UTC" firstStartedPulling="2026-03-20 07:11:57.7406974 +0000 UTC m=+1350.000008551" lastFinishedPulling="2026-03-20 07:12:20.547643644 +0000 UTC m=+1372.806954795" observedRunningTime="2026-03-20 07:12:22.460483351 +0000 UTC m=+1374.719794492" watchObservedRunningTime="2026-03-20 07:12:22.466148146 +0000 UTC m=+1374.725459297" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.473275 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"dd4323cb06cbe9a996dc58d915178240fb92871ebdc9b015588397e6f7268db6"} Mar 20 07:12:22 crc kubenswrapper[5136]: E0320 
07:12:22.476761 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:574a17f0877c175128a764f2b37fc02456649c8514689125718ce6ca974bfb6b\\\"\"" pod="openstack/cinder-db-sync-llt2h" podUID="2fc03366-82a1-4e30-a7e8-a06e16a8a14f" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.494255 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.564665 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xztql"] Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.993872 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.995864 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.999071 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4q9lc" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.002061 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.002584 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.007167 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7fb48775-2qcl2"] Mar 20 07:12:23 crc kubenswrapper[5136]: W0320 07:12:23.016539 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec92c94f_350b_410f_af36_f232e43c51bc.slice/crio-23b1640d39e4ae1160a3dad399449ea185d7930c6aa60e3ac354c8577e44362b WatchSource:0}: Error finding container 23b1640d39e4ae1160a3dad399449ea185d7930c6aa60e3ac354c8577e44362b: Status 404 returned error can't find the container with id 23b1640d39e4ae1160a3dad399449ea185d7930c6aa60e3ac354c8577e44362b Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.039628 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.108028 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.108277 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-config-data\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.108886 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-scripts\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.108936 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac3187ac-eebe-4584-9624-e4127b6ee040-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.109004 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg6g6\" (UniqueName: \"kubernetes.io/projected/ac3187ac-eebe-4584-9624-e4127b6ee040-kube-api-access-lg6g6\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.109202 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac3187ac-eebe-4584-9624-e4127b6ee040-logs\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.109246 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.219705 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.219756 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-config-data\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.219798 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-scripts\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.219824 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac3187ac-eebe-4584-9624-e4127b6ee040-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.219845 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg6g6\" (UniqueName: 
\"kubernetes.io/projected/ac3187ac-eebe-4584-9624-e4127b6ee040-kube-api-access-lg6g6\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.219882 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac3187ac-eebe-4584-9624-e4127b6ee040-logs\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.219901 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.219981 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.221365 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac3187ac-eebe-4584-9624-e4127b6ee040-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.225146 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac3187ac-eebe-4584-9624-e4127b6ee040-logs\") 
pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.228108 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.231839 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-config-data\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.232704 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-scripts\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.240305 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg6g6\" (UniqueName: \"kubernetes.io/projected/ac3187ac-eebe-4584-9624-e4127b6ee040-kube-api-access-lg6g6\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.254134 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " 
pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.274347 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.275562 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.284194 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.287499 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.423340 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.423399 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.423566 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl96d\" (UniqueName: \"kubernetes.io/projected/1769e60f-b60b-4a9d-aa9c-57773220f7c0-kube-api-access-sl96d\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 
07:12:23.423605 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1769e60f-b60b-4a9d-aa9c-57773220f7c0-logs\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.423716 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1769e60f-b60b-4a9d-aa9c-57773220f7c0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.423780 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.423848 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.485545 5136 generic.go:334] "Generic (PLEG): container finished" podID="4ace6934-986e-463e-8e10-ea2d38d8657b" containerID="be9df9297d087d9b583ba3c8a236fca6fd4fd729e25496c50522e980d7021c09" exitCode=0 Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.485621 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566512-lrvjf" 
event={"ID":"4ace6934-986e-463e-8e10-ea2d38d8657b","Type":"ContainerDied","Data":"be9df9297d087d9b583ba3c8a236fca6fd4fd729e25496c50522e980d7021c09"} Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.486528 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.496700 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xztql" event={"ID":"72e22e43-fccc-4ee4-a170-8ff8b9959c1d","Type":"ContainerStarted","Data":"517443469d4fcf677c53f3f830f5a94c22bc034822199c2c81fb70956a791274"} Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.496745 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xztql" event={"ID":"72e22e43-fccc-4ee4-a170-8ff8b9959c1d","Type":"ContainerStarted","Data":"a9fa96c65fa536fe3aa83743698eb9e3fddbb0c4cc2f21d54eee2d77fd4acf9d"} Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.500884 5136 generic.go:334] "Generic (PLEG): container finished" podID="ec92c94f-350b-410f-af36-f232e43c51bc" containerID="2ae57c8c056dcb5d29e8273f89d2edf49540ec43cbd77b2ca4a8816f0f65f160" exitCode=0 Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.501330 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" event={"ID":"ec92c94f-350b-410f-af36-f232e43c51bc","Type":"ContainerDied","Data":"2ae57c8c056dcb5d29e8273f89d2edf49540ec43cbd77b2ca4a8816f0f65f160"} Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.501355 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" event={"ID":"ec92c94f-350b-410f-af36-f232e43c51bc","Type":"ContainerStarted","Data":"23b1640d39e4ae1160a3dad399449ea185d7930c6aa60e3ac354c8577e44362b"} Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.524031 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-bootstrap-xztql" podStartSLOduration=14.524008697 podStartE2EDuration="14.524008697s" podCreationTimestamp="2026-03-20 07:12:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:23.513499039 +0000 UTC m=+1375.772810180" watchObservedRunningTime="2026-03-20 07:12:23.524008697 +0000 UTC m=+1375.783319848" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.525629 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.525680 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.525772 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl96d\" (UniqueName: \"kubernetes.io/projected/1769e60f-b60b-4a9d-aa9c-57773220f7c0-kube-api-access-sl96d\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.525829 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1769e60f-b60b-4a9d-aa9c-57773220f7c0-logs\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 
07:12:23.525881 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1769e60f-b60b-4a9d-aa9c-57773220f7c0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.525909 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.525973 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.526850 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1769e60f-b60b-4a9d-aa9c-57773220f7c0-logs\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.526999 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1769e60f-b60b-4a9d-aa9c-57773220f7c0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.527212 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.529713 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.534643 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.549541 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl96d\" (UniqueName: \"kubernetes.io/projected/1769e60f-b60b-4a9d-aa9c-57773220f7c0-kube-api-access-sl96d\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.550206 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.563800 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.623435 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 07:12:24 crc kubenswrapper[5136]: I0320 07:12:24.164130 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:12:24 crc kubenswrapper[5136]: I0320 07:12:24.285700 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:12:24 crc kubenswrapper[5136]: I0320 07:12:24.510920 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" event={"ID":"ec92c94f-350b-410f-af36-f232e43c51bc","Type":"ContainerStarted","Data":"5b9f552fe91aa28f997b092d45d1079ab658cc3b43ebd7ab371c11c9bbbdd7ef"} Mar 20 07:12:24 crc kubenswrapper[5136]: I0320 07:12:24.536200 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" podStartSLOduration=2.536181293 podStartE2EDuration="2.536181293s" podCreationTimestamp="2026-03-20 07:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:24.532487337 +0000 UTC m=+1376.791798498" watchObservedRunningTime="2026-03-20 07:12:24.536181293 +0000 UTC m=+1376.795492444" Mar 20 07:12:24 crc kubenswrapper[5136]: I0320 07:12:24.921616 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566512-lrvjf" Mar 20 07:12:24 crc kubenswrapper[5136]: I0320 07:12:24.981724 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg925\" (UniqueName: \"kubernetes.io/projected/4ace6934-986e-463e-8e10-ea2d38d8657b-kube-api-access-xg925\") pod \"4ace6934-986e-463e-8e10-ea2d38d8657b\" (UID: \"4ace6934-986e-463e-8e10-ea2d38d8657b\") " Mar 20 07:12:24 crc kubenswrapper[5136]: I0320 07:12:24.986831 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ace6934-986e-463e-8e10-ea2d38d8657b-kube-api-access-xg925" (OuterVolumeSpecName: "kube-api-access-xg925") pod "4ace6934-986e-463e-8e10-ea2d38d8657b" (UID: "4ace6934-986e-463e-8e10-ea2d38d8657b"). InnerVolumeSpecName "kube-api-access-xg925". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.083861 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg925\" (UniqueName: \"kubernetes.io/projected/4ace6934-986e-463e-8e10-ea2d38d8657b-kube-api-access-xg925\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.476208 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.537321 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ac3187ac-eebe-4584-9624-e4127b6ee040","Type":"ContainerStarted","Data":"c4d6db431c35fa9e07903364d5a4d9c0b236e4c6c9d596e00a75c84d0a2d318d"} Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.537363 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ac3187ac-eebe-4584-9624-e4127b6ee040","Type":"ContainerStarted","Data":"5e2b1f562a9d2caa514e6b0a9c5b2c39146dfbd7683f095229f8f6fd7616c3a3"} Mar 20 
07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.541774 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1769e60f-b60b-4a9d-aa9c-57773220f7c0","Type":"ContainerStarted","Data":"8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0"} Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.541828 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1769e60f-b60b-4a9d-aa9c-57773220f7c0","Type":"ContainerStarted","Data":"35333a6bcde4737efed42da72fdec9f87c5d7d96bc41af031566e8d824ff32aa"} Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.553041 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.556389 5136 generic.go:334] "Generic (PLEG): container finished" podID="61300b5b-7c36-4857-a0bf-631bf3cbb001" containerID="031d15e3c6d48fb60bf7992b603ae52f0ad57d2692789532d8ff3e43150b8a62" exitCode=0 Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.556457 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n6cqg" event={"ID":"61300b5b-7c36-4857-a0bf-631bf3cbb001","Type":"ContainerDied","Data":"031d15e3c6d48fb60bf7992b603ae52f0ad57d2692789532d8ff3e43150b8a62"} Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.558490 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff72278d-b5e7-427b-8581-52ff89c57176","Type":"ContainerStarted","Data":"cc654c94c6c668e03f2204d6c1f1eaaff73ba2c53ec4645dd69d881bd133a570"} Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.560621 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566512-lrvjf" Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.560895 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566512-lrvjf" event={"ID":"4ace6934-986e-463e-8e10-ea2d38d8657b","Type":"ContainerDied","Data":"568d79cc1dd46d8965f516658166d9e10f03484afb5a53024438fcc72337da1f"} Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.560949 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="568d79cc1dd46d8965f516658166d9e10f03484afb5a53024438fcc72337da1f" Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.560975 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.976561 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566506-bbg6r"] Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.997396 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566506-bbg6r"] Mar 20 07:12:26 crc kubenswrapper[5136]: I0320 07:12:26.413084 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca3533ad-761e-45d8-8a1a-0e679b602e08" path="/var/lib/kubelet/pods/ca3533ad-761e-45d8-8a1a-0e679b602e08/volumes" Mar 20 07:12:26 crc kubenswrapper[5136]: I0320 07:12:26.569833 5136 generic.go:334] "Generic (PLEG): container finished" podID="72e22e43-fccc-4ee4-a170-8ff8b9959c1d" containerID="517443469d4fcf677c53f3f830f5a94c22bc034822199c2c81fb70956a791274" exitCode=0 Mar 20 07:12:26 crc kubenswrapper[5136]: I0320 07:12:26.569916 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xztql" event={"ID":"72e22e43-fccc-4ee4-a170-8ff8b9959c1d","Type":"ContainerDied","Data":"517443469d4fcf677c53f3f830f5a94c22bc034822199c2c81fb70956a791274"} Mar 20 07:12:26 crc kubenswrapper[5136]: 
I0320 07:12:26.572374 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1769e60f-b60b-4a9d-aa9c-57773220f7c0","Type":"ContainerStarted","Data":"1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90"} Mar 20 07:12:26 crc kubenswrapper[5136]: I0320 07:12:26.572438 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1769e60f-b60b-4a9d-aa9c-57773220f7c0" containerName="glance-log" containerID="cri-o://8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0" gracePeriod=30 Mar 20 07:12:26 crc kubenswrapper[5136]: I0320 07:12:26.572475 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1769e60f-b60b-4a9d-aa9c-57773220f7c0" containerName="glance-httpd" containerID="cri-o://1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90" gracePeriod=30 Mar 20 07:12:26 crc kubenswrapper[5136]: I0320 07:12:26.573901 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jv7f9" event={"ID":"16f28a76-f7a5-4980-a693-7bd078f3c128","Type":"ContainerStarted","Data":"8f39b26d5a3a98eb4a0bd3688d06d7a44225eee0ee50099fc60fd5816beb4256"} Mar 20 07:12:26 crc kubenswrapper[5136]: I0320 07:12:26.582778 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ac3187ac-eebe-4584-9624-e4127b6ee040","Type":"ContainerStarted","Data":"8bd6ba13b920fd9d372e08641d2f4dfd1411247734ac4e2a17f92e8c7045515a"} Mar 20 07:12:26 crc kubenswrapper[5136]: I0320 07:12:26.582943 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ac3187ac-eebe-4584-9624-e4127b6ee040" containerName="glance-log" containerID="cri-o://c4d6db431c35fa9e07903364d5a4d9c0b236e4c6c9d596e00a75c84d0a2d318d" gracePeriod=30 Mar 20 07:12:26 crc 
kubenswrapper[5136]: I0320 07:12:26.582974 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ac3187ac-eebe-4584-9624-e4127b6ee040" containerName="glance-httpd" containerID="cri-o://8bd6ba13b920fd9d372e08641d2f4dfd1411247734ac4e2a17f92e8c7045515a" gracePeriod=30 Mar 20 07:12:26 crc kubenswrapper[5136]: I0320 07:12:26.618206 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.618187331 podStartE2EDuration="5.618187331s" podCreationTimestamp="2026-03-20 07:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:26.611194274 +0000 UTC m=+1378.870505425" watchObservedRunningTime="2026-03-20 07:12:26.618187331 +0000 UTC m=+1378.877498482" Mar 20 07:12:26 crc kubenswrapper[5136]: I0320 07:12:26.641467 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-jv7f9" podStartSLOduration=2.992705396 podStartE2EDuration="30.641446656s" podCreationTimestamp="2026-03-20 07:11:56 +0000 UTC" firstStartedPulling="2026-03-20 07:11:58.220389446 +0000 UTC m=+1350.479700597" lastFinishedPulling="2026-03-20 07:12:25.869130706 +0000 UTC m=+1378.128441857" observedRunningTime="2026-03-20 07:12:26.63290717 +0000 UTC m=+1378.892218321" watchObservedRunningTime="2026-03-20 07:12:26.641446656 +0000 UTC m=+1378.900757807" Mar 20 07:12:26 crc kubenswrapper[5136]: I0320 07:12:26.657460 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.657426324 podStartE2EDuration="4.657426324s" podCreationTimestamp="2026-03-20 07:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:26.654104741 +0000 UTC 
m=+1378.913415892" watchObservedRunningTime="2026-03-20 07:12:26.657426324 +0000 UTC m=+1378.916737475" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.052658 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-n6cqg" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.119569 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-scripts\") pod \"61300b5b-7c36-4857-a0bf-631bf3cbb001\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.119650 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-config-data\") pod \"61300b5b-7c36-4857-a0bf-631bf3cbb001\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.119780 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61300b5b-7c36-4857-a0bf-631bf3cbb001-logs\") pod \"61300b5b-7c36-4857-a0bf-631bf3cbb001\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.119832 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt5bh\" (UniqueName: \"kubernetes.io/projected/61300b5b-7c36-4857-a0bf-631bf3cbb001-kube-api-access-kt5bh\") pod \"61300b5b-7c36-4857-a0bf-631bf3cbb001\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.119850 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-combined-ca-bundle\") pod \"61300b5b-7c36-4857-a0bf-631bf3cbb001\" (UID: 
\"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.120474 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61300b5b-7c36-4857-a0bf-631bf3cbb001-logs" (OuterVolumeSpecName: "logs") pod "61300b5b-7c36-4857-a0bf-631bf3cbb001" (UID: "61300b5b-7c36-4857-a0bf-631bf3cbb001"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.126060 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61300b5b-7c36-4857-a0bf-631bf3cbb001-kube-api-access-kt5bh" (OuterVolumeSpecName: "kube-api-access-kt5bh") pod "61300b5b-7c36-4857-a0bf-631bf3cbb001" (UID: "61300b5b-7c36-4857-a0bf-631bf3cbb001"). InnerVolumeSpecName "kube-api-access-kt5bh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.150261 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-config-data" (OuterVolumeSpecName: "config-data") pod "61300b5b-7c36-4857-a0bf-631bf3cbb001" (UID: "61300b5b-7c36-4857-a0bf-631bf3cbb001"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.150636 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-scripts" (OuterVolumeSpecName: "scripts") pod "61300b5b-7c36-4857-a0bf-631bf3cbb001" (UID: "61300b5b-7c36-4857-a0bf-631bf3cbb001"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.199691 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61300b5b-7c36-4857-a0bf-631bf3cbb001" (UID: "61300b5b-7c36-4857-a0bf-631bf3cbb001"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.222630 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61300b5b-7c36-4857-a0bf-631bf3cbb001-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.222740 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt5bh\" (UniqueName: \"kubernetes.io/projected/61300b5b-7c36-4857-a0bf-631bf3cbb001-kube-api-access-kt5bh\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.222760 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.222793 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.222802 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.386223 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.428439 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-combined-ca-bundle\") pod \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.428505 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-config-data\") pod \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.428556 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-scripts\") pod \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.428652 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1769e60f-b60b-4a9d-aa9c-57773220f7c0-logs\") pod \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.428723 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1769e60f-b60b-4a9d-aa9c-57773220f7c0-httpd-run\") pod \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.428871 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.428949 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl96d\" (UniqueName: \"kubernetes.io/projected/1769e60f-b60b-4a9d-aa9c-57773220f7c0-kube-api-access-sl96d\") pod \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.431885 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1769e60f-b60b-4a9d-aa9c-57773220f7c0-logs" (OuterVolumeSpecName: "logs") pod "1769e60f-b60b-4a9d-aa9c-57773220f7c0" (UID: "1769e60f-b60b-4a9d-aa9c-57773220f7c0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.433513 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1769e60f-b60b-4a9d-aa9c-57773220f7c0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1769e60f-b60b-4a9d-aa9c-57773220f7c0" (UID: "1769e60f-b60b-4a9d-aa9c-57773220f7c0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.436558 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "1769e60f-b60b-4a9d-aa9c-57773220f7c0" (UID: "1769e60f-b60b-4a9d-aa9c-57773220f7c0"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.439754 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-scripts" (OuterVolumeSpecName: "scripts") pod "1769e60f-b60b-4a9d-aa9c-57773220f7c0" (UID: "1769e60f-b60b-4a9d-aa9c-57773220f7c0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.461183 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1769e60f-b60b-4a9d-aa9c-57773220f7c0-kube-api-access-sl96d" (OuterVolumeSpecName: "kube-api-access-sl96d") pod "1769e60f-b60b-4a9d-aa9c-57773220f7c0" (UID: "1769e60f-b60b-4a9d-aa9c-57773220f7c0"). InnerVolumeSpecName "kube-api-access-sl96d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.480651 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1769e60f-b60b-4a9d-aa9c-57773220f7c0" (UID: "1769e60f-b60b-4a9d-aa9c-57773220f7c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.502298 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-config-data" (OuterVolumeSpecName: "config-data") pod "1769e60f-b60b-4a9d-aa9c-57773220f7c0" (UID: "1769e60f-b60b-4a9d-aa9c-57773220f7c0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.534726 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1769e60f-b60b-4a9d-aa9c-57773220f7c0-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.534761 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1769e60f-b60b-4a9d-aa9c-57773220f7c0-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.534792 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.534803 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sl96d\" (UniqueName: \"kubernetes.io/projected/1769e60f-b60b-4a9d-aa9c-57773220f7c0-kube-api-access-sl96d\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.534829 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.534839 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.534846 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.557998 5136 operation_generator.go:917] UnmountDevice succeeded 
for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.601338 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n6cqg" event={"ID":"61300b5b-7c36-4857-a0bf-631bf3cbb001","Type":"ContainerDied","Data":"b5e8ad0f3ddd8fc1ff41409f21a655282a69b3d531e79a60604f206a303c07a7"} Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.601370 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5e8ad0f3ddd8fc1ff41409f21a655282a69b3d531e79a60604f206a303c07a7" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.601450 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-n6cqg" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.605724 5136 generic.go:334] "Generic (PLEG): container finished" podID="ac3187ac-eebe-4584-9624-e4127b6ee040" containerID="8bd6ba13b920fd9d372e08641d2f4dfd1411247734ac4e2a17f92e8c7045515a" exitCode=0 Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.605763 5136 generic.go:334] "Generic (PLEG): container finished" podID="ac3187ac-eebe-4584-9624-e4127b6ee040" containerID="c4d6db431c35fa9e07903364d5a4d9c0b236e4c6c9d596e00a75c84d0a2d318d" exitCode=143 Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.605828 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ac3187ac-eebe-4584-9624-e4127b6ee040","Type":"ContainerDied","Data":"8bd6ba13b920fd9d372e08641d2f4dfd1411247734ac4e2a17f92e8c7045515a"} Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.605862 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ac3187ac-eebe-4584-9624-e4127b6ee040","Type":"ContainerDied","Data":"c4d6db431c35fa9e07903364d5a4d9c0b236e4c6c9d596e00a75c84d0a2d318d"} Mar 20 07:12:27 crc 
kubenswrapper[5136]: I0320 07:12:27.614961 5136 generic.go:334] "Generic (PLEG): container finished" podID="1769e60f-b60b-4a9d-aa9c-57773220f7c0" containerID="1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90" exitCode=0 Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.615205 5136 generic.go:334] "Generic (PLEG): container finished" podID="1769e60f-b60b-4a9d-aa9c-57773220f7c0" containerID="8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0" exitCode=143 Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.615269 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1769e60f-b60b-4a9d-aa9c-57773220f7c0","Type":"ContainerDied","Data":"1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90"} Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.615302 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1769e60f-b60b-4a9d-aa9c-57773220f7c0","Type":"ContainerDied","Data":"8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0"} Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.615315 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1769e60f-b60b-4a9d-aa9c-57773220f7c0","Type":"ContainerDied","Data":"35333a6bcde4737efed42da72fdec9f87c5d7d96bc41af031566e8d824ff32aa"} Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.615334 5136 scope.go:117] "RemoveContainer" containerID="1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.615478 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.632429 5136 generic.go:334] "Generic (PLEG): container finished" podID="4f5241dc-9fdc-4e75-9924-fb00a2e6119d" containerID="22bd79d8d32272633a42a92ee1e9e96d3d3259073a33ca0ea587ca787429e836" exitCode=0 Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.632618 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kxk7p" event={"ID":"4f5241dc-9fdc-4e75-9924-fb00a2e6119d","Type":"ContainerDied","Data":"22bd79d8d32272633a42a92ee1e9e96d3d3259073a33ca0ea587ca787429e836"} Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.636552 5136 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.683765 5136 scope.go:117] "RemoveContainer" containerID="8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.699120 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.707925 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.722777 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.738563 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:12:27 crc kubenswrapper[5136]: E0320 07:12:27.738961 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1769e60f-b60b-4a9d-aa9c-57773220f7c0" containerName="glance-log" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.738978 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="1769e60f-b60b-4a9d-aa9c-57773220f7c0" containerName="glance-log" Mar 20 07:12:27 crc kubenswrapper[5136]: E0320 07:12:27.738989 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac3187ac-eebe-4584-9624-e4127b6ee040" containerName="glance-log" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.738997 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac3187ac-eebe-4584-9624-e4127b6ee040" containerName="glance-log" Mar 20 07:12:27 crc kubenswrapper[5136]: E0320 07:12:27.739007 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61300b5b-7c36-4857-a0bf-631bf3cbb001" containerName="placement-db-sync" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739015 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="61300b5b-7c36-4857-a0bf-631bf3cbb001" containerName="placement-db-sync" Mar 20 07:12:27 crc kubenswrapper[5136]: E0320 07:12:27.739033 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac3187ac-eebe-4584-9624-e4127b6ee040" containerName="glance-httpd" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739038 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac3187ac-eebe-4584-9624-e4127b6ee040" containerName="glance-httpd" Mar 20 07:12:27 crc 
kubenswrapper[5136]: E0320 07:12:27.739049 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ace6934-986e-463e-8e10-ea2d38d8657b" containerName="oc" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739055 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ace6934-986e-463e-8e10-ea2d38d8657b" containerName="oc" Mar 20 07:12:27 crc kubenswrapper[5136]: E0320 07:12:27.739067 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1769e60f-b60b-4a9d-aa9c-57773220f7c0" containerName="glance-httpd" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739072 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="1769e60f-b60b-4a9d-aa9c-57773220f7c0" containerName="glance-httpd" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739258 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac3187ac-eebe-4584-9624-e4127b6ee040" containerName="glance-httpd" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739270 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ace6934-986e-463e-8e10-ea2d38d8657b" containerName="oc" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739279 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="1769e60f-b60b-4a9d-aa9c-57773220f7c0" containerName="glance-log" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739289 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac3187ac-eebe-4584-9624-e4127b6ee040" containerName="glance-log" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739299 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="1769e60f-b60b-4a9d-aa9c-57773220f7c0" containerName="glance-httpd" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739308 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="61300b5b-7c36-4857-a0bf-631bf3cbb001" containerName="placement-db-sync" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739632 
5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lg6g6\" (UniqueName: \"kubernetes.io/projected/ac3187ac-eebe-4584-9624-e4127b6ee040-kube-api-access-lg6g6\") pod \"ac3187ac-eebe-4584-9624-e4127b6ee040\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739663 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-scripts\") pod \"ac3187ac-eebe-4584-9624-e4127b6ee040\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739704 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac3187ac-eebe-4584-9624-e4127b6ee040-logs\") pod \"ac3187ac-eebe-4584-9624-e4127b6ee040\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739722 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-config-data\") pod \"ac3187ac-eebe-4584-9624-e4127b6ee040\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739737 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-combined-ca-bundle\") pod \"ac3187ac-eebe-4584-9624-e4127b6ee040\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739758 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac3187ac-eebe-4584-9624-e4127b6ee040-httpd-run\") pod \"ac3187ac-eebe-4584-9624-e4127b6ee040\" (UID: 
\"ac3187ac-eebe-4584-9624-e4127b6ee040\") " Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739862 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ac3187ac-eebe-4584-9624-e4127b6ee040\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.740292 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac3187ac-eebe-4584-9624-e4127b6ee040-logs" (OuterVolumeSpecName: "logs") pod "ac3187ac-eebe-4584-9624-e4127b6ee040" (UID: "ac3187ac-eebe-4584-9624-e4127b6ee040"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.740397 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.741197 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac3187ac-eebe-4584-9624-e4127b6ee040-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ac3187ac-eebe-4584-9624-e4127b6ee040" (UID: "ac3187ac-eebe-4584-9624-e4127b6ee040"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.744796 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.745211 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac3187ac-eebe-4584-9624-e4127b6ee040-kube-api-access-lg6g6" (OuterVolumeSpecName: "kube-api-access-lg6g6") pod "ac3187ac-eebe-4584-9624-e4127b6ee040" (UID: "ac3187ac-eebe-4584-9624-e4127b6ee040"). InnerVolumeSpecName "kube-api-access-lg6g6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.745769 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.748384 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "ac3187ac-eebe-4584-9624-e4127b6ee040" (UID: "ac3187ac-eebe-4584-9624-e4127b6ee040"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.749964 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.767511 5136 scope.go:117] "RemoveContainer" containerID="1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90" Mar 20 07:12:27 crc kubenswrapper[5136]: E0320 07:12:27.775564 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90\": container with ID starting with 1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90 not found: ID does not exist" containerID="1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.775702 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90"} err="failed to get container status \"1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90\": rpc error: code = NotFound desc = could not find container \"1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90\": container with ID starting with 
1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90 not found: ID does not exist" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.775730 5136 scope.go:117] "RemoveContainer" containerID="8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0" Mar 20 07:12:27 crc kubenswrapper[5136]: E0320 07:12:27.776692 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0\": container with ID starting with 8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0 not found: ID does not exist" containerID="8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.776751 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0"} err="failed to get container status \"8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0\": rpc error: code = NotFound desc = could not find container \"8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0\": container with ID starting with 8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0 not found: ID does not exist" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.777920 5136 scope.go:117] "RemoveContainer" containerID="1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.778655 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90"} err="failed to get container status \"1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90\": rpc error: code = NotFound desc = could not find container \"1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90\": container with ID 
starting with 1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90 not found: ID does not exist" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.778697 5136 scope.go:117] "RemoveContainer" containerID="8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.780100 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0"} err="failed to get container status \"8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0\": rpc error: code = NotFound desc = could not find container \"8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0\": container with ID starting with 8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0 not found: ID does not exist" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.801391 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-scripts" (OuterVolumeSpecName: "scripts") pod "ac3187ac-eebe-4584-9624-e4127b6ee040" (UID: "ac3187ac-eebe-4584-9624-e4127b6ee040"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.814286 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac3187ac-eebe-4584-9624-e4127b6ee040" (UID: "ac3187ac-eebe-4584-9624-e4127b6ee040"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.848234 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5249fb5b-8908-4b21-9ea3-28508854ce4a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.848365 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.848413 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.848434 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.848580 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5249fb5b-8908-4b21-9ea3-28508854ce4a-logs\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " 
pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.848600 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx28m\" (UniqueName: \"kubernetes.io/projected/5249fb5b-8908-4b21-9ea3-28508854ce4a-kube-api-access-qx28m\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.848646 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.848959 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.849062 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.849076 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lg6g6\" (UniqueName: \"kubernetes.io/projected/ac3187ac-eebe-4584-9624-e4127b6ee040-kube-api-access-lg6g6\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.849087 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.849096 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac3187ac-eebe-4584-9624-e4127b6ee040-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.849105 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.849113 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac3187ac-eebe-4584-9624-e4127b6ee040-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.881801 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f464f8686-f4nfl"] Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.883463 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.885710 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.886000 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4t29g" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.886136 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.886329 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.887252 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.892151 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f464f8686-f4nfl"] Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.901843 5136 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.935345 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-config-data" (OuterVolumeSpecName: "config-data") pod "ac3187ac-eebe-4584-9624-e4127b6ee040" (UID: "ac3187ac-eebe-4584-9624-e4127b6ee040"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.950503 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-public-tls-certs\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.950545 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-internal-tls-certs\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.950580 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-config-data\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.950761 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.950848 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-scripts\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " 
pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.950876 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv6wk\" (UniqueName: \"kubernetes.io/projected/98f17780-5e89-47b5-a280-ff05d993aec1-kube-api-access-mv6wk\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.950928 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5249fb5b-8908-4b21-9ea3-28508854ce4a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.950967 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.950993 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-combined-ca-bundle\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.951032 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " 
pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.951055 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.951141 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5249fb5b-8908-4b21-9ea3-28508854ce4a-logs\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.951164 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx28m\" (UniqueName: \"kubernetes.io/projected/5249fb5b-8908-4b21-9ea3-28508854ce4a-kube-api-access-qx28m\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.951211 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.951295 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98f17780-5e89-47b5-a280-ff05d993aec1-logs\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:27 crc 
kubenswrapper[5136]: I0320 07:12:27.951377 5136 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.951398 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.952450 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5249fb5b-8908-4b21-9ea3-28508854ce4a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.952670 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.962646 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5249fb5b-8908-4b21-9ea3-28508854ce4a-logs\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.966950 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " 
pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.969462 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.973572 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.990324 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.003884 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.008939 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx28m\" (UniqueName: \"kubernetes.io/projected/5249fb5b-8908-4b21-9ea3-28508854ce4a-kube-api-access-qx28m\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.053248 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-public-tls-certs\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.053293 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-internal-tls-certs\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.053324 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-config-data\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.053381 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-scripts\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.053396 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv6wk\" (UniqueName: \"kubernetes.io/projected/98f17780-5e89-47b5-a280-ff05d993aec1-kube-api-access-mv6wk\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.053425 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-combined-ca-bundle\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.053493 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98f17780-5e89-47b5-a280-ff05d993aec1-logs\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.054049 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98f17780-5e89-47b5-a280-ff05d993aec1-logs\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.057976 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-public-tls-certs\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.058422 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-combined-ca-bundle\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.059176 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-config-data\") pod \"placement-f464f8686-f4nfl\" (UID: 
\"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.061703 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-internal-tls-certs\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.062557 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-scripts\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.069990 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.071038 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv6wk\" (UniqueName: \"kubernetes.io/projected/98f17780-5e89-47b5-a280-ff05d993aec1-kube-api-access-mv6wk\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.196576 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.219197 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.255429 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-fernet-keys\") pod \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.255503 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-scripts\") pod \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.255562 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-credential-keys\") pod \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.255608 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-config-data\") pod \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.255623 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdg86\" (UniqueName: \"kubernetes.io/projected/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-kube-api-access-wdg86\") pod \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.255676 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-combined-ca-bundle\") pod \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.263682 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "72e22e43-fccc-4ee4-a170-8ff8b9959c1d" (UID: "72e22e43-fccc-4ee4-a170-8ff8b9959c1d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.267341 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-scripts" (OuterVolumeSpecName: "scripts") pod "72e22e43-fccc-4ee4-a170-8ff8b9959c1d" (UID: "72e22e43-fccc-4ee4-a170-8ff8b9959c1d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.267489 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-kube-api-access-wdg86" (OuterVolumeSpecName: "kube-api-access-wdg86") pod "72e22e43-fccc-4ee4-a170-8ff8b9959c1d" (UID: "72e22e43-fccc-4ee4-a170-8ff8b9959c1d"). InnerVolumeSpecName "kube-api-access-wdg86". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.267556 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "72e22e43-fccc-4ee4-a170-8ff8b9959c1d" (UID: "72e22e43-fccc-4ee4-a170-8ff8b9959c1d"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.305547 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-config-data" (OuterVolumeSpecName: "config-data") pod "72e22e43-fccc-4ee4-a170-8ff8b9959c1d" (UID: "72e22e43-fccc-4ee4-a170-8ff8b9959c1d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.307039 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72e22e43-fccc-4ee4-a170-8ff8b9959c1d" (UID: "72e22e43-fccc-4ee4-a170-8ff8b9959c1d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.357872 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.357901 5136 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.357911 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.357921 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdg86\" (UniqueName: \"kubernetes.io/projected/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-kube-api-access-wdg86\") on node \"crc\" DevicePath \"\"" Mar 20 
07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.357930 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.357963 5136 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.413787 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1769e60f-b60b-4a9d-aa9c-57773220f7c0" path="/var/lib/kubelet/pods/1769e60f-b60b-4a9d-aa9c-57773220f7c0/volumes" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.642652 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ac3187ac-eebe-4584-9624-e4127b6ee040","Type":"ContainerDied","Data":"5e2b1f562a9d2caa514e6b0a9c5b2c39146dfbd7683f095229f8f6fd7616c3a3"} Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.642681 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.642711 5136 scope.go:117] "RemoveContainer" containerID="8bd6ba13b920fd9d372e08641d2f4dfd1411247734ac4e2a17f92e8c7045515a" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.645546 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xztql" event={"ID":"72e22e43-fccc-4ee4-a170-8ff8b9959c1d","Type":"ContainerDied","Data":"a9fa96c65fa536fe3aa83743698eb9e3fddbb0c4cc2f21d54eee2d77fd4acf9d"} Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.645579 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9fa96c65fa536fe3aa83743698eb9e3fddbb0c4cc2f21d54eee2d77fd4acf9d" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.645619 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.693310 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.705393 5136 scope.go:117] "RemoveContainer" containerID="c4d6db431c35fa9e07903364d5a4d9c0b236e4c6c9d596e00a75c84d0a2d318d" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.705999 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.717888 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.733264 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:12:28 crc kubenswrapper[5136]: E0320 07:12:28.733710 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72e22e43-fccc-4ee4-a170-8ff8b9959c1d" 
containerName="keystone-bootstrap" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.733725 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="72e22e43-fccc-4ee4-a170-8ff8b9959c1d" containerName="keystone-bootstrap" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.733934 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="72e22e43-fccc-4ee4-a170-8ff8b9959c1d" containerName="keystone-bootstrap" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.734974 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.741171 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.741295 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 20 07:12:28 crc kubenswrapper[5136]: W0320 07:12:28.746201 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5249fb5b_8908_4b21_9ea3_28508854ce4a.slice/crio-c7adb5a2a9c06da1244ea87915af7e6cfa3d0ce95887a1e45fa3f12d10155ea2 WatchSource:0}: Error finding container c7adb5a2a9c06da1244ea87915af7e6cfa3d0ce95887a1e45fa3f12d10155ea2: Status 404 returned error can't find the container with id c7adb5a2a9c06da1244ea87915af7e6cfa3d0ce95887a1e45fa3f12d10155ea2 Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.763470 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:12:28 crc kubenswrapper[5136]: W0320 07:12:28.766694 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98f17780_5e89_47b5_a280_ff05d993aec1.slice/crio-5651bf9b6915e66b83ba1121283006e83d01461bec1d3f8053fa31cecb4a7017 
WatchSource:0}: Error finding container 5651bf9b6915e66b83ba1121283006e83d01461bec1d3f8053fa31cecb4a7017: Status 404 returned error can't find the container with id 5651bf9b6915e66b83ba1121283006e83d01461bec1d3f8053fa31cecb4a7017 Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.785374 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f464f8686-f4nfl"] Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.795177 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-766d94c967-pb9qd"] Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.796768 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.803250 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6hlpf" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.803826 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.804082 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.804293 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.804542 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.806876 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.818510 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-766d94c967-pb9qd"] Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.870596 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrqvz\" (UniqueName: \"kubernetes.io/projected/f7a82425-91b7-43b8-b26e-ace42be9cdba-kube-api-access-zrqvz\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.870653 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a82425-91b7-43b8-b26e-ace42be9cdba-logs\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.870922 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-config-data\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.870990 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-scripts\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.871067 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.871157 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.871219 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.871369 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7a82425-91b7-43b8-b26e-ace42be9cdba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.972763 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.972843 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-config-data\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.972885 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.972907 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.972942 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-fernet-keys\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.972959 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7a82425-91b7-43b8-b26e-ace42be9cdba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.973000 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-internal-tls-certs\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.973019 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-zrqvz\" (UniqueName: \"kubernetes.io/projected/f7a82425-91b7-43b8-b26e-ace42be9cdba-kube-api-access-zrqvz\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.973035 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a82425-91b7-43b8-b26e-ace42be9cdba-logs\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.973063 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-698x2\" (UniqueName: \"kubernetes.io/projected/fab90141-26b4-4e46-a916-82190508d6e8-kube-api-access-698x2\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.973086 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-scripts\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.973105 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-public-tls-certs\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.973121 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-credential-keys\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.973150 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-config-data\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.973166 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-combined-ca-bundle\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.973191 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-scripts\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.974067 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.974332 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f7a82425-91b7-43b8-b26e-ace42be9cdba-logs\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.974730 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7a82425-91b7-43b8-b26e-ace42be9cdba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.981487 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.981532 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-config-data\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.991904 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.002379 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrqvz\" (UniqueName: \"kubernetes.io/projected/f7a82425-91b7-43b8-b26e-ace42be9cdba-kube-api-access-zrqvz\") pod 
\"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.010405 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-scripts\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.027026 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.064146 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.074352 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-internal-tls-certs\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.074454 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-698x2\" (UniqueName: \"kubernetes.io/projected/fab90141-26b4-4e46-a916-82190508d6e8-kube-api-access-698x2\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.074482 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-scripts\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.074501 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-public-tls-certs\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.074517 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-credential-keys\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.074546 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-combined-ca-bundle\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.074586 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-config-data\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.074634 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-fernet-keys\") pod 
\"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.081532 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-combined-ca-bundle\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.082607 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-fernet-keys\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.082789 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-public-tls-certs\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.083236 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-internal-tls-certs\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.087127 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-config-data\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd" Mar 20 
07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.091790 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-scripts\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.097227 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-credential-keys\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.105452 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-698x2\" (UniqueName: \"kubernetes.io/projected/fab90141-26b4-4e46-a916-82190508d6e8-kube-api-access-698x2\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.115749 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.229070 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-kxk7p" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.383355 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-config\") pod \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\" (UID: \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\") " Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.383497 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-combined-ca-bundle\") pod \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\" (UID: \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\") " Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.383544 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp59n\" (UniqueName: \"kubernetes.io/projected/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-kube-api-access-fp59n\") pod \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\" (UID: \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\") " Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.405130 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-kube-api-access-fp59n" (OuterVolumeSpecName: "kube-api-access-fp59n") pod "4f5241dc-9fdc-4e75-9924-fb00a2e6119d" (UID: "4f5241dc-9fdc-4e75-9924-fb00a2e6119d"). InnerVolumeSpecName "kube-api-access-fp59n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.426162 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f5241dc-9fdc-4e75-9924-fb00a2e6119d" (UID: "4f5241dc-9fdc-4e75-9924-fb00a2e6119d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.429971 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-config" (OuterVolumeSpecName: "config") pod "4f5241dc-9fdc-4e75-9924-fb00a2e6119d" (UID: "4f5241dc-9fdc-4e75-9924-fb00a2e6119d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.490442 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.490489 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp59n\" (UniqueName: \"kubernetes.io/projected/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-kube-api-access-fp59n\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.490500 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.563973 5136 scope.go:117] "RemoveContainer" containerID="6bbf8fa191e070a1c91f2d1ea94b4f26f7559168925696c34903d08a5a0065c5" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.676152 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kxk7p" event={"ID":"4f5241dc-9fdc-4e75-9924-fb00a2e6119d","Type":"ContainerDied","Data":"9c68bfb878007ca6b62fba813ddcba80cdee73ef5b85374234305710364f4b28"} Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.676187 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c68bfb878007ca6b62fba813ddcba80cdee73ef5b85374234305710364f4b28" Mar 20 07:12:29 crc 
kubenswrapper[5136]: I0320 07:12:29.676234 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kxk7p" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.702626 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f464f8686-f4nfl" event={"ID":"98f17780-5e89-47b5-a280-ff05d993aec1","Type":"ContainerStarted","Data":"ca789654434e2938058e89c8bf27e5d6e9f18a69d4566b7ab05c972c6642de95"} Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.702962 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f464f8686-f4nfl" event={"ID":"98f17780-5e89-47b5-a280-ff05d993aec1","Type":"ContainerStarted","Data":"d15adc608e61a192a134211ec88732de0cf0a72471089b746f1c3d17455f2ccd"} Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.702973 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f464f8686-f4nfl" event={"ID":"98f17780-5e89-47b5-a280-ff05d993aec1","Type":"ContainerStarted","Data":"5651bf9b6915e66b83ba1121283006e83d01461bec1d3f8053fa31cecb4a7017"} Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.703541 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.703562 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.720425 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5249fb5b-8908-4b21-9ea3-28508854ce4a","Type":"ContainerStarted","Data":"c7adb5a2a9c06da1244ea87915af7e6cfa3d0ce95887a1e45fa3f12d10155ea2"} Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.744521 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.745459 5136 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-f464f8686-f4nfl" podStartSLOduration=2.745443697 podStartE2EDuration="2.745443697s" podCreationTimestamp="2026-03-20 07:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:29.721527022 +0000 UTC m=+1381.980838173" watchObservedRunningTime="2026-03-20 07:12:29.745443697 +0000 UTC m=+1382.004754838" Mar 20 07:12:29 crc kubenswrapper[5136]: W0320 07:12:29.767241 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7a82425_91b7_43b8_b26e_ace42be9cdba.slice/crio-6c6e6f554a2daf084995a53a820c6c15f7723c013708ede47a5e369f225ae2a2 WatchSource:0}: Error finding container 6c6e6f554a2daf084995a53a820c6c15f7723c013708ede47a5e369f225ae2a2: Status 404 returned error can't find the container with id 6c6e6f554a2daf084995a53a820c6c15f7723c013708ede47a5e369f225ae2a2 Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.826772 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-766d94c967-pb9qd"] Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.012469 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7fb48775-2qcl2"] Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.012683 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" podUID="ec92c94f-350b-410f-af36-f232e43c51bc" containerName="dnsmasq-dns" containerID="cri-o://5b9f552fe91aa28f997b092d45d1079ab658cc3b43ebd7ab371c11c9bbbdd7ef" gracePeriod=10 Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.018129 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.054291 5136 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d8b7b7d5-ft2q7"] Mar 20 07:12:30 crc kubenswrapper[5136]: E0320 07:12:30.054867 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f5241dc-9fdc-4e75-9924-fb00a2e6119d" containerName="neutron-db-sync" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.054878 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f5241dc-9fdc-4e75-9924-fb00a2e6119d" containerName="neutron-db-sync" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.055054 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f5241dc-9fdc-4e75-9924-fb00a2e6119d" containerName="neutron-db-sync" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.057590 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.075645 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-564b95fd68-m2j52"] Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.077891 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-564b95fd68-m2j52" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.084827 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.085033 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-scqxf" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.085185 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.085353 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.111371 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d8b7b7d5-ft2q7"] Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.122204 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-564b95fd68-m2j52"] Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.209037 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-ovndb-tls-certs\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.209127 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-config\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.209151 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-ovsdbserver-nb\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.209169 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-dns-swift-storage-0\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.209195 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-combined-ca-bundle\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.209219 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-ovsdbserver-sb\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.209261 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-config\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.209279 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hrd4\" (UniqueName: \"kubernetes.io/projected/8c492e9e-5703-4622-bcb8-6d77327cd1af-kube-api-access-8hrd4\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.209308 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-httpd-config\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.209332 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45w7v\" (UniqueName: \"kubernetes.io/projected/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-kube-api-access-45w7v\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.209370 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-dns-svc\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.310804 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-config\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.310855 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8hrd4\" (UniqueName: \"kubernetes.io/projected/8c492e9e-5703-4622-bcb8-6d77327cd1af-kube-api-access-8hrd4\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.310882 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-httpd-config\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.310901 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45w7v\" (UniqueName: \"kubernetes.io/projected/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-kube-api-access-45w7v\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.310930 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-dns-svc\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.310945 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-ovndb-tls-certs\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.310997 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-config\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.311015 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-ovsdbserver-nb\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.311032 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-dns-swift-storage-0\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.311052 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-combined-ca-bundle\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.311081 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-ovsdbserver-sb\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.312471 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-ovsdbserver-sb\") 
pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.313516 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-config\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.314635 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-dns-swift-storage-0\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.315392 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-ovsdbserver-nb\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.321425 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-dns-svc\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.328235 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-ovndb-tls-certs\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52" 
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.332853 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-config\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.335774 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-combined-ca-bundle\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.338045 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hrd4\" (UniqueName: \"kubernetes.io/projected/8c492e9e-5703-4622-bcb8-6d77327cd1af-kube-api-access-8hrd4\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.338518 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45w7v\" (UniqueName: \"kubernetes.io/projected/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-kube-api-access-45w7v\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.340330 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-httpd-config\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.410951 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="ac3187ac-eebe-4584-9624-e4127b6ee040" path="/var/lib/kubelet/pods/ac3187ac-eebe-4584-9624-e4127b6ee040/volumes" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.521800 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.606104 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-564b95fd68-m2j52" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.746405 5136 generic.go:334] "Generic (PLEG): container finished" podID="16f28a76-f7a5-4980-a693-7bd078f3c128" containerID="8f39b26d5a3a98eb4a0bd3688d06d7a44225eee0ee50099fc60fd5816beb4256" exitCode=0 Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.746561 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jv7f9" event={"ID":"16f28a76-f7a5-4980-a693-7bd078f3c128","Type":"ContainerDied","Data":"8f39b26d5a3a98eb4a0bd3688d06d7a44225eee0ee50099fc60fd5816beb4256"} Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.748484 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-766d94c967-pb9qd" event={"ID":"fab90141-26b4-4e46-a916-82190508d6e8","Type":"ContainerStarted","Data":"55a5dab58a97a1d0db07eef0ac66d273a9614c32013fdde5757d7e6aa9ea047e"} Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.748523 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-766d94c967-pb9qd" event={"ID":"fab90141-26b4-4e46-a916-82190508d6e8","Type":"ContainerStarted","Data":"c19785656f47dd95cc1a27542636229f68d56209966c28654c4de9baa2a90613"} Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.748708 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.752242 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"5249fb5b-8908-4b21-9ea3-28508854ce4a","Type":"ContainerStarted","Data":"2e7ea895afe37d08e9fbef22c08d1dd0be01aa9e1e4c60d8c5a83cf678e40e12"} Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.755975 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7a82425-91b7-43b8-b26e-ace42be9cdba","Type":"ContainerStarted","Data":"6c6e6f554a2daf084995a53a820c6c15f7723c013708ede47a5e369f225ae2a2"} Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.759978 5136 generic.go:334] "Generic (PLEG): container finished" podID="ec92c94f-350b-410f-af36-f232e43c51bc" containerID="5b9f552fe91aa28f997b092d45d1079ab658cc3b43ebd7ab371c11c9bbbdd7ef" exitCode=0 Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.761718 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" event={"ID":"ec92c94f-350b-410f-af36-f232e43c51bc","Type":"ContainerDied","Data":"5b9f552fe91aa28f997b092d45d1079ab658cc3b43ebd7ab371c11c9bbbdd7ef"} Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.793678 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-766d94c967-pb9qd" podStartSLOduration=2.793663077 podStartE2EDuration="2.793663077s" podCreationTimestamp="2026-03-20 07:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:30.783622304 +0000 UTC m=+1383.042933455" watchObservedRunningTime="2026-03-20 07:12:30.793663077 +0000 UTC m=+1383.052974228" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.110044 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d8b7b7d5-ft2q7"] Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.327339 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.443062 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-config\") pod \"ec92c94f-350b-410f-af36-f232e43c51bc\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.443101 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-ovsdbserver-sb\") pod \"ec92c94f-350b-410f-af36-f232e43c51bc\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.443138 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-ovsdbserver-nb\") pod \"ec92c94f-350b-410f-af36-f232e43c51bc\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.443207 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-dns-swift-storage-0\") pod \"ec92c94f-350b-410f-af36-f232e43c51bc\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.443271 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd8c4\" (UniqueName: \"kubernetes.io/projected/ec92c94f-350b-410f-af36-f232e43c51bc-kube-api-access-vd8c4\") pod \"ec92c94f-350b-410f-af36-f232e43c51bc\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.443339 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-dns-svc\") pod \"ec92c94f-350b-410f-af36-f232e43c51bc\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.453696 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec92c94f-350b-410f-af36-f232e43c51bc-kube-api-access-vd8c4" (OuterVolumeSpecName: "kube-api-access-vd8c4") pod "ec92c94f-350b-410f-af36-f232e43c51bc" (UID: "ec92c94f-350b-410f-af36-f232e43c51bc"). InnerVolumeSpecName "kube-api-access-vd8c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.479989 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-564b95fd68-m2j52"] Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.496489 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ec92c94f-350b-410f-af36-f232e43c51bc" (UID: "ec92c94f-350b-410f-af36-f232e43c51bc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.517356 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ec92c94f-350b-410f-af36-f232e43c51bc" (UID: "ec92c94f-350b-410f-af36-f232e43c51bc"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.518068 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ec92c94f-350b-410f-af36-f232e43c51bc" (UID: "ec92c94f-350b-410f-af36-f232e43c51bc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.520294 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-config" (OuterVolumeSpecName: "config") pod "ec92c94f-350b-410f-af36-f232e43c51bc" (UID: "ec92c94f-350b-410f-af36-f232e43c51bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.521952 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ec92c94f-350b-410f-af36-f232e43c51bc" (UID: "ec92c94f-350b-410f-af36-f232e43c51bc"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.545949 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd8c4\" (UniqueName: \"kubernetes.io/projected/ec92c94f-350b-410f-af36-f232e43c51bc-kube-api-access-vd8c4\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.545978 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.545989 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.545999 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.546009 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.546026 5136 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.773243 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" event={"ID":"ec92c94f-350b-410f-af36-f232e43c51bc","Type":"ContainerDied","Data":"23b1640d39e4ae1160a3dad399449ea185d7930c6aa60e3ac354c8577e44362b"} Mar 20 07:12:31 crc 
kubenswrapper[5136]: I0320 07:12:31.773607 5136 scope.go:117] "RemoveContainer" containerID="5b9f552fe91aa28f997b092d45d1079ab658cc3b43ebd7ab371c11c9bbbdd7ef" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.773423 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.777754 5136 generic.go:334] "Generic (PLEG): container finished" podID="8c492e9e-5703-4622-bcb8-6d77327cd1af" containerID="9601c3b5171c0ad2c4a37d9af2b6800e2a7b9ef5252a7eafd6ffba3913617d30" exitCode=0 Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.777799 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" event={"ID":"8c492e9e-5703-4622-bcb8-6d77327cd1af","Type":"ContainerDied","Data":"9601c3b5171c0ad2c4a37d9af2b6800e2a7b9ef5252a7eafd6ffba3913617d30"} Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.777831 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" event={"ID":"8c492e9e-5703-4622-bcb8-6d77327cd1af","Type":"ContainerStarted","Data":"e8f7d39c70d06f5ebbf7c96df259d250580edf42a70ced6ee3a41c2d6954fc88"} Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.782653 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7a82425-91b7-43b8-b26e-ace42be9cdba","Type":"ContainerStarted","Data":"0955f2ff6e58a181eb4657826df44412140ece0a092f87584721929f1c23cd5d"} Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.782687 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7a82425-91b7-43b8-b26e-ace42be9cdba","Type":"ContainerStarted","Data":"0d813176fbff380f2ecf1396ef58dbd6653c9f7fc00b3a5aa2671557b6efffb1"} Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.787014 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"5249fb5b-8908-4b21-9ea3-28508854ce4a","Type":"ContainerStarted","Data":"7cf9a9b5f81baf8817adb933425de6d8ef8fbbd11c60012bf91c2922f7222f16"} Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.840041 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.840019698 podStartE2EDuration="4.840019698s" podCreationTimestamp="2026-03-20 07:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:31.823792113 +0000 UTC m=+1384.083103274" watchObservedRunningTime="2026-03-20 07:12:31.840019698 +0000 UTC m=+1384.099330849" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.853978 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7fb48775-2qcl2"] Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.860937 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d7fb48775-2qcl2"] Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.866976 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.866957528 podStartE2EDuration="3.866957528s" podCreationTimestamp="2026-03-20 07:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:31.865967567 +0000 UTC m=+1384.125278738" watchObservedRunningTime="2026-03-20 07:12:31.866957528 +0000 UTC m=+1384.126268679" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.131531 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-dc8db4fdb-hpjdg"] Mar 20 07:12:32 crc kubenswrapper[5136]: E0320 07:12:32.132304 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec92c94f-350b-410f-af36-f232e43c51bc" 
containerName="init" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.132315 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec92c94f-350b-410f-af36-f232e43c51bc" containerName="init" Mar 20 07:12:32 crc kubenswrapper[5136]: E0320 07:12:32.132342 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec92c94f-350b-410f-af36-f232e43c51bc" containerName="dnsmasq-dns" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.132348 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec92c94f-350b-410f-af36-f232e43c51bc" containerName="dnsmasq-dns" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.132506 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec92c94f-350b-410f-af36-f232e43c51bc" containerName="dnsmasq-dns" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.141066 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-dc8db4fdb-hpjdg"] Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.141169 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.261347 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-internal-tls-certs\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.261415 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd71646c-cb64-4a01-8076-449c812955d5-logs\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.261437 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-scripts\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.261490 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-config-data\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.261522 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktsr8\" (UniqueName: \"kubernetes.io/projected/bd71646c-cb64-4a01-8076-449c812955d5-kube-api-access-ktsr8\") pod \"placement-dc8db4fdb-hpjdg\" (UID: 
\"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.261554 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-public-tls-certs\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.261606 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-combined-ca-bundle\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.362858 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-config-data\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.362912 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktsr8\" (UniqueName: \"kubernetes.io/projected/bd71646c-cb64-4a01-8076-449c812955d5-kube-api-access-ktsr8\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.362948 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-public-tls-certs\") pod \"placement-dc8db4fdb-hpjdg\" (UID: 
\"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.362996 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-combined-ca-bundle\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.363020 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-internal-tls-certs\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.363056 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd71646c-cb64-4a01-8076-449c812955d5-logs\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.363078 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-scripts\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.363528 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd71646c-cb64-4a01-8076-449c812955d5-logs\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 
07:12:32.369113 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-internal-tls-certs\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.369896 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-config-data\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.373997 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-combined-ca-bundle\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.374658 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-scripts\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.375594 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-public-tls-certs\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.382397 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktsr8\" (UniqueName: 
\"kubernetes.io/projected/bd71646c-cb64-4a01-8076-449c812955d5-kube-api-access-ktsr8\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.406607 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec92c94f-350b-410f-af36-f232e43c51bc" path="/var/lib/kubelet/pods/ec92c94f-350b-410f-af36-f232e43c51bc/volumes" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.511905 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.760898 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6ff4f58fb9-7gtff"] Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.762915 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.767136 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.767359 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.786206 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6ff4f58fb9-7gtff"] Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.870389 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-httpd-config\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.870426 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-ovndb-tls-certs\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.870473 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-combined-ca-bundle\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.870578 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-public-tls-certs\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.870601 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj27q\" (UniqueName: \"kubernetes.io/projected/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-kube-api-access-mj27q\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.870648 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-internal-tls-certs\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.870703 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-config\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.972911 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-httpd-config\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.973265 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-ovndb-tls-certs\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.973321 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-combined-ca-bundle\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.973401 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-public-tls-certs\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.973427 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mj27q\" (UniqueName: \"kubernetes.io/projected/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-kube-api-access-mj27q\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.973469 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-internal-tls-certs\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.973517 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-config\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.983219 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-httpd-config\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.983289 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-public-tls-certs\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.984987 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-config\") pod 
\"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.989472 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-internal-tls-certs\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.996913 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-combined-ca-bundle\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.999157 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-ovndb-tls-certs\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:33 crc kubenswrapper[5136]: I0320 07:12:33.001003 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj27q\" (UniqueName: \"kubernetes.io/projected/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-kube-api-access-mj27q\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:33 crc kubenswrapper[5136]: I0320 07:12:33.098954 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.593640 5136 scope.go:117] "RemoveContainer" containerID="2ae57c8c056dcb5d29e8273f89d2edf49540ec43cbd77b2ca4a8816f0f65f160" Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.749050 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jv7f9" Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.810452 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6s9rd\" (UniqueName: \"kubernetes.io/projected/16f28a76-f7a5-4980-a693-7bd078f3c128-kube-api-access-6s9rd\") pod \"16f28a76-f7a5-4980-a693-7bd078f3c128\" (UID: \"16f28a76-f7a5-4980-a693-7bd078f3c128\") " Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.810784 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f28a76-f7a5-4980-a693-7bd078f3c128-combined-ca-bundle\") pod \"16f28a76-f7a5-4980-a693-7bd078f3c128\" (UID: \"16f28a76-f7a5-4980-a693-7bd078f3c128\") " Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.810878 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/16f28a76-f7a5-4980-a693-7bd078f3c128-db-sync-config-data\") pod \"16f28a76-f7a5-4980-a693-7bd078f3c128\" (UID: \"16f28a76-f7a5-4980-a693-7bd078f3c128\") " Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.814108 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16f28a76-f7a5-4980-a693-7bd078f3c128-kube-api-access-6s9rd" (OuterVolumeSpecName: "kube-api-access-6s9rd") pod "16f28a76-f7a5-4980-a693-7bd078f3c128" (UID: "16f28a76-f7a5-4980-a693-7bd078f3c128"). InnerVolumeSpecName "kube-api-access-6s9rd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.816840 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f28a76-f7a5-4980-a693-7bd078f3c128-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "16f28a76-f7a5-4980-a693-7bd078f3c128" (UID: "16f28a76-f7a5-4980-a693-7bd078f3c128"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.828066 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jv7f9" Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.828080 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jv7f9" event={"ID":"16f28a76-f7a5-4980-a693-7bd078f3c128","Type":"ContainerDied","Data":"a6c0c7cbe316e747e497558676af55967b6ed940767c0667d59d4da80f64920a"} Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.828113 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6c0c7cbe316e747e497558676af55967b6ed940767c0667d59d4da80f64920a" Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.830088 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-564b95fd68-m2j52" event={"ID":"2ae7d29f-d050-4d87-b59e-1237f7f6d48a","Type":"ContainerStarted","Data":"182398618c3a1531c9ad080ffbd4a768caa0163ecc71bb0c12e322558e13d0bb"} Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.880994 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f28a76-f7a5-4980-a693-7bd078f3c128-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16f28a76-f7a5-4980-a693-7bd078f3c128" (UID: "16f28a76-f7a5-4980-a693-7bd078f3c128"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.918914 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6s9rd\" (UniqueName: \"kubernetes.io/projected/16f28a76-f7a5-4980-a693-7bd078f3c128-kube-api-access-6s9rd\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.918941 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f28a76-f7a5-4980-a693-7bd078f3c128-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.918950 5136 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/16f28a76-f7a5-4980-a693-7bd078f3c128-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.104470 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-dc8db4fdb-hpjdg"] Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.337134 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6ff4f58fb9-7gtff"] Mar 20 07:12:35 crc kubenswrapper[5136]: W0320 07:12:35.348082 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c52887a_70a8_4d00_a1f9_a5677fa48d1f.slice/crio-6635a1786b5854425a3f89e3fd4433884c9eeba6fdc2878722b9acdad452ee38 WatchSource:0}: Error finding container 6635a1786b5854425a3f89e3fd4433884c9eeba6fdc2878722b9acdad452ee38: Status 404 returned error can't find the container with id 6635a1786b5854425a3f89e3fd4433884c9eeba6fdc2878722b9acdad452ee38 Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.842096 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ff72278d-b5e7-427b-8581-52ff89c57176","Type":"ContainerStarted","Data":"445411040c2cad21dcc73efad8a46c4699ac626429795c9aa6391147eec609d7"} Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.846095 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-564b95fd68-m2j52" event={"ID":"2ae7d29f-d050-4d87-b59e-1237f7f6d48a","Type":"ContainerStarted","Data":"024fcc0cd809e83faec73e5ba56c99a83ab40c7bc4ad09d07aea8ace13ee29fb"} Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.846133 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-564b95fd68-m2j52" event={"ID":"2ae7d29f-d050-4d87-b59e-1237f7f6d48a","Type":"ContainerStarted","Data":"df57270fa341245294b8409621c8d255f8f17bac5716eb17c148b55857569799"} Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.846913 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-564b95fd68-m2j52" Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.848035 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-llt2h" event={"ID":"2fc03366-82a1-4e30-a7e8-a06e16a8a14f","Type":"ContainerStarted","Data":"9cff72e5160a41a8305e76e0221624a76437830286641f5e18a9ed4e7ae3e23a"} Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.850924 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc8db4fdb-hpjdg" event={"ID":"bd71646c-cb64-4a01-8076-449c812955d5","Type":"ContainerStarted","Data":"605e2f1b6fdab04852864ae8ba9a1933cc6fbe478b172080fd10f5d23b52f0fe"} Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.850952 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc8db4fdb-hpjdg" event={"ID":"bd71646c-cb64-4a01-8076-449c812955d5","Type":"ContainerStarted","Data":"14f94b6d1dd07b874e83aed25b1716c42ede7203afe8fc38064921b976f5c65d"} Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.850963 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-dc8db4fdb-hpjdg" event={"ID":"bd71646c-cb64-4a01-8076-449c812955d5","Type":"ContainerStarted","Data":"456674fb963104b875873a874337c20143adc46f3a809c5e2ae04c7d773c4641"} Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.851492 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.851519 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.856426 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" event={"ID":"8c492e9e-5703-4622-bcb8-6d77327cd1af","Type":"ContainerStarted","Data":"8e1ca9a5a655afee1ca95c3cf5addf49787cbb523449118713907bd8465f184a"} Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.857005 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.867486 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6ff4f58fb9-7gtff" event={"ID":"5c52887a-70a8-4d00-a1f9-a5677fa48d1f","Type":"ContainerStarted","Data":"8fe888c22382b419f6b05cbe043d6eb25912198e1d48b45463c92862edd75153"} Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.867567 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6ff4f58fb9-7gtff" event={"ID":"5c52887a-70a8-4d00-a1f9-a5677fa48d1f","Type":"ContainerStarted","Data":"6635a1786b5854425a3f89e3fd4433884c9eeba6fdc2878722b9acdad452ee38"} Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.875566 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-564b95fd68-m2j52" podStartSLOduration=5.875548372 podStartE2EDuration="5.875548372s" podCreationTimestamp="2026-03-20 07:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:35.864420505 +0000 UTC m=+1388.123731656" watchObservedRunningTime="2026-03-20 07:12:35.875548372 +0000 UTC m=+1388.134859523" Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.886628 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-llt2h" podStartSLOduration=2.874945414 podStartE2EDuration="39.886610637s" podCreationTimestamp="2026-03-20 07:11:56 +0000 UTC" firstStartedPulling="2026-03-20 07:11:57.750745626 +0000 UTC m=+1350.010056777" lastFinishedPulling="2026-03-20 07:12:34.762410849 +0000 UTC m=+1387.021722000" observedRunningTime="2026-03-20 07:12:35.881680712 +0000 UTC m=+1388.140991873" watchObservedRunningTime="2026-03-20 07:12:35.886610637 +0000 UTC m=+1388.145921788" Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.904650 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" podStartSLOduration=6.904634038 podStartE2EDuration="6.904634038s" podCreationTimestamp="2026-03-20 07:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:35.904587657 +0000 UTC m=+1388.163898808" watchObservedRunningTime="2026-03-20 07:12:35.904634038 +0000 UTC m=+1388.163945179" Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.943888 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-dc8db4fdb-hpjdg" podStartSLOduration=3.938624617 podStartE2EDuration="3.938624617s" podCreationTimestamp="2026-03-20 07:12:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:35.926106558 +0000 UTC m=+1388.185417709" watchObservedRunningTime="2026-03-20 07:12:35.938624617 +0000 UTC m=+1388.197935768" Mar 20 07:12:36 crc 
kubenswrapper[5136]: I0320 07:12:36.100809 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-65ccfb89b4-s479g"] Mar 20 07:12:36 crc kubenswrapper[5136]: E0320 07:12:36.101241 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16f28a76-f7a5-4980-a693-7bd078f3c128" containerName="barbican-db-sync" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.101255 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f28a76-f7a5-4980-a693-7bd078f3c128" containerName="barbican-db-sync" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.101475 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="16f28a76-f7a5-4980-a693-7bd078f3c128" containerName="barbican-db-sync" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.102655 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.120785 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.121026 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tz5pc" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.121613 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.143127 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-78df67c79-bqz8t"] Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.144614 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-78df67c79-bqz8t" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.147120 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.159700 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-config-data-custom\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.159784 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctrhs\" (UniqueName: \"kubernetes.io/projected/2a59ab3d-3094-4e10-bbde-44479696f752-kube-api-access-ctrhs\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.159836 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a59ab3d-3094-4e10-bbde-44479696f752-logs\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.159862 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-combined-ca-bundle\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " 
pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.159895 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-config-data\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.188224 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-65ccfb89b4-s479g"] Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.206670 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-78df67c79-bqz8t"] Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.265580 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d8b7b7d5-ft2q7"] Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.275375 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-config-data\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.275455 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnkzp\" (UniqueName: \"kubernetes.io/projected/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-kube-api-access-gnkzp\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.275475 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-config-data-custom\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.275503 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-config-data-custom\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.275564 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctrhs\" (UniqueName: \"kubernetes.io/projected/2a59ab3d-3094-4e10-bbde-44479696f752-kube-api-access-ctrhs\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.275584 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-config-data\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.275618 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a59ab3d-3094-4e10-bbde-44479696f752-logs\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.275635 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-logs\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.275649 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-combined-ca-bundle\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.275677 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-combined-ca-bundle\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.277142 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a59ab3d-3094-4e10-bbde-44479696f752-logs\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.282022 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-config-data\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc 
kubenswrapper[5136]: I0320 07:12:36.283335 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-config-data-custom\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.283505 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-combined-ca-bundle\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.301419 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctrhs\" (UniqueName: \"kubernetes.io/projected/2a59ab3d-3094-4e10-bbde-44479696f752-kube-api-access-ctrhs\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.314457 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7df4c9958f-99prp"] Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.316102 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.335966 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7df4c9958f-99prp"] Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.364851 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-d86fb98dd-76pm8"] Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.376913 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnkzp\" (UniqueName: \"kubernetes.io/projected/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-kube-api-access-gnkzp\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.376959 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-config-data-custom\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.377026 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-config-data\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.377060 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-logs\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t" Mar 20 07:12:36 crc 
kubenswrapper[5136]: I0320 07:12:36.377078 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-combined-ca-bundle\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.379649 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-logs\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.381112 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-combined-ca-bundle\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.381222 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-config-data-custom\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.382433 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-config-data\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.389750 5136 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d86fb98dd-76pm8"] Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.390446 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.392517 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.401633 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnkzp\" (UniqueName: \"kubernetes.io/projected/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-kube-api-access-gnkzp\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.476193 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.490635 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-78df67c79-bqz8t" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.491141 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62m5b\" (UniqueName: \"kubernetes.io/projected/3e6c911d-6da1-440a-8d63-d61e68b0272c-kube-api-access-62m5b\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.491180 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx2gt\" (UniqueName: \"kubernetes.io/projected/8a79c65a-77e4-492c-bb32-5c562da1fe4c-kube-api-access-tx2gt\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.491202 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-combined-ca-bundle\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.491234 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-config-data\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.491291 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-ovsdbserver-nb\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.491313 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-config\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.491342 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-config-data-custom\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.491359 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a79c65a-77e4-492c-bb32-5c562da1fe4c-logs\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.491377 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-dns-swift-storage-0\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.491401 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-ovsdbserver-sb\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.492156 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-dns-svc\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.595236 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-dns-svc\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.595316 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62m5b\" (UniqueName: \"kubernetes.io/projected/3e6c911d-6da1-440a-8d63-d61e68b0272c-kube-api-access-62m5b\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.595356 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx2gt\" (UniqueName: \"kubernetes.io/projected/8a79c65a-77e4-492c-bb32-5c562da1fe4c-kube-api-access-tx2gt\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.595374 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-combined-ca-bundle\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.595404 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-config-data\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.595466 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-ovsdbserver-nb\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.595484 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-config\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.595526 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-config-data-custom\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.595539 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a79c65a-77e4-492c-bb32-5c562da1fe4c-logs\") pod 
\"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.595557 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-dns-swift-storage-0\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.595598 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-ovsdbserver-sb\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.596884 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-ovsdbserver-sb\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.597488 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-dns-svc\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.598968 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-config\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " 
pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.601501 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a79c65a-77e4-492c-bb32-5c562da1fe4c-logs\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.602438 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-dns-swift-storage-0\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.603343 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-ovsdbserver-nb\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.618067 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-config-data-custom\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.622741 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-config-data\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.622885 
5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-combined-ca-bundle\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.626918 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx2gt\" (UniqueName: \"kubernetes.io/projected/8a79c65a-77e4-492c-bb32-5c562da1fe4c-kube-api-access-tx2gt\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.635542 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62m5b\" (UniqueName: \"kubernetes.io/projected/3e6c911d-6da1-440a-8d63-d61e68b0272c-kube-api-access-62m5b\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.650422 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.716957 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.916801 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6ff4f58fb9-7gtff" event={"ID":"5c52887a-70a8-4d00-a1f9-a5677fa48d1f","Type":"ContainerStarted","Data":"f9410d85b9f6582bf241e2b71e3f09bc7bd5e4aee399a12327173bcac6224685"} Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.918993 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.936086 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6ff4f58fb9-7gtff" podStartSLOduration=4.936069815 podStartE2EDuration="4.936069815s" podCreationTimestamp="2026-03-20 07:12:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:36.933016099 +0000 UTC m=+1389.192327250" watchObservedRunningTime="2026-03-20 07:12:36.936069815 +0000 UTC m=+1389.195380966" Mar 20 07:12:37 crc kubenswrapper[5136]: I0320 07:12:37.083891 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-65ccfb89b4-s479g"] Mar 20 07:12:37 crc kubenswrapper[5136]: I0320 07:12:37.102078 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-78df67c79-bqz8t"] Mar 20 07:12:37 crc kubenswrapper[5136]: W0320 07:12:37.106493 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a59ab3d_3094_4e10_bbde_44479696f752.slice/crio-adeda094c452ea454b57acc36b81655df6fbdea86bb257845c27b0e1e0656a6f WatchSource:0}: Error finding container adeda094c452ea454b57acc36b81655df6fbdea86bb257845c27b0e1e0656a6f: Status 404 returned error can't find the container with id 
adeda094c452ea454b57acc36b81655df6fbdea86bb257845c27b0e1e0656a6f Mar 20 07:12:37 crc kubenswrapper[5136]: I0320 07:12:37.321151 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7df4c9958f-99prp"] Mar 20 07:12:37 crc kubenswrapper[5136]: I0320 07:12:37.339679 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d86fb98dd-76pm8"] Mar 20 07:12:37 crc kubenswrapper[5136]: W0320 07:12:37.365213 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e6c911d_6da1_440a_8d63_d61e68b0272c.slice/crio-f5741bc42c68687793c23bf8ad98b077172ee8bf2f31364fb5498b69a4e6d1bb WatchSource:0}: Error finding container f5741bc42c68687793c23bf8ad98b077172ee8bf2f31364fb5498b69a4e6d1bb: Status 404 returned error can't find the container with id f5741bc42c68687793c23bf8ad98b077172ee8bf2f31364fb5498b69a4e6d1bb Mar 20 07:12:37 crc kubenswrapper[5136]: I0320 07:12:37.940448 5136 generic.go:334] "Generic (PLEG): container finished" podID="3e6c911d-6da1-440a-8d63-d61e68b0272c" containerID="e302dc855c2d9ff6da9e98888618b35c93b6a40c5c6e6cb5260369e82c3ad72e" exitCode=0 Mar 20 07:12:37 crc kubenswrapper[5136]: I0320 07:12:37.940891 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7df4c9958f-99prp" event={"ID":"3e6c911d-6da1-440a-8d63-d61e68b0272c","Type":"ContainerDied","Data":"e302dc855c2d9ff6da9e98888618b35c93b6a40c5c6e6cb5260369e82c3ad72e"} Mar 20 07:12:37 crc kubenswrapper[5136]: I0320 07:12:37.940924 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7df4c9958f-99prp" event={"ID":"3e6c911d-6da1-440a-8d63-d61e68b0272c","Type":"ContainerStarted","Data":"f5741bc42c68687793c23bf8ad98b077172ee8bf2f31364fb5498b69a4e6d1bb"} Mar 20 07:12:37 crc kubenswrapper[5136]: I0320 07:12:37.950765 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78df67c79-bqz8t" 
event={"ID":"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0","Type":"ContainerStarted","Data":"87dc6bb8fc1b9abd24b71389abdb4a22e7af9a9d787041070ce4c3a66cfdd142"} Mar 20 07:12:37 crc kubenswrapper[5136]: I0320 07:12:37.963043 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d86fb98dd-76pm8" event={"ID":"8a79c65a-77e4-492c-bb32-5c562da1fe4c","Type":"ContainerStarted","Data":"e132fadbbb11e4fd9f33aeb8d6e1c3219f93836f26b4472b5f2584e22cc3dbe6"} Mar 20 07:12:37 crc kubenswrapper[5136]: I0320 07:12:37.963098 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d86fb98dd-76pm8" event={"ID":"8a79c65a-77e4-492c-bb32-5c562da1fe4c","Type":"ContainerStarted","Data":"906ac955356980032ab967bfee58aa1175878c2109f34d9bbc6cfcc9e1a56ade"} Mar 20 07:12:37 crc kubenswrapper[5136]: I0320 07:12:37.976241 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" event={"ID":"2a59ab3d-3094-4e10-bbde-44479696f752","Type":"ContainerStarted","Data":"adeda094c452ea454b57acc36b81655df6fbdea86bb257845c27b0e1e0656a6f"} Mar 20 07:12:37 crc kubenswrapper[5136]: I0320 07:12:37.976871 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" podUID="8c492e9e-5703-4622-bcb8-6d77327cd1af" containerName="dnsmasq-dns" containerID="cri-o://8e1ca9a5a655afee1ca95c3cf5addf49787cbb523449118713907bd8465f184a" gracePeriod=10 Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.070991 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.071040 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.133640 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-internal-api-0" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.151268 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.750759 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-64845646dd-wf28v"] Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.755575 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.761584 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.762007 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.772076 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-64845646dd-wf28v"] Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.786382 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-public-tls-certs\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.786423 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-config-data-custom\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.786488 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-internal-tls-certs\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.786541 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-config-data\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.786715 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-combined-ca-bundle\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.786769 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzlq9\" (UniqueName: \"kubernetes.io/projected/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-kube-api-access-jzlq9\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.786795 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-logs\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:38 crc 
kubenswrapper[5136]: I0320 07:12:38.887740 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-public-tls-certs\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.887787 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-config-data-custom\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.887890 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-internal-tls-certs\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.887947 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-config-data\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.887987 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-combined-ca-bundle\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 
07:12:38.888027 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzlq9\" (UniqueName: \"kubernetes.io/projected/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-kube-api-access-jzlq9\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.888049 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-logs\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.888567 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-logs\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.895449 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-config-data-custom\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.895622 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-config-data\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.896627 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-public-tls-certs\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.899447 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-combined-ca-bundle\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.916837 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzlq9\" (UniqueName: \"kubernetes.io/projected/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-kube-api-access-jzlq9\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.924320 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-internal-tls-certs\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.995099 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d86fb98dd-76pm8" event={"ID":"8a79c65a-77e4-492c-bb32-5c562da1fe4c","Type":"ContainerStarted","Data":"62d1984e6ca19370a4308bcabb96c8e6b7bf937053486fbdba44fb4199b84baf"} Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.996296 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.996328 5136 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.003923 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7df4c9958f-99prp" event={"ID":"3e6c911d-6da1-440a-8d63-d61e68b0272c","Type":"ContainerStarted","Data":"dbde86fea0118b25157f9df99a740b6aaef72df85e4f9d57789c87e9bc5ad6cd"} Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.004766 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.009066 5136 generic.go:334] "Generic (PLEG): container finished" podID="8c492e9e-5703-4622-bcb8-6d77327cd1af" containerID="8e1ca9a5a655afee1ca95c3cf5addf49787cbb523449118713907bd8465f184a" exitCode=0 Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.010718 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" event={"ID":"8c492e9e-5703-4622-bcb8-6d77327cd1af","Type":"ContainerDied","Data":"8e1ca9a5a655afee1ca95c3cf5addf49787cbb523449118713907bd8465f184a"} Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.010750 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.011348 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.023932 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-d86fb98dd-76pm8" podStartSLOduration=3.023911645 podStartE2EDuration="3.023911645s" podCreationTimestamp="2026-03-20 07:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:39.012858301 +0000 UTC m=+1391.272169452" 
watchObservedRunningTime="2026-03-20 07:12:39.023911645 +0000 UTC m=+1391.283222796" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.050900 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7df4c9958f-99prp" podStartSLOduration=3.050883226 podStartE2EDuration="3.050883226s" podCreationTimestamp="2026-03-20 07:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:39.044040283 +0000 UTC m=+1391.303351434" watchObservedRunningTime="2026-03-20 07:12:39.050883226 +0000 UTC m=+1391.310194377" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.064873 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.065166 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.080936 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.082946 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.139071 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.140023 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.192236 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-ovsdbserver-nb\") pod \"8c492e9e-5703-4622-bcb8-6d77327cd1af\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.192327 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-config\") pod \"8c492e9e-5703-4622-bcb8-6d77327cd1af\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.192423 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-dns-svc\") pod \"8c492e9e-5703-4622-bcb8-6d77327cd1af\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.192461 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hrd4\" (UniqueName: \"kubernetes.io/projected/8c492e9e-5703-4622-bcb8-6d77327cd1af-kube-api-access-8hrd4\") pod \"8c492e9e-5703-4622-bcb8-6d77327cd1af\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.192545 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-ovsdbserver-sb\") pod \"8c492e9e-5703-4622-bcb8-6d77327cd1af\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.192579 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-dns-swift-storage-0\") pod \"8c492e9e-5703-4622-bcb8-6d77327cd1af\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.214014 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c492e9e-5703-4622-bcb8-6d77327cd1af-kube-api-access-8hrd4" (OuterVolumeSpecName: "kube-api-access-8hrd4") pod "8c492e9e-5703-4622-bcb8-6d77327cd1af" (UID: "8c492e9e-5703-4622-bcb8-6d77327cd1af"). InnerVolumeSpecName "kube-api-access-8hrd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.250303 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8c492e9e-5703-4622-bcb8-6d77327cd1af" (UID: "8c492e9e-5703-4622-bcb8-6d77327cd1af"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.251324 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8c492e9e-5703-4622-bcb8-6d77327cd1af" (UID: "8c492e9e-5703-4622-bcb8-6d77327cd1af"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.252210 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-config" (OuterVolumeSpecName: "config") pod "8c492e9e-5703-4622-bcb8-6d77327cd1af" (UID: "8c492e9e-5703-4622-bcb8-6d77327cd1af"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.295316 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.301865 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.307849 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.308002 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hrd4\" (UniqueName: \"kubernetes.io/projected/8c492e9e-5703-4622-bcb8-6d77327cd1af-kube-api-access-8hrd4\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.316206 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8c492e9e-5703-4622-bcb8-6d77327cd1af" (UID: "8c492e9e-5703-4622-bcb8-6d77327cd1af"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.320972 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8c492e9e-5703-4622-bcb8-6d77327cd1af" (UID: "8c492e9e-5703-4622-bcb8-6d77327cd1af"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.410314 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.410358 5136 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:40 crc kubenswrapper[5136]: I0320 07:12:40.033484 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" event={"ID":"8c492e9e-5703-4622-bcb8-6d77327cd1af","Type":"ContainerDied","Data":"e8f7d39c70d06f5ebbf7c96df259d250580edf42a70ced6ee3a41c2d6954fc88"} Mar 20 07:12:40 crc kubenswrapper[5136]: I0320 07:12:40.034187 5136 scope.go:117] "RemoveContainer" containerID="8e1ca9a5a655afee1ca95c3cf5addf49787cbb523449118713907bd8465f184a" Mar 20 07:12:40 crc kubenswrapper[5136]: I0320 07:12:40.033578 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:40 crc kubenswrapper[5136]: I0320 07:12:40.038479 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 20 07:12:40 crc kubenswrapper[5136]: I0320 07:12:40.038704 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 20 07:12:40 crc kubenswrapper[5136]: I0320 07:12:40.104082 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d8b7b7d5-ft2q7"] Mar 20 07:12:40 crc kubenswrapper[5136]: I0320 07:12:40.111915 5136 scope.go:117] "RemoveContainer" containerID="9601c3b5171c0ad2c4a37d9af2b6800e2a7b9ef5252a7eafd6ffba3913617d30" Mar 20 07:12:40 crc kubenswrapper[5136]: I0320 07:12:40.115758 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d8b7b7d5-ft2q7"] Mar 20 07:12:40 crc kubenswrapper[5136]: I0320 07:12:40.185002 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-64845646dd-wf28v"] Mar 20 07:12:40 crc kubenswrapper[5136]: W0320 07:12:40.216871 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2bf7a9d_44f9_407f_8a6c_6bc56ddde30b.slice/crio-a17e5911dd44a70eaddf965e56b21ab05149f56b96de7f20bf6f4c657c514884 WatchSource:0}: Error finding container a17e5911dd44a70eaddf965e56b21ab05149f56b96de7f20bf6f4c657c514884: Status 404 returned error can't find the container with id a17e5911dd44a70eaddf965e56b21ab05149f56b96de7f20bf6f4c657c514884 Mar 20 07:12:40 crc kubenswrapper[5136]: I0320 07:12:40.410508 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c492e9e-5703-4622-bcb8-6d77327cd1af" path="/var/lib/kubelet/pods/8c492e9e-5703-4622-bcb8-6d77327cd1af/volumes" Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.051246 5136 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" event={"ID":"2a59ab3d-3094-4e10-bbde-44479696f752","Type":"ContainerStarted","Data":"afa35db5921ff57fdde3528ca1cd9c650dbf2f2ac6c46cf9723cca19a0edb997"} Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.051579 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" event={"ID":"2a59ab3d-3094-4e10-bbde-44479696f752","Type":"ContainerStarted","Data":"cd65718bfac09f4d934fe1bf3f629f5d852e12343f9a1b480d6984e7497c79aa"} Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.058906 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.060333 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64845646dd-wf28v" event={"ID":"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b","Type":"ContainerStarted","Data":"b31f76eec4146fd8f445255e6551fe34e753f46896981935d5fb62c7bb4ad9cd"} Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.060375 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64845646dd-wf28v" event={"ID":"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b","Type":"ContainerStarted","Data":"4829942c2c5a456ff9cc10622e102ac04c1aa13d5b8679ee118976d4497d7536"} Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.060384 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64845646dd-wf28v" event={"ID":"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b","Type":"ContainerStarted","Data":"a17e5911dd44a70eaddf965e56b21ab05149f56b96de7f20bf6f4c657c514884"} Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.060499 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.064412 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-worker-78df67c79-bqz8t" event={"ID":"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0","Type":"ContainerStarted","Data":"afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec"} Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.064442 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78df67c79-bqz8t" event={"ID":"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0","Type":"ContainerStarted","Data":"dd15344d154a5f0b2c84bd203f89400d82384bc6a127a0645edaeb2339b3bbd0"} Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.073801 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" podStartSLOduration=2.44589167 podStartE2EDuration="5.073785313s" podCreationTimestamp="2026-03-20 07:12:36 +0000 UTC" firstStartedPulling="2026-03-20 07:12:37.110877912 +0000 UTC m=+1389.370189063" lastFinishedPulling="2026-03-20 07:12:39.738771555 +0000 UTC m=+1391.998082706" observedRunningTime="2026-03-20 07:12:41.066450315 +0000 UTC m=+1393.325761466" watchObservedRunningTime="2026-03-20 07:12:41.073785313 +0000 UTC m=+1393.333096464" Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.076684 5136 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.110210 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-78df67c79-bqz8t" podStartSLOduration=2.499621845 podStartE2EDuration="5.110192958s" podCreationTimestamp="2026-03-20 07:12:36 +0000 UTC" firstStartedPulling="2026-03-20 07:12:37.103287556 +0000 UTC m=+1389.362598707" lastFinishedPulling="2026-03-20 07:12:39.713858659 +0000 UTC m=+1391.973169820" observedRunningTime="2026-03-20 07:12:41.109150915 +0000 UTC m=+1393.368462066" watchObservedRunningTime="2026-03-20 07:12:41.110192958 +0000 UTC m=+1393.369504109" Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.136982 5136 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-64845646dd-wf28v" podStartSLOduration=3.136952151 podStartE2EDuration="3.136952151s" podCreationTimestamp="2026-03-20 07:12:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:41.129658504 +0000 UTC m=+1393.388969655" watchObservedRunningTime="2026-03-20 07:12:41.136952151 +0000 UTC m=+1393.396263302" Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.221250 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 20 07:12:42 crc kubenswrapper[5136]: I0320 07:12:42.088110 5136 generic.go:334] "Generic (PLEG): container finished" podID="2fc03366-82a1-4e30-a7e8-a06e16a8a14f" containerID="9cff72e5160a41a8305e76e0221624a76437830286641f5e18a9ed4e7ae3e23a" exitCode=0 Mar 20 07:12:42 crc kubenswrapper[5136]: I0320 07:12:42.089205 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-llt2h" event={"ID":"2fc03366-82a1-4e30-a7e8-a06e16a8a14f","Type":"ContainerDied","Data":"9cff72e5160a41a8305e76e0221624a76437830286641f5e18a9ed4e7ae3e23a"} Mar 20 07:12:42 crc kubenswrapper[5136]: I0320 07:12:42.090216 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:42 crc kubenswrapper[5136]: I0320 07:12:42.538326 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 20 07:12:42 crc kubenswrapper[5136]: I0320 07:12:42.538415 5136 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 07:12:42 crc kubenswrapper[5136]: I0320 07:12:42.615734 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 20 07:12:45 crc kubenswrapper[5136]: I0320 07:12:45.996851 5136 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-llt2h" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.051537 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-combined-ca-bundle\") pod \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.051619 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-etc-machine-id\") pod \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.051694 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-db-sync-config-data\") pod \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.051794 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-scripts\") pod \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.051839 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4skhx\" (UniqueName: \"kubernetes.io/projected/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-kube-api-access-4skhx\") pod \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.051863 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-config-data\") pod \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.052085 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2fc03366-82a1-4e30-a7e8-a06e16a8a14f" (UID: "2fc03366-82a1-4e30-a7e8-a06e16a8a14f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.053312 5136 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.056282 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-kube-api-access-4skhx" (OuterVolumeSpecName: "kube-api-access-4skhx") pod "2fc03366-82a1-4e30-a7e8-a06e16a8a14f" (UID: "2fc03366-82a1-4e30-a7e8-a06e16a8a14f"). InnerVolumeSpecName "kube-api-access-4skhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.057027 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-scripts" (OuterVolumeSpecName: "scripts") pod "2fc03366-82a1-4e30-a7e8-a06e16a8a14f" (UID: "2fc03366-82a1-4e30-a7e8-a06e16a8a14f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.057868 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2fc03366-82a1-4e30-a7e8-a06e16a8a14f" (UID: "2fc03366-82a1-4e30-a7e8-a06e16a8a14f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.088966 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fc03366-82a1-4e30-a7e8-a06e16a8a14f" (UID: "2fc03366-82a1-4e30-a7e8-a06e16a8a14f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.107664 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-config-data" (OuterVolumeSpecName: "config-data") pod "2fc03366-82a1-4e30-a7e8-a06e16a8a14f" (UID: "2fc03366-82a1-4e30-a7e8-a06e16a8a14f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.126024 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-llt2h" event={"ID":"2fc03366-82a1-4e30-a7e8-a06e16a8a14f","Type":"ContainerDied","Data":"5200ac7ba7db438f0d107516ee128664a59b5fde08b29a5ce98e40e534824c47"} Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.126070 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5200ac7ba7db438f0d107516ee128664a59b5fde08b29a5ce98e40e534824c47" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.126126 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-llt2h" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.155340 5136 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.155381 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.155390 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4skhx\" (UniqueName: \"kubernetes.io/projected/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-kube-api-access-4skhx\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.155399 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.155407 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.653012 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.732801 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff6d84665-6bvps"] Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.733063 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" podUID="98a77e70-cc82-4a51-8475-d003a0ccf43e" containerName="dnsmasq-dns" containerID="cri-o://60532e8d3b2de1260b02b04d62ab8b4d0eed7842744135d5425179cf256cd7d4" gracePeriod=10 Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.159044 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff72278d-b5e7-427b-8581-52ff89c57176","Type":"ContainerStarted","Data":"306693c3fd00ea7d6ce01b03a7c8a7af984cbfe8170200b7df46fd72de431115"} Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.159661 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="ceilometer-central-agent" containerID="cri-o://7336863799fdb9fab29de80cd8bd1d394cbcbcf4fd209c912a6795c7665f330a" gracePeriod=30 Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.159921 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="sg-core" containerID="cri-o://445411040c2cad21dcc73efad8a46c4699ac626429795c9aa6391147eec609d7" gracePeriod=30 Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.159951 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 20 07:12:47 crc 
kubenswrapper[5136]: I0320 07:12:47.160013 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="proxy-httpd" containerID="cri-o://306693c3fd00ea7d6ce01b03a7c8a7af984cbfe8170200b7df46fd72de431115" gracePeriod=30 Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.160026 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="ceilometer-notification-agent" containerID="cri-o://cc654c94c6c668e03f2204d6c1f1eaaff73ba2c53ec4645dd69d881bd133a570" gracePeriod=30 Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.177785 5136 generic.go:334] "Generic (PLEG): container finished" podID="98a77e70-cc82-4a51-8475-d003a0ccf43e" containerID="60532e8d3b2de1260b02b04d62ab8b4d0eed7842744135d5425179cf256cd7d4" exitCode=0 Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.177868 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" event={"ID":"98a77e70-cc82-4a51-8475-d003a0ccf43e","Type":"ContainerDied","Data":"60532e8d3b2de1260b02b04d62ab8b4d0eed7842744135d5425179cf256cd7d4"} Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.196069 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.19249171 podStartE2EDuration="51.196051304s" podCreationTimestamp="2026-03-20 07:11:56 +0000 UTC" firstStartedPulling="2026-03-20 07:11:58.00598099 +0000 UTC m=+1350.265292141" lastFinishedPulling="2026-03-20 07:12:46.009540574 +0000 UTC m=+1398.268851735" observedRunningTime="2026-03-20 07:12:47.187215908 +0000 UTC m=+1399.446527069" watchObservedRunningTime="2026-03-20 07:12:47.196051304 +0000 UTC m=+1399.455362445" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.280844 5136 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cinder-scheduler-0"] Mar 20 07:12:47 crc kubenswrapper[5136]: E0320 07:12:47.281194 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c492e9e-5703-4622-bcb8-6d77327cd1af" containerName="dnsmasq-dns" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.281206 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c492e9e-5703-4622-bcb8-6d77327cd1af" containerName="dnsmasq-dns" Mar 20 07:12:47 crc kubenswrapper[5136]: E0320 07:12:47.281227 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c492e9e-5703-4622-bcb8-6d77327cd1af" containerName="init" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.281232 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c492e9e-5703-4622-bcb8-6d77327cd1af" containerName="init" Mar 20 07:12:47 crc kubenswrapper[5136]: E0320 07:12:47.281255 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc03366-82a1-4e30-a7e8-a06e16a8a14f" containerName="cinder-db-sync" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.281261 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc03366-82a1-4e30-a7e8-a06e16a8a14f" containerName="cinder-db-sync" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.281431 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c492e9e-5703-4622-bcb8-6d77327cd1af" containerName="dnsmasq-dns" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.281452 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fc03366-82a1-4e30-a7e8-a06e16a8a14f" containerName="cinder-db-sync" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.282519 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.289778 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ps866" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.290055 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.290413 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.290906 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.300543 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.348594 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8995fbb57-rp89c"] Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.356085 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.365270 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8995fbb57-rp89c"] Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.369163 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.377645 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d918240-f8fb-459f-a116-7fce9c0068a8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.377726 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.377855 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzhh2\" (UniqueName: \"kubernetes.io/projected/0d918240-f8fb-459f-a116-7fce9c0068a8-kube-api-access-nzhh2\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.377920 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-scripts\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.377959 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-config-data\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " 
pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.378006 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.481445 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-ovsdbserver-nb\") pod \"98a77e70-cc82-4a51-8475-d003a0ccf43e\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.481521 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-dns-svc\") pod \"98a77e70-cc82-4a51-8475-d003a0ccf43e\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.481565 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-dns-swift-storage-0\") pod \"98a77e70-cc82-4a51-8475-d003a0ccf43e\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.481654 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-config\") pod \"98a77e70-cc82-4a51-8475-d003a0ccf43e\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.481673 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-ovsdbserver-sb\") pod \"98a77e70-cc82-4a51-8475-d003a0ccf43e\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.481695 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7dlp\" (UniqueName: \"kubernetes.io/projected/98a77e70-cc82-4a51-8475-d003a0ccf43e-kube-api-access-n7dlp\") pod \"98a77e70-cc82-4a51-8475-d003a0ccf43e\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.481900 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq6zh\" (UniqueName: \"kubernetes.io/projected/200895ec-fcf9-436d-82d3-c26c198e1485-kube-api-access-cq6zh\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.481933 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-config\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.481952 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-dns-svc\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.481983 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-dns-swift-storage-0\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.482000 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzhh2\" (UniqueName: \"kubernetes.io/projected/0d918240-f8fb-459f-a116-7fce9c0068a8-kube-api-access-nzhh2\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.482057 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-scripts\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.482110 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-config-data\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.482142 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.482170 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d918240-f8fb-459f-a116-7fce9c0068a8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.482187 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-ovsdbserver-nb\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.482202 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-ovsdbserver-sb\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.482222 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.495326 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d918240-f8fb-459f-a116-7fce9c0068a8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.509181 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98a77e70-cc82-4a51-8475-d003a0ccf43e-kube-api-access-n7dlp" (OuterVolumeSpecName: "kube-api-access-n7dlp") pod "98a77e70-cc82-4a51-8475-d003a0ccf43e" (UID: "98a77e70-cc82-4a51-8475-d003a0ccf43e"). 
InnerVolumeSpecName "kube-api-access-n7dlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.513906 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.521570 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-scripts\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.521907 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-config-data\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.540048 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 20 07:12:47 crc kubenswrapper[5136]: E0320 07:12:47.541921 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a77e70-cc82-4a51-8475-d003a0ccf43e" containerName="init" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.541941 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a77e70-cc82-4a51-8475-d003a0ccf43e" containerName="init" Mar 20 07:12:47 crc kubenswrapper[5136]: E0320 07:12:47.541966 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a77e70-cc82-4a51-8475-d003a0ccf43e" containerName="dnsmasq-dns" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.541976 5136 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="98a77e70-cc82-4a51-8475-d003a0ccf43e" containerName="dnsmasq-dns" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.542311 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a77e70-cc82-4a51-8475-d003a0ccf43e" containerName="dnsmasq-dns" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.543777 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.547312 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.548016 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.556681 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzhh2\" (UniqueName: \"kubernetes.io/projected/0d918240-f8fb-459f-a116-7fce9c0068a8-kube-api-access-nzhh2\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.577446 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "98a77e70-cc82-4a51-8475-d003a0ccf43e" (UID: "98a77e70-cc82-4a51-8475-d003a0ccf43e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.584434 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f323747-95a7-4199-b250-bb5591a1c182-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.584549 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-config-data\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.584626 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-ovsdbserver-nb\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.584655 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-ovsdbserver-sb\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.584703 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f323747-95a7-4199-b250-bb5591a1c182-logs\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.584796 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-scripts\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.584912 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-config-data-custom\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.584961 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq6zh\" (UniqueName: \"kubernetes.io/projected/200895ec-fcf9-436d-82d3-c26c198e1485-kube-api-access-cq6zh\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.584988 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-config\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.585070 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-dns-svc\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.585176 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-dns-swift-storage-0\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.585205 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz48n\" (UniqueName: \"kubernetes.io/projected/1f323747-95a7-4199-b250-bb5591a1c182-kube-api-access-pz48n\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.585334 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.585457 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.585470 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7dlp\" (UniqueName: \"kubernetes.io/projected/98a77e70-cc82-4a51-8475-d003a0ccf43e-kube-api-access-n7dlp\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.586305 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-ovsdbserver-nb\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.587183 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-ovsdbserver-sb\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.589420 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-dns-svc\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.590373 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-config\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.594230 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-dns-swift-storage-0\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.599221 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-config" (OuterVolumeSpecName: "config") pod "98a77e70-cc82-4a51-8475-d003a0ccf43e" (UID: "98a77e70-cc82-4a51-8475-d003a0ccf43e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.599261 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.609776 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq6zh\" (UniqueName: \"kubernetes.io/projected/200895ec-fcf9-436d-82d3-c26c198e1485-kube-api-access-cq6zh\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.619299 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "98a77e70-cc82-4a51-8475-d003a0ccf43e" (UID: "98a77e70-cc82-4a51-8475-d003a0ccf43e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.621404 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "98a77e70-cc82-4a51-8475-d003a0ccf43e" (UID: "98a77e70-cc82-4a51-8475-d003a0ccf43e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.637810 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "98a77e70-cc82-4a51-8475-d003a0ccf43e" (UID: "98a77e70-cc82-4a51-8475-d003a0ccf43e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.687264 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-scripts\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.687318 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-config-data-custom\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.687392 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz48n\" (UniqueName: \"kubernetes.io/projected/1f323747-95a7-4199-b250-bb5591a1c182-kube-api-access-pz48n\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.687434 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.687466 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f323747-95a7-4199-b250-bb5591a1c182-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.687499 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-config-data\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.687519 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f323747-95a7-4199-b250-bb5591a1c182-logs\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.687567 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.687578 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.687587 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.687595 5136 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.687977 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f323747-95a7-4199-b250-bb5591a1c182-logs\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.691787 5136 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.692100 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f323747-95a7-4199-b250-bb5591a1c182-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.692391 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-scripts\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.694683 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-config-data-custom\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.697422 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.703824 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-config-data\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.704078 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.723442 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz48n\" (UniqueName: \"kubernetes.io/projected/1f323747-95a7-4199-b250-bb5591a1c182-kube-api-access-pz48n\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.886788 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.158773 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8995fbb57-rp89c"] Mar 20 07:12:48 crc kubenswrapper[5136]: W0320 07:12:48.162952 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod200895ec_fcf9_436d_82d3_c26c198e1485.slice/crio-1a1eab5f1e60735841a0d3aa3364f7f1355c8183de8ac566e94a25a08426cc8d WatchSource:0}: Error finding container 1a1eab5f1e60735841a0d3aa3364f7f1355c8183de8ac566e94a25a08426cc8d: Status 404 returned error can't find the container with id 1a1eab5f1e60735841a0d3aa3364f7f1355c8183de8ac566e94a25a08426cc8d Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.203781 5136 generic.go:334] "Generic (PLEG): container finished" podID="ff72278d-b5e7-427b-8581-52ff89c57176" containerID="306693c3fd00ea7d6ce01b03a7c8a7af984cbfe8170200b7df46fd72de431115" exitCode=0 Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.203944 5136 generic.go:334] "Generic (PLEG): container finished" podID="ff72278d-b5e7-427b-8581-52ff89c57176" containerID="445411040c2cad21dcc73efad8a46c4699ac626429795c9aa6391147eec609d7" exitCode=2 Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.203999 5136 generic.go:334] "Generic (PLEG): container finished" podID="ff72278d-b5e7-427b-8581-52ff89c57176" 
containerID="7336863799fdb9fab29de80cd8bd1d394cbcbcf4fd209c912a6795c7665f330a" exitCode=0 Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.203885 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff72278d-b5e7-427b-8581-52ff89c57176","Type":"ContainerDied","Data":"306693c3fd00ea7d6ce01b03a7c8a7af984cbfe8170200b7df46fd72de431115"} Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.204233 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff72278d-b5e7-427b-8581-52ff89c57176","Type":"ContainerDied","Data":"445411040c2cad21dcc73efad8a46c4699ac626429795c9aa6391147eec609d7"} Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.204300 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff72278d-b5e7-427b-8581-52ff89c57176","Type":"ContainerDied","Data":"7336863799fdb9fab29de80cd8bd1d394cbcbcf4fd209c912a6795c7665f330a"} Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.212350 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8995fbb57-rp89c" event={"ID":"200895ec-fcf9-436d-82d3-c26c198e1485","Type":"ContainerStarted","Data":"1a1eab5f1e60735841a0d3aa3364f7f1355c8183de8ac566e94a25a08426cc8d"} Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.229038 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" event={"ID":"98a77e70-cc82-4a51-8475-d003a0ccf43e","Type":"ContainerDied","Data":"b87a47428636e1da6de88425ef519d1313ec7d6e857d9dadf3b1f683fef7c84b"} Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.229084 5136 scope.go:117] "RemoveContainer" containerID="60532e8d3b2de1260b02b04d62ab8b4d0eed7842744135d5425179cf256cd7d4" Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.229267 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.267608 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.326436 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff6d84665-6bvps"] Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.335132 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7ff6d84665-6bvps"] Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.339564 5136 scope.go:117] "RemoveContainer" containerID="d619f45ef012eb3e909a4c91d853e16f0aac41aa3f7c34e99d85f79a8050ee1a" Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.421074 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98a77e70-cc82-4a51-8475-d003a0ccf43e" path="/var/lib/kubelet/pods/98a77e70-cc82-4a51-8475-d003a0ccf43e/volumes" Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.431498 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.448314 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.579213 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.272014 5136 generic.go:334] "Generic (PLEG): container finished" podID="ff72278d-b5e7-427b-8581-52ff89c57176" containerID="cc654c94c6c668e03f2204d6c1f1eaaff73ba2c53ec4645dd69d881bd133a570" exitCode=0 Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.272469 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ff72278d-b5e7-427b-8581-52ff89c57176","Type":"ContainerDied","Data":"cc654c94c6c668e03f2204d6c1f1eaaff73ba2c53ec4645dd69d881bd133a570"} Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.293142 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1f323747-95a7-4199-b250-bb5591a1c182","Type":"ContainerStarted","Data":"ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a"} Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.293181 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1f323747-95a7-4199-b250-bb5591a1c182","Type":"ContainerStarted","Data":"91ca9f6487866d4c1ccc97b9b95931b4963d53b6c7bb374292c03d89eecac13f"} Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.298083 5136 generic.go:334] "Generic (PLEG): container finished" podID="200895ec-fcf9-436d-82d3-c26c198e1485" containerID="3e0d0bab07ba893f2ec5b9f186f6e1ac58691443de33a6064347527effa3dc1f" exitCode=0 Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.298199 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8995fbb57-rp89c" event={"ID":"200895ec-fcf9-436d-82d3-c26c198e1485","Type":"ContainerDied","Data":"3e0d0bab07ba893f2ec5b9f186f6e1ac58691443de33a6064347527effa3dc1f"} Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.343933 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d918240-f8fb-459f-a116-7fce9c0068a8","Type":"ContainerStarted","Data":"044038db3dbd29add28a6376877ab76e790fddb6ebde5915cf5bb01e87957d6b"} Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.698654 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.743634 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-sg-core-conf-yaml\") pod \"ff72278d-b5e7-427b-8581-52ff89c57176\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.743692 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-config-data\") pod \"ff72278d-b5e7-427b-8581-52ff89c57176\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.743750 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-combined-ca-bundle\") pod \"ff72278d-b5e7-427b-8581-52ff89c57176\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.743856 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mmcd\" (UniqueName: \"kubernetes.io/projected/ff72278d-b5e7-427b-8581-52ff89c57176-kube-api-access-6mmcd\") pod \"ff72278d-b5e7-427b-8581-52ff89c57176\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.743910 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-scripts\") pod \"ff72278d-b5e7-427b-8581-52ff89c57176\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.743967 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ff72278d-b5e7-427b-8581-52ff89c57176-run-httpd\") pod \"ff72278d-b5e7-427b-8581-52ff89c57176\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.743990 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff72278d-b5e7-427b-8581-52ff89c57176-log-httpd\") pod \"ff72278d-b5e7-427b-8581-52ff89c57176\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.744742 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff72278d-b5e7-427b-8581-52ff89c57176-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ff72278d-b5e7-427b-8581-52ff89c57176" (UID: "ff72278d-b5e7-427b-8581-52ff89c57176"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.745087 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff72278d-b5e7-427b-8581-52ff89c57176-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ff72278d-b5e7-427b-8581-52ff89c57176" (UID: "ff72278d-b5e7-427b-8581-52ff89c57176"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.756947 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-scripts" (OuterVolumeSpecName: "scripts") pod "ff72278d-b5e7-427b-8581-52ff89c57176" (UID: "ff72278d-b5e7-427b-8581-52ff89c57176"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.766706 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff72278d-b5e7-427b-8581-52ff89c57176-kube-api-access-6mmcd" (OuterVolumeSpecName: "kube-api-access-6mmcd") pod "ff72278d-b5e7-427b-8581-52ff89c57176" (UID: "ff72278d-b5e7-427b-8581-52ff89c57176"). InnerVolumeSpecName "kube-api-access-6mmcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.785473 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ff72278d-b5e7-427b-8581-52ff89c57176" (UID: "ff72278d-b5e7-427b-8581-52ff89c57176"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.846347 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.846371 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff72278d-b5e7-427b-8581-52ff89c57176-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.846380 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff72278d-b5e7-427b-8581-52ff89c57176-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.846389 5136 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" 
Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.846399 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mmcd\" (UniqueName: \"kubernetes.io/projected/ff72278d-b5e7-427b-8581-52ff89c57176-kube-api-access-6mmcd\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.007695 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff72278d-b5e7-427b-8581-52ff89c57176" (UID: "ff72278d-b5e7-427b-8581-52ff89c57176"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.018931 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-config-data" (OuterVolumeSpecName: "config-data") pod "ff72278d-b5e7-427b-8581-52ff89c57176" (UID: "ff72278d-b5e7-427b-8581-52ff89c57176"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.053273 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.053404 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.148073 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.389094 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff72278d-b5e7-427b-8581-52ff89c57176","Type":"ContainerDied","Data":"b70abbe701b5afa37deb9280d8bab4f32e4ab209764879ee00b0808064143809"} Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.389123 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.389170 5136 scope.go:117] "RemoveContainer" containerID="306693c3fd00ea7d6ce01b03a7c8a7af984cbfe8170200b7df46fd72de431115" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.411225 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.411252 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8995fbb57-rp89c" event={"ID":"200895ec-fcf9-436d-82d3-c26c198e1485","Type":"ContainerStarted","Data":"01aa356b57e965220f79e7a24da86937ea014054be6bb673baf18c8bb2471582"} Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.411964 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d918240-f8fb-459f-a116-7fce9c0068a8","Type":"ContainerStarted","Data":"2cfb10b9d836c0f827d7cbf404deea18e90ee9f1642613e1791248f726bfa45a"} Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.429575 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8995fbb57-rp89c" podStartSLOduration=3.429557642 podStartE2EDuration="3.429557642s" podCreationTimestamp="2026-03-20 07:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:50.420930863 +0000 UTC m=+1402.680242034" watchObservedRunningTime="2026-03-20 07:12:50.429557642 +0000 UTC m=+1402.688868793" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.431522 5136 scope.go:117] "RemoveContainer" containerID="445411040c2cad21dcc73efad8a46c4699ac626429795c9aa6391147eec609d7" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.446613 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.463890 5136 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.499879 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:12:50 crc kubenswrapper[5136]: E0320 07:12:50.500318 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="sg-core" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.500335 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="sg-core" Mar 20 07:12:50 crc kubenswrapper[5136]: E0320 07:12:50.500342 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="ceilometer-central-agent" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.500349 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="ceilometer-central-agent" Mar 20 07:12:50 crc kubenswrapper[5136]: E0320 07:12:50.500367 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="ceilometer-notification-agent" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.500373 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="ceilometer-notification-agent" Mar 20 07:12:50 crc kubenswrapper[5136]: E0320 07:12:50.500381 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="proxy-httpd" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.500390 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="proxy-httpd" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.500542 5136 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="sg-core" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.500557 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="proxy-httpd" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.500573 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="ceilometer-notification-agent" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.500586 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="ceilometer-central-agent" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.502381 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.504977 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.505263 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.508979 5136 scope.go:117] "RemoveContainer" containerID="cc654c94c6c668e03f2204d6c1f1eaaff73ba2c53ec4645dd69d881bd133a570" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.511970 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.545938 5136 scope.go:117] "RemoveContainer" containerID="7336863799fdb9fab29de80cd8bd1d394cbcbcf4fd209c912a6795c7665f330a" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.565209 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.565317 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-log-httpd\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.565350 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.565405 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-run-httpd\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.565442 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-scripts\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.565535 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlljw\" (UniqueName: \"kubernetes.io/projected/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-kube-api-access-dlljw\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: 
I0320 07:12:50.565570 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-config-data\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.667119 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlljw\" (UniqueName: \"kubernetes.io/projected/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-kube-api-access-dlljw\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.667184 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-config-data\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.667227 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.667288 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-log-httpd\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.667320 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.667366 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-run-httpd\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.667396 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-scripts\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.668076 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-log-httpd\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.672487 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-run-httpd\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.673110 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.677511 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-scripts\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.685038 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-config-data\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.685939 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.690539 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlljw\" (UniqueName: \"kubernetes.io/projected/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-kube-api-access-dlljw\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.822466 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.302479 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.353213 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.364467 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.421879 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d918240-f8fb-459f-a116-7fce9c0068a8","Type":"ContainerStarted","Data":"b81f7d930b2cc73362e82a5dc550922c5559167f66d5246607f3e9e7db350a2e"} Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.430221 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1f323747-95a7-4199-b250-bb5591a1c182","Type":"ContainerStarted","Data":"27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95"} Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.430390 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="1f323747-95a7-4199-b250-bb5591a1c182" containerName="cinder-api-log" containerID="cri-o://ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a" gracePeriod=30 Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.430752 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.430868 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="1f323747-95a7-4199-b250-bb5591a1c182" containerName="cinder-api" 
containerID="cri-o://27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95" gracePeriod=30 Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.433861 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96e356ef-4c69-41e5-b9ae-14c7faadf1b2","Type":"ContainerStarted","Data":"ffa0508cf624ffc95ed314dc6a07f83a9f8e8ec653c0427813fff7d7bbe42409"} Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.437141 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-d86fb98dd-76pm8"] Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.437352 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-d86fb98dd-76pm8" podUID="8a79c65a-77e4-492c-bb32-5c562da1fe4c" containerName="barbican-api-log" containerID="cri-o://e132fadbbb11e4fd9f33aeb8d6e1c3219f93836f26b4472b5f2584e22cc3dbe6" gracePeriod=30 Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.437496 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-d86fb98dd-76pm8" podUID="8a79c65a-77e4-492c-bb32-5c562da1fe4c" containerName="barbican-api" containerID="cri-o://62d1984e6ca19370a4308bcabb96c8e6b7bf937053486fbdba44fb4199b84baf" gracePeriod=30 Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.460671 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.870813025 podStartE2EDuration="4.460650078s" podCreationTimestamp="2026-03-20 07:12:47 +0000 UTC" firstStartedPulling="2026-03-20 07:12:48.286001184 +0000 UTC m=+1400.545312365" lastFinishedPulling="2026-03-20 07:12:48.875838267 +0000 UTC m=+1401.135149418" observedRunningTime="2026-03-20 07:12:51.454421653 +0000 UTC m=+1403.713732824" watchObservedRunningTime="2026-03-20 07:12:51.460650078 +0000 UTC m=+1403.719961219" Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.481176 5136 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.481160946 podStartE2EDuration="4.481160946s" podCreationTimestamp="2026-03-20 07:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:51.475294943 +0000 UTC m=+1403.734606094" watchObservedRunningTime="2026-03-20 07:12:51.481160946 +0000 UTC m=+1403.740472097" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.155057 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" podUID="98a77e70-cc82-4a51-8475-d003a0ccf43e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.145:5353: i/o timeout" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.247199 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.400501 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-scripts\") pod \"1f323747-95a7-4199-b250-bb5591a1c182\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.400894 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f323747-95a7-4199-b250-bb5591a1c182-etc-machine-id\") pod \"1f323747-95a7-4199-b250-bb5591a1c182\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.400953 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-combined-ca-bundle\") pod \"1f323747-95a7-4199-b250-bb5591a1c182\" (UID: 
\"1f323747-95a7-4199-b250-bb5591a1c182\") " Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.400979 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-config-data\") pod \"1f323747-95a7-4199-b250-bb5591a1c182\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.401006 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f323747-95a7-4199-b250-bb5591a1c182-logs\") pod \"1f323747-95a7-4199-b250-bb5591a1c182\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.401046 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-config-data-custom\") pod \"1f323747-95a7-4199-b250-bb5591a1c182\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.401085 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz48n\" (UniqueName: \"kubernetes.io/projected/1f323747-95a7-4199-b250-bb5591a1c182-kube-api-access-pz48n\") pod \"1f323747-95a7-4199-b250-bb5591a1c182\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.405924 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f323747-95a7-4199-b250-bb5591a1c182-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1f323747-95a7-4199-b250-bb5591a1c182" (UID: "1f323747-95a7-4199-b250-bb5591a1c182"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.407622 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1f323747-95a7-4199-b250-bb5591a1c182" (UID: "1f323747-95a7-4199-b250-bb5591a1c182"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.407961 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f323747-95a7-4199-b250-bb5591a1c182-logs" (OuterVolumeSpecName: "logs") pod "1f323747-95a7-4199-b250-bb5591a1c182" (UID: "1f323747-95a7-4199-b250-bb5591a1c182"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.408225 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-scripts" (OuterVolumeSpecName: "scripts") pod "1f323747-95a7-4199-b250-bb5591a1c182" (UID: "1f323747-95a7-4199-b250-bb5591a1c182"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.412982 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" path="/var/lib/kubelet/pods/ff72278d-b5e7-427b-8581-52ff89c57176/volumes" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.436052 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f323747-95a7-4199-b250-bb5591a1c182-kube-api-access-pz48n" (OuterVolumeSpecName: "kube-api-access-pz48n") pod "1f323747-95a7-4199-b250-bb5591a1c182" (UID: "1f323747-95a7-4199-b250-bb5591a1c182"). InnerVolumeSpecName "kube-api-access-pz48n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.441060 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f323747-95a7-4199-b250-bb5591a1c182" (UID: "1f323747-95a7-4199-b250-bb5591a1c182"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.459531 5136 generic.go:334] "Generic (PLEG): container finished" podID="8a79c65a-77e4-492c-bb32-5c562da1fe4c" containerID="e132fadbbb11e4fd9f33aeb8d6e1c3219f93836f26b4472b5f2584e22cc3dbe6" exitCode=143 Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.459632 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d86fb98dd-76pm8" event={"ID":"8a79c65a-77e4-492c-bb32-5c562da1fe4c","Type":"ContainerDied","Data":"e132fadbbb11e4fd9f33aeb8d6e1c3219f93836f26b4472b5f2584e22cc3dbe6"} Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.463249 5136 generic.go:334] "Generic (PLEG): container finished" podID="1f323747-95a7-4199-b250-bb5591a1c182" containerID="27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95" exitCode=0 Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.463289 5136 generic.go:334] "Generic (PLEG): container finished" podID="1f323747-95a7-4199-b250-bb5591a1c182" containerID="ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a" exitCode=143 Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.463440 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.464267 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1f323747-95a7-4199-b250-bb5591a1c182","Type":"ContainerDied","Data":"27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95"} Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.464299 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1f323747-95a7-4199-b250-bb5591a1c182","Type":"ContainerDied","Data":"ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a"} Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.464313 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1f323747-95a7-4199-b250-bb5591a1c182","Type":"ContainerDied","Data":"91ca9f6487866d4c1ccc97b9b95931b4963d53b6c7bb374292c03d89eecac13f"} Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.464331 5136 scope.go:117] "RemoveContainer" containerID="27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.470412 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96e356ef-4c69-41e5-b9ae-14c7faadf1b2","Type":"ContainerStarted","Data":"da66db3f467f27617e353e5b7cb9122c8ae01fca01e9365d62e83fc76b60055a"} Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.470903 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-config-data" (OuterVolumeSpecName: "config-data") pod "1f323747-95a7-4199-b250-bb5591a1c182" (UID: "1f323747-95a7-4199-b250-bb5591a1c182"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.487770 5136 scope.go:117] "RemoveContainer" containerID="ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.503541 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.503852 5136 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f323747-95a7-4199-b250-bb5591a1c182-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.503862 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.503871 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.503880 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f323747-95a7-4199-b250-bb5591a1c182-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.503888 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.503896 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz48n\" (UniqueName: 
\"kubernetes.io/projected/1f323747-95a7-4199-b250-bb5591a1c182-kube-api-access-pz48n\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.522379 5136 scope.go:117] "RemoveContainer" containerID="27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95" Mar 20 07:12:52 crc kubenswrapper[5136]: E0320 07:12:52.523203 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95\": container with ID starting with 27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95 not found: ID does not exist" containerID="27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.523290 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95"} err="failed to get container status \"27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95\": rpc error: code = NotFound desc = could not find container \"27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95\": container with ID starting with 27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95 not found: ID does not exist" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.523630 5136 scope.go:117] "RemoveContainer" containerID="ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a" Mar 20 07:12:52 crc kubenswrapper[5136]: E0320 07:12:52.524182 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a\": container with ID starting with ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a not found: ID does not exist" 
containerID="ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.524287 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a"} err="failed to get container status \"ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a\": rpc error: code = NotFound desc = could not find container \"ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a\": container with ID starting with ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a not found: ID does not exist" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.524374 5136 scope.go:117] "RemoveContainer" containerID="27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.524747 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95"} err="failed to get container status \"27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95\": rpc error: code = NotFound desc = could not find container \"27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95\": container with ID starting with 27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95 not found: ID does not exist" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.524857 5136 scope.go:117] "RemoveContainer" containerID="ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.525080 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a"} err="failed to get container status \"ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a\": rpc error: code = NotFound desc = could 
not find container \"ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a\": container with ID starting with ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a not found: ID does not exist" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.692991 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.807139 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.818576 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.832526 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 20 07:12:52 crc kubenswrapper[5136]: E0320 07:12:52.832867 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f323747-95a7-4199-b250-bb5591a1c182" containerName="cinder-api-log" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.832883 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f323747-95a7-4199-b250-bb5591a1c182" containerName="cinder-api-log" Mar 20 07:12:52 crc kubenswrapper[5136]: E0320 07:12:52.832901 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f323747-95a7-4199-b250-bb5591a1c182" containerName="cinder-api" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.832907 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f323747-95a7-4199-b250-bb5591a1c182" containerName="cinder-api" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.833065 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f323747-95a7-4199-b250-bb5591a1c182" containerName="cinder-api-log" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.833090 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f323747-95a7-4199-b250-bb5591a1c182" 
containerName="cinder-api" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.833940 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.836951 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.838773 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.839061 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.903010 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.910214 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.910321 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76d08c01-d488-4f36-9998-7f074633c7c5-logs\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.910456 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-config-data-custom\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:52 crc 
kubenswrapper[5136]: I0320 07:12:52.910570 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-config-data\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.910704 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-public-tls-certs\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.910784 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76d08c01-d488-4f36-9998-7f074633c7c5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.910839 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scwjn\" (UniqueName: \"kubernetes.io/projected/76d08c01-d488-4f36-9998-7f074633c7c5-kube-api-access-scwjn\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.910858 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-scripts\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.910874 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.013050 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76d08c01-d488-4f36-9998-7f074633c7c5-logs\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.013448 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-config-data-custom\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.013519 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-config-data\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.013598 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-public-tls-certs\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.013663 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76d08c01-d488-4f36-9998-7f074633c7c5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" 
Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.013815 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scwjn\" (UniqueName: \"kubernetes.io/projected/76d08c01-d488-4f36-9998-7f074633c7c5-kube-api-access-scwjn\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.013855 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-scripts\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.013873 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76d08c01-d488-4f36-9998-7f074633c7c5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.013878 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.014029 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.014115 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/76d08c01-d488-4f36-9998-7f074633c7c5-logs\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.019101 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-scripts\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.019469 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.019844 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-public-tls-certs\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.019847 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-config-data-custom\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.020282 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.021583 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-config-data\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.033580 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scwjn\" (UniqueName: \"kubernetes.io/projected/76d08c01-d488-4f36-9998-7f074633c7c5-kube-api-access-scwjn\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.157640 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.485881 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96e356ef-4c69-41e5-b9ae-14c7faadf1b2","Type":"ContainerStarted","Data":"9103ade7cd09df25f48e36378a2bb9597e36c48024e753bdbfbdd5238e038dae"} Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.621163 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 20 07:12:53 crc kubenswrapper[5136]: W0320 07:12:53.625274 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76d08c01_d488_4f36_9998_7f074633c7c5.slice/crio-aab757c2dc94939d0797a6c8423da847cee9bc95e622fe61ec74ea5928df97cd WatchSource:0}: Error finding container aab757c2dc94939d0797a6c8423da847cee9bc95e622fe61ec74ea5928df97cd: Status 404 returned error can't find the container with id aab757c2dc94939d0797a6c8423da847cee9bc95e622fe61ec74ea5928df97cd Mar 20 07:12:54 crc kubenswrapper[5136]: I0320 07:12:54.411538 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f323747-95a7-4199-b250-bb5591a1c182" 
path="/var/lib/kubelet/pods/1f323747-95a7-4199-b250-bb5591a1c182/volumes" Mar 20 07:12:54 crc kubenswrapper[5136]: I0320 07:12:54.507200 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96e356ef-4c69-41e5-b9ae-14c7faadf1b2","Type":"ContainerStarted","Data":"d634927012184728bf43eb92652caca4a427eb3140fa0f169ebc49e4b1103bd6"} Mar 20 07:12:54 crc kubenswrapper[5136]: I0320 07:12:54.509205 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"76d08c01-d488-4f36-9998-7f074633c7c5","Type":"ContainerStarted","Data":"c15b277e6d0d090e0e5755609decc556ab1c1f03a878f14749a60fdfeeec941e"} Mar 20 07:12:54 crc kubenswrapper[5136]: I0320 07:12:54.509246 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"76d08c01-d488-4f36-9998-7f074633c7c5","Type":"ContainerStarted","Data":"aab757c2dc94939d0797a6c8423da847cee9bc95e622fe61ec74ea5928df97cd"} Mar 20 07:12:54 crc kubenswrapper[5136]: I0320 07:12:54.610052 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d86fb98dd-76pm8" podUID="8a79c65a-77e4-492c-bb32-5c562da1fe4c" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": read tcp 10.217.0.2:35134->10.217.0.163:9311: read: connection reset by peer" Mar 20 07:12:54 crc kubenswrapper[5136]: I0320 07:12:54.610330 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d86fb98dd-76pm8" podUID="8a79c65a-77e4-492c-bb32-5c562da1fe4c" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": read tcp 10.217.0.2:35130->10.217.0.163:9311: read: connection reset by peer" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.048116 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.249716 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-config-data-custom\") pod \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.249760 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-combined-ca-bundle\") pod \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.249893 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a79c65a-77e4-492c-bb32-5c562da1fe4c-logs\") pod \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.249980 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx2gt\" (UniqueName: \"kubernetes.io/projected/8a79c65a-77e4-492c-bb32-5c562da1fe4c-kube-api-access-tx2gt\") pod \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.250028 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-config-data\") pod \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.250470 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/8a79c65a-77e4-492c-bb32-5c562da1fe4c-logs" (OuterVolumeSpecName: "logs") pod "8a79c65a-77e4-492c-bb32-5c562da1fe4c" (UID: "8a79c65a-77e4-492c-bb32-5c562da1fe4c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.256027 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a79c65a-77e4-492c-bb32-5c562da1fe4c-kube-api-access-tx2gt" (OuterVolumeSpecName: "kube-api-access-tx2gt") pod "8a79c65a-77e4-492c-bb32-5c562da1fe4c" (UID: "8a79c65a-77e4-492c-bb32-5c562da1fe4c"). InnerVolumeSpecName "kube-api-access-tx2gt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.258910 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8a79c65a-77e4-492c-bb32-5c562da1fe4c" (UID: "8a79c65a-77e4-492c-bb32-5c562da1fe4c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.281177 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a79c65a-77e4-492c-bb32-5c562da1fe4c" (UID: "8a79c65a-77e4-492c-bb32-5c562da1fe4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.303211 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-config-data" (OuterVolumeSpecName: "config-data") pod "8a79c65a-77e4-492c-bb32-5c562da1fe4c" (UID: "8a79c65a-77e4-492c-bb32-5c562da1fe4c"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.352223 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a79c65a-77e4-492c-bb32-5c562da1fe4c-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.352502 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx2gt\" (UniqueName: \"kubernetes.io/projected/8a79c65a-77e4-492c-bb32-5c562da1fe4c-kube-api-access-tx2gt\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.352732 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.352950 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.353084 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.523946 5136 generic.go:334] "Generic (PLEG): container finished" podID="8a79c65a-77e4-492c-bb32-5c562da1fe4c" containerID="62d1984e6ca19370a4308bcabb96c8e6b7bf937053486fbdba44fb4199b84baf" exitCode=0 Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.524061 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.524657 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d86fb98dd-76pm8" event={"ID":"8a79c65a-77e4-492c-bb32-5c562da1fe4c","Type":"ContainerDied","Data":"62d1984e6ca19370a4308bcabb96c8e6b7bf937053486fbdba44fb4199b84baf"} Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.524778 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d86fb98dd-76pm8" event={"ID":"8a79c65a-77e4-492c-bb32-5c562da1fe4c","Type":"ContainerDied","Data":"906ac955356980032ab967bfee58aa1175878c2109f34d9bbc6cfcc9e1a56ade"} Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.524852 5136 scope.go:117] "RemoveContainer" containerID="62d1984e6ca19370a4308bcabb96c8e6b7bf937053486fbdba44fb4199b84baf" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.530067 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"76d08c01-d488-4f36-9998-7f074633c7c5","Type":"ContainerStarted","Data":"f3795d92724d61612c4e998a075d7ebdd89fee122f5c02cbcebdad3f46cd4b7c"} Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.530190 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.558878 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.5588546450000003 podStartE2EDuration="3.558854645s" podCreationTimestamp="2026-03-20 07:12:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:55.551436794 +0000 UTC m=+1407.810747935" watchObservedRunningTime="2026-03-20 07:12:55.558854645 +0000 UTC m=+1407.818165816" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.575202 5136 scope.go:117] "RemoveContainer" 
containerID="e132fadbbb11e4fd9f33aeb8d6e1c3219f93836f26b4472b5f2584e22cc3dbe6" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.580641 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-d86fb98dd-76pm8"] Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.597553 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-d86fb98dd-76pm8"] Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.600564 5136 scope.go:117] "RemoveContainer" containerID="62d1984e6ca19370a4308bcabb96c8e6b7bf937053486fbdba44fb4199b84baf" Mar 20 07:12:55 crc kubenswrapper[5136]: E0320 07:12:55.601493 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62d1984e6ca19370a4308bcabb96c8e6b7bf937053486fbdba44fb4199b84baf\": container with ID starting with 62d1984e6ca19370a4308bcabb96c8e6b7bf937053486fbdba44fb4199b84baf not found: ID does not exist" containerID="62d1984e6ca19370a4308bcabb96c8e6b7bf937053486fbdba44fb4199b84baf" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.601536 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62d1984e6ca19370a4308bcabb96c8e6b7bf937053486fbdba44fb4199b84baf"} err="failed to get container status \"62d1984e6ca19370a4308bcabb96c8e6b7bf937053486fbdba44fb4199b84baf\": rpc error: code = NotFound desc = could not find container \"62d1984e6ca19370a4308bcabb96c8e6b7bf937053486fbdba44fb4199b84baf\": container with ID starting with 62d1984e6ca19370a4308bcabb96c8e6b7bf937053486fbdba44fb4199b84baf not found: ID does not exist" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.601559 5136 scope.go:117] "RemoveContainer" containerID="e132fadbbb11e4fd9f33aeb8d6e1c3219f93836f26b4472b5f2584e22cc3dbe6" Mar 20 07:12:55 crc kubenswrapper[5136]: E0320 07:12:55.602150 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"e132fadbbb11e4fd9f33aeb8d6e1c3219f93836f26b4472b5f2584e22cc3dbe6\": container with ID starting with e132fadbbb11e4fd9f33aeb8d6e1c3219f93836f26b4472b5f2584e22cc3dbe6 not found: ID does not exist" containerID="e132fadbbb11e4fd9f33aeb8d6e1c3219f93836f26b4472b5f2584e22cc3dbe6" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.602179 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e132fadbbb11e4fd9f33aeb8d6e1c3219f93836f26b4472b5f2584e22cc3dbe6"} err="failed to get container status \"e132fadbbb11e4fd9f33aeb8d6e1c3219f93836f26b4472b5f2584e22cc3dbe6\": rpc error: code = NotFound desc = could not find container \"e132fadbbb11e4fd9f33aeb8d6e1c3219f93836f26b4472b5f2584e22cc3dbe6\": container with ID starting with e132fadbbb11e4fd9f33aeb8d6e1c3219f93836f26b4472b5f2584e22cc3dbe6 not found: ID does not exist" Mar 20 07:12:56 crc kubenswrapper[5136]: I0320 07:12:56.418192 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a79c65a-77e4-492c-bb32-5c562da1fe4c" path="/var/lib/kubelet/pods/8a79c65a-77e4-492c-bb32-5c562da1fe4c/volumes" Mar 20 07:12:56 crc kubenswrapper[5136]: I0320 07:12:56.544207 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96e356ef-4c69-41e5-b9ae-14c7faadf1b2","Type":"ContainerStarted","Data":"11805be8e1ecba0ecec0d4173e4555e07b6e94d856d2b60561b740f2c3a233f1"} Mar 20 07:12:56 crc kubenswrapper[5136]: I0320 07:12:56.580388 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.291974957 podStartE2EDuration="6.58033075s" podCreationTimestamp="2026-03-20 07:12:50 +0000 UTC" firstStartedPulling="2026-03-20 07:12:51.358550735 +0000 UTC m=+1403.617861876" lastFinishedPulling="2026-03-20 07:12:55.646906508 +0000 UTC m=+1407.906217669" observedRunningTime="2026-03-20 07:12:56.573620301 +0000 UTC m=+1408.832931492" watchObservedRunningTime="2026-03-20 
07:12:56.58033075 +0000 UTC m=+1408.839641911" Mar 20 07:12:57 crc kubenswrapper[5136]: I0320 07:12:57.555243 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 20 07:12:57 crc kubenswrapper[5136]: I0320 07:12:57.706114 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:57 crc kubenswrapper[5136]: I0320 07:12:57.810808 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7df4c9958f-99prp"] Mar 20 07:12:57 crc kubenswrapper[5136]: I0320 07:12:57.814738 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7df4c9958f-99prp" podUID="3e6c911d-6da1-440a-8d63-d61e68b0272c" containerName="dnsmasq-dns" containerID="cri-o://dbde86fea0118b25157f9df99a740b6aaef72df85e4f9d57789c87e9bc5ad6cd" gracePeriod=10 Mar 20 07:12:57 crc kubenswrapper[5136]: I0320 07:12:57.979340 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.041395 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.355787 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.418502 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-dns-swift-storage-0\") pod \"3e6c911d-6da1-440a-8d63-d61e68b0272c\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.418627 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-config\") pod \"3e6c911d-6da1-440a-8d63-d61e68b0272c\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.418703 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-ovsdbserver-sb\") pod \"3e6c911d-6da1-440a-8d63-d61e68b0272c\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.418869 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62m5b\" (UniqueName: \"kubernetes.io/projected/3e6c911d-6da1-440a-8d63-d61e68b0272c-kube-api-access-62m5b\") pod \"3e6c911d-6da1-440a-8d63-d61e68b0272c\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.418937 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-dns-svc\") pod \"3e6c911d-6da1-440a-8d63-d61e68b0272c\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.418972 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-ovsdbserver-nb\") pod \"3e6c911d-6da1-440a-8d63-d61e68b0272c\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.474185 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e6c911d-6da1-440a-8d63-d61e68b0272c-kube-api-access-62m5b" (OuterVolumeSpecName: "kube-api-access-62m5b") pod "3e6c911d-6da1-440a-8d63-d61e68b0272c" (UID: "3e6c911d-6da1-440a-8d63-d61e68b0272c"). InnerVolumeSpecName "kube-api-access-62m5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.529641 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62m5b\" (UniqueName: \"kubernetes.io/projected/3e6c911d-6da1-440a-8d63-d61e68b0272c-kube-api-access-62m5b\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.534495 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3e6c911d-6da1-440a-8d63-d61e68b0272c" (UID: "3e6c911d-6da1-440a-8d63-d61e68b0272c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.550085 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-config" (OuterVolumeSpecName: "config") pod "3e6c911d-6da1-440a-8d63-d61e68b0272c" (UID: "3e6c911d-6da1-440a-8d63-d61e68b0272c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.587959 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3e6c911d-6da1-440a-8d63-d61e68b0272c" (UID: "3e6c911d-6da1-440a-8d63-d61e68b0272c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.594381 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3e6c911d-6da1-440a-8d63-d61e68b0272c" (UID: "3e6c911d-6da1-440a-8d63-d61e68b0272c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.594724 5136 generic.go:334] "Generic (PLEG): container finished" podID="3e6c911d-6da1-440a-8d63-d61e68b0272c" containerID="dbde86fea0118b25157f9df99a740b6aaef72df85e4f9d57789c87e9bc5ad6cd" exitCode=0 Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.595000 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0d918240-f8fb-459f-a116-7fce9c0068a8" containerName="cinder-scheduler" containerID="cri-o://2cfb10b9d836c0f827d7cbf404deea18e90ee9f1642613e1791248f726bfa45a" gracePeriod=30 Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.595375 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.595703 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7df4c9958f-99prp" event={"ID":"3e6c911d-6da1-440a-8d63-d61e68b0272c","Type":"ContainerDied","Data":"dbde86fea0118b25157f9df99a740b6aaef72df85e4f9d57789c87e9bc5ad6cd"} Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.595737 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7df4c9958f-99prp" event={"ID":"3e6c911d-6da1-440a-8d63-d61e68b0272c","Type":"ContainerDied","Data":"f5741bc42c68687793c23bf8ad98b077172ee8bf2f31364fb5498b69a4e6d1bb"} Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.595752 5136 scope.go:117] "RemoveContainer" containerID="dbde86fea0118b25157f9df99a740b6aaef72df85e4f9d57789c87e9bc5ad6cd" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.595800 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0d918240-f8fb-459f-a116-7fce9c0068a8" containerName="probe" containerID="cri-o://b81f7d930b2cc73362e82a5dc550922c5559167f66d5246607f3e9e7db350a2e" gracePeriod=30 Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.622338 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3e6c911d-6da1-440a-8d63-d61e68b0272c" (UID: "3e6c911d-6da1-440a-8d63-d61e68b0272c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.632206 5136 scope.go:117] "RemoveContainer" containerID="e302dc855c2d9ff6da9e98888618b35c93b6a40c5c6e6cb5260369e82c3ad72e" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.633140 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.633172 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.633184 5136 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.633193 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.633201 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.652722 5136 scope.go:117] "RemoveContainer" containerID="dbde86fea0118b25157f9df99a740b6aaef72df85e4f9d57789c87e9bc5ad6cd" Mar 20 07:12:58 crc kubenswrapper[5136]: E0320 07:12:58.653166 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbde86fea0118b25157f9df99a740b6aaef72df85e4f9d57789c87e9bc5ad6cd\": 
container with ID starting with dbde86fea0118b25157f9df99a740b6aaef72df85e4f9d57789c87e9bc5ad6cd not found: ID does not exist" containerID="dbde86fea0118b25157f9df99a740b6aaef72df85e4f9d57789c87e9bc5ad6cd" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.653196 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbde86fea0118b25157f9df99a740b6aaef72df85e4f9d57789c87e9bc5ad6cd"} err="failed to get container status \"dbde86fea0118b25157f9df99a740b6aaef72df85e4f9d57789c87e9bc5ad6cd\": rpc error: code = NotFound desc = could not find container \"dbde86fea0118b25157f9df99a740b6aaef72df85e4f9d57789c87e9bc5ad6cd\": container with ID starting with dbde86fea0118b25157f9df99a740b6aaef72df85e4f9d57789c87e9bc5ad6cd not found: ID does not exist" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.653216 5136 scope.go:117] "RemoveContainer" containerID="e302dc855c2d9ff6da9e98888618b35c93b6a40c5c6e6cb5260369e82c3ad72e" Mar 20 07:12:58 crc kubenswrapper[5136]: E0320 07:12:58.653465 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e302dc855c2d9ff6da9e98888618b35c93b6a40c5c6e6cb5260369e82c3ad72e\": container with ID starting with e302dc855c2d9ff6da9e98888618b35c93b6a40c5c6e6cb5260369e82c3ad72e not found: ID does not exist" containerID="e302dc855c2d9ff6da9e98888618b35c93b6a40c5c6e6cb5260369e82c3ad72e" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.653486 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e302dc855c2d9ff6da9e98888618b35c93b6a40c5c6e6cb5260369e82c3ad72e"} err="failed to get container status \"e302dc855c2d9ff6da9e98888618b35c93b6a40c5c6e6cb5260369e82c3ad72e\": rpc error: code = NotFound desc = could not find container \"e302dc855c2d9ff6da9e98888618b35c93b6a40c5c6e6cb5260369e82c3ad72e\": container with ID starting with 
e302dc855c2d9ff6da9e98888618b35c93b6a40c5c6e6cb5260369e82c3ad72e not found: ID does not exist" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.924482 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7df4c9958f-99prp"] Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.931698 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7df4c9958f-99prp"] Mar 20 07:12:59 crc kubenswrapper[5136]: I0320 07:12:59.924307 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:59 crc kubenswrapper[5136]: I0320 07:12:59.956050 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:13:00 crc kubenswrapper[5136]: I0320 07:13:00.405934 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e6c911d-6da1-440a-8d63-d61e68b0272c" path="/var/lib/kubelet/pods/3e6c911d-6da1-440a-8d63-d61e68b0272c/volumes" Mar 20 07:13:00 crc kubenswrapper[5136]: I0320 07:13:00.620689 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-564b95fd68-m2j52" Mar 20 07:13:00 crc kubenswrapper[5136]: I0320 07:13:00.627474 5136 generic.go:334] "Generic (PLEG): container finished" podID="0d918240-f8fb-459f-a116-7fce9c0068a8" containerID="b81f7d930b2cc73362e82a5dc550922c5559167f66d5246607f3e9e7db350a2e" exitCode=0 Mar 20 07:13:00 crc kubenswrapper[5136]: I0320 07:13:00.627709 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d918240-f8fb-459f-a116-7fce9c0068a8","Type":"ContainerDied","Data":"b81f7d930b2cc73362e82a5dc550922c5559167f66d5246607f3e9e7db350a2e"} Mar 20 07:13:01 crc kubenswrapper[5136]: I0320 07:13:01.141441 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:13:01 crc kubenswrapper[5136]: I0320 
07:13:01.640898 5136 generic.go:334] "Generic (PLEG): container finished" podID="0d918240-f8fb-459f-a116-7fce9c0068a8" containerID="2cfb10b9d836c0f827d7cbf404deea18e90ee9f1642613e1791248f726bfa45a" exitCode=0 Mar 20 07:13:01 crc kubenswrapper[5136]: I0320 07:13:01.641084 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d918240-f8fb-459f-a116-7fce9c0068a8","Type":"ContainerDied","Data":"2cfb10b9d836c0f827d7cbf404deea18e90ee9f1642613e1791248f726bfa45a"} Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.011250 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.101940 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-config-data\") pod \"0d918240-f8fb-459f-a116-7fce9c0068a8\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.102036 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-config-data-custom\") pod \"0d918240-f8fb-459f-a116-7fce9c0068a8\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.102061 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzhh2\" (UniqueName: \"kubernetes.io/projected/0d918240-f8fb-459f-a116-7fce9c0068a8-kube-api-access-nzhh2\") pod \"0d918240-f8fb-459f-a116-7fce9c0068a8\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.102090 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/0d918240-f8fb-459f-a116-7fce9c0068a8-etc-machine-id\") pod \"0d918240-f8fb-459f-a116-7fce9c0068a8\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.102154 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-scripts\") pod \"0d918240-f8fb-459f-a116-7fce9c0068a8\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.102230 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-combined-ca-bundle\") pod \"0d918240-f8fb-459f-a116-7fce9c0068a8\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.115701 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d918240-f8fb-459f-a116-7fce9c0068a8-kube-api-access-nzhh2" (OuterVolumeSpecName: "kube-api-access-nzhh2") pod "0d918240-f8fb-459f-a116-7fce9c0068a8" (UID: "0d918240-f8fb-459f-a116-7fce9c0068a8"). InnerVolumeSpecName "kube-api-access-nzhh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.115785 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d918240-f8fb-459f-a116-7fce9c0068a8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0d918240-f8fb-459f-a116-7fce9c0068a8" (UID: "0d918240-f8fb-459f-a116-7fce9c0068a8"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.132020 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0d918240-f8fb-459f-a116-7fce9c0068a8" (UID: "0d918240-f8fb-459f-a116-7fce9c0068a8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.178983 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-scripts" (OuterVolumeSpecName: "scripts") pod "0d918240-f8fb-459f-a116-7fce9c0068a8" (UID: "0d918240-f8fb-459f-a116-7fce9c0068a8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.204600 5136 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d918240-f8fb-459f-a116-7fce9c0068a8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.204917 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.204927 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.204935 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzhh2\" (UniqueName: \"kubernetes.io/projected/0d918240-f8fb-459f-a116-7fce9c0068a8-kube-api-access-nzhh2\") on node \"crc\" DevicePath \"\"" Mar 20 
07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.223229 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d918240-f8fb-459f-a116-7fce9c0068a8" (UID: "0d918240-f8fb-459f-a116-7fce9c0068a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.288586 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-config-data" (OuterVolumeSpecName: "config-data") pod "0d918240-f8fb-459f-a116-7fce9c0068a8" (UID: "0d918240-f8fb-459f-a116-7fce9c0068a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.306695 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.306729 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.651866 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.651859 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d918240-f8fb-459f-a116-7fce9c0068a8","Type":"ContainerDied","Data":"044038db3dbd29add28a6376877ab76e790fddb6ebde5915cf5bb01e87957d6b"} Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.652478 5136 scope.go:117] "RemoveContainer" containerID="b81f7d930b2cc73362e82a5dc550922c5559167f66d5246607f3e9e7db350a2e" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.694463 5136 scope.go:117] "RemoveContainer" containerID="2cfb10b9d836c0f827d7cbf404deea18e90ee9f1642613e1791248f726bfa45a" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.700865 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.715588 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.732412 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 07:13:02 crc kubenswrapper[5136]: E0320 07:13:02.732730 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e6c911d-6da1-440a-8d63-d61e68b0272c" containerName="dnsmasq-dns" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.732747 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e6c911d-6da1-440a-8d63-d61e68b0272c" containerName="dnsmasq-dns" Mar 20 07:13:02 crc kubenswrapper[5136]: E0320 07:13:02.732767 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d918240-f8fb-459f-a116-7fce9c0068a8" containerName="cinder-scheduler" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.732774 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d918240-f8fb-459f-a116-7fce9c0068a8" containerName="cinder-scheduler" Mar 20 07:13:02 crc 
kubenswrapper[5136]: E0320 07:13:02.732782 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a79c65a-77e4-492c-bb32-5c562da1fe4c" containerName="barbican-api-log" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.732788 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a79c65a-77e4-492c-bb32-5c562da1fe4c" containerName="barbican-api-log" Mar 20 07:13:02 crc kubenswrapper[5136]: E0320 07:13:02.732797 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d918240-f8fb-459f-a116-7fce9c0068a8" containerName="probe" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.732802 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d918240-f8fb-459f-a116-7fce9c0068a8" containerName="probe" Mar 20 07:13:02 crc kubenswrapper[5136]: E0320 07:13:02.732814 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a79c65a-77e4-492c-bb32-5c562da1fe4c" containerName="barbican-api" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.733345 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a79c65a-77e4-492c-bb32-5c562da1fe4c" containerName="barbican-api" Mar 20 07:13:02 crc kubenswrapper[5136]: E0320 07:13:02.733375 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e6c911d-6da1-440a-8d63-d61e68b0272c" containerName="init" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.733385 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e6c911d-6da1-440a-8d63-d61e68b0272c" containerName="init" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.733566 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a79c65a-77e4-492c-bb32-5c562da1fe4c" containerName="barbican-api" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.733585 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e6c911d-6da1-440a-8d63-d61e68b0272c" containerName="dnsmasq-dns" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.733599 5136 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0d918240-f8fb-459f-a116-7fce9c0068a8" containerName="cinder-scheduler" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.733622 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d918240-f8fb-459f-a116-7fce9c0068a8" containerName="probe" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.733642 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a79c65a-77e4-492c-bb32-5c562da1fe4c" containerName="barbican-api-log" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.734651 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.738130 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.748237 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.818312 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.818393 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/31adef78-59fe-4327-9586-0c12177c7bb7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.818429 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.818773 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-482rz\" (UniqueName: \"kubernetes.io/projected/31adef78-59fe-4327-9586-0c12177c7bb7-kube-api-access-482rz\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.818947 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-config-data\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.818998 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-scripts\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.920591 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/31adef78-59fe-4327-9586-0c12177c7bb7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.920637 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.920730 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-482rz\" (UniqueName: \"kubernetes.io/projected/31adef78-59fe-4327-9586-0c12177c7bb7-kube-api-access-482rz\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.920738 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/31adef78-59fe-4327-9586-0c12177c7bb7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.920875 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-config-data\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.920931 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-scripts\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.921024 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " 
pod="openstack/cinder-scheduler-0" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.925741 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.926018 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.930758 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-config-data\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.932008 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-scripts\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.939800 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-482rz\" (UniqueName: \"kubernetes.io/projected/31adef78-59fe-4327-9586-0c12177c7bb7-kube-api-access-482rz\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0" Mar 20 07:13:03 crc kubenswrapper[5136]: I0320 07:13:03.048569 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0"
Mar 20 07:13:03 crc kubenswrapper[5136]: I0320 07:13:03.118123 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6ff4f58fb9-7gtff"
Mar 20 07:13:03 crc kubenswrapper[5136]: I0320 07:13:03.187217 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-564b95fd68-m2j52"]
Mar 20 07:13:03 crc kubenswrapper[5136]: I0320 07:13:03.187412 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-564b95fd68-m2j52" podUID="2ae7d29f-d050-4d87-b59e-1237f7f6d48a" containerName="neutron-api" containerID="cri-o://df57270fa341245294b8409621c8d255f8f17bac5716eb17c148b55857569799" gracePeriod=30
Mar 20 07:13:03 crc kubenswrapper[5136]: I0320 07:13:03.187752 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-564b95fd68-m2j52" podUID="2ae7d29f-d050-4d87-b59e-1237f7f6d48a" containerName="neutron-httpd" containerID="cri-o://024fcc0cd809e83faec73e5ba56c99a83ab40c7bc4ad09d07aea8ace13ee29fb" gracePeriod=30
Mar 20 07:13:03 crc kubenswrapper[5136]: I0320 07:13:03.595204 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Mar 20 07:13:03 crc kubenswrapper[5136]: W0320 07:13:03.597164 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31adef78_59fe_4327_9586_0c12177c7bb7.slice/crio-68a0376e88b4b3da7cb1aed58c92f9e17081913aac9827be120b7a59b01a2ab0 WatchSource:0}: Error finding container 68a0376e88b4b3da7cb1aed58c92f9e17081913aac9827be120b7a59b01a2ab0: Status 404 returned error can't find the container with id 68a0376e88b4b3da7cb1aed58c92f9e17081913aac9827be120b7a59b01a2ab0
Mar 20 07:13:03 crc kubenswrapper[5136]: I0320 07:13:03.635570 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-dc8db4fdb-hpjdg"
Mar 20 07:13:03 crc kubenswrapper[5136]: I0320 07:13:03.677621 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"31adef78-59fe-4327-9586-0c12177c7bb7","Type":"ContainerStarted","Data":"68a0376e88b4b3da7cb1aed58c92f9e17081913aac9827be120b7a59b01a2ab0"}
Mar 20 07:13:03 crc kubenswrapper[5136]: I0320 07:13:03.688542 5136 generic.go:334] "Generic (PLEG): container finished" podID="2ae7d29f-d050-4d87-b59e-1237f7f6d48a" containerID="024fcc0cd809e83faec73e5ba56c99a83ab40c7bc4ad09d07aea8ace13ee29fb" exitCode=0
Mar 20 07:13:03 crc kubenswrapper[5136]: I0320 07:13:03.688611 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-564b95fd68-m2j52" event={"ID":"2ae7d29f-d050-4d87-b59e-1237f7f6d48a","Type":"ContainerDied","Data":"024fcc0cd809e83faec73e5ba56c99a83ab40c7bc4ad09d07aea8ace13ee29fb"}
Mar 20 07:13:03 crc kubenswrapper[5136]: I0320 07:13:03.949755 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-dc8db4fdb-hpjdg"
Mar 20 07:13:04 crc kubenswrapper[5136]: I0320 07:13:04.003958 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f464f8686-f4nfl"]
Mar 20 07:13:04 crc kubenswrapper[5136]: I0320 07:13:04.004237 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-f464f8686-f4nfl" podUID="98f17780-5e89-47b5-a280-ff05d993aec1" containerName="placement-log" containerID="cri-o://d15adc608e61a192a134211ec88732de0cf0a72471089b746f1c3d17455f2ccd" gracePeriod=30
Mar 20 07:13:04 crc kubenswrapper[5136]: I0320 07:13:04.004649 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-f464f8686-f4nfl" podUID="98f17780-5e89-47b5-a280-ff05d993aec1" containerName="placement-api" containerID="cri-o://ca789654434e2938058e89c8bf27e5d6e9f18a69d4566b7ab05c972c6642de95" gracePeriod=30
Mar 20 07:13:04 crc kubenswrapper[5136]: I0320 07:13:04.411797 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d918240-f8fb-459f-a116-7fce9c0068a8" path="/var/lib/kubelet/pods/0d918240-f8fb-459f-a116-7fce9c0068a8/volumes"
Mar 20 07:13:04 crc kubenswrapper[5136]: I0320 07:13:04.746375 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"31adef78-59fe-4327-9586-0c12177c7bb7","Type":"ContainerStarted","Data":"70798d2130ec02fe1e87b09e3e6f0c61b8831e3d853bce3d884365eb07f74f1c"}
Mar 20 07:13:04 crc kubenswrapper[5136]: I0320 07:13:04.757563 5136 generic.go:334] "Generic (PLEG): container finished" podID="98f17780-5e89-47b5-a280-ff05d993aec1" containerID="d15adc608e61a192a134211ec88732de0cf0a72471089b746f1c3d17455f2ccd" exitCode=143
Mar 20 07:13:04 crc kubenswrapper[5136]: I0320 07:13:04.757608 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f464f8686-f4nfl" event={"ID":"98f17780-5e89-47b5-a280-ff05d993aec1","Type":"ContainerDied","Data":"d15adc608e61a192a134211ec88732de0cf0a72471089b746f1c3d17455f2ccd"}
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.233258 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.234704 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.236558 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.239407 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.239465 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-z5th8"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.246525 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.371544 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/17ad787b-18bc-4afd-840b-2458b494094a-openstack-config-secret\") pod \"openstackclient\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.371667 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/17ad787b-18bc-4afd-840b-2458b494094a-openstack-config\") pod \"openstackclient\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.371730 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17ad787b-18bc-4afd-840b-2458b494094a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.371747 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdzch\" (UniqueName: \"kubernetes.io/projected/17ad787b-18bc-4afd-840b-2458b494094a-kube-api-access-vdzch\") pod \"openstackclient\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.473774 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/17ad787b-18bc-4afd-840b-2458b494094a-openstack-config\") pod \"openstackclient\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.474633 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/17ad787b-18bc-4afd-840b-2458b494094a-openstack-config\") pod \"openstackclient\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.474799 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17ad787b-18bc-4afd-840b-2458b494094a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.474932 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdzch\" (UniqueName: \"kubernetes.io/projected/17ad787b-18bc-4afd-840b-2458b494094a-kube-api-access-vdzch\") pod \"openstackclient\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.475074 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/17ad787b-18bc-4afd-840b-2458b494094a-openstack-config-secret\") pod \"openstackclient\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.483377 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/17ad787b-18bc-4afd-840b-2458b494094a-openstack-config-secret\") pod \"openstackclient\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.489318 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17ad787b-18bc-4afd-840b-2458b494094a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.491955 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdzch\" (UniqueName: \"kubernetes.io/projected/17ad787b-18bc-4afd-840b-2458b494094a-kube-api-access-vdzch\") pod \"openstackclient\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.552346 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.757535 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.773611 5136 generic.go:334] "Generic (PLEG): container finished" podID="2ae7d29f-d050-4d87-b59e-1237f7f6d48a" containerID="df57270fa341245294b8409621c8d255f8f17bac5716eb17c148b55857569799" exitCode=0
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.773691 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-564b95fd68-m2j52" event={"ID":"2ae7d29f-d050-4d87-b59e-1237f7f6d48a","Type":"ContainerDied","Data":"df57270fa341245294b8409621c8d255f8f17bac5716eb17c148b55857569799"}
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.776486 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"31adef78-59fe-4327-9586-0c12177c7bb7","Type":"ContainerStarted","Data":"46238ed837d0eef475008679380016865ea47dbe19adbb4fa48f11493fc145bb"}
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.819093 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.81907462 podStartE2EDuration="3.81907462s" podCreationTimestamp="2026-03-20 07:13:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:13:05.810027588 +0000 UTC m=+1418.069338739" watchObservedRunningTime="2026-03-20 07:13:05.81907462 +0000 UTC m=+1418.078385771"
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.027885 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Mar 20 07:13:06 crc kubenswrapper[5136]: W0320 07:13:06.030016 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17ad787b_18bc_4afd_840b_2458b494094a.slice/crio-c030f875e5ff027f4f05e9d519ba652a96422a4a9f30e0ce0b1699cab2e278b4 WatchSource:0}: Error finding container c030f875e5ff027f4f05e9d519ba652a96422a4a9f30e0ce0b1699cab2e278b4: Status 404 returned error can't find the container with id c030f875e5ff027f4f05e9d519ba652a96422a4a9f30e0ce0b1699cab2e278b4
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.197531 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-564b95fd68-m2j52"
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.293507 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45w7v\" (UniqueName: \"kubernetes.io/projected/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-kube-api-access-45w7v\") pod \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") "
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.293556 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-combined-ca-bundle\") pod \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") "
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.293659 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-config\") pod \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") "
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.293718 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-httpd-config\") pod \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") "
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.293772 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-ovndb-tls-certs\") pod \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") "
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.299387 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "2ae7d29f-d050-4d87-b59e-1237f7f6d48a" (UID: "2ae7d29f-d050-4d87-b59e-1237f7f6d48a"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.305027 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-kube-api-access-45w7v" (OuterVolumeSpecName: "kube-api-access-45w7v") pod "2ae7d29f-d050-4d87-b59e-1237f7f6d48a" (UID: "2ae7d29f-d050-4d87-b59e-1237f7f6d48a"). InnerVolumeSpecName "kube-api-access-45w7v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.388966 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-config" (OuterVolumeSpecName: "config") pod "2ae7d29f-d050-4d87-b59e-1237f7f6d48a" (UID: "2ae7d29f-d050-4d87-b59e-1237f7f6d48a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.396483 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-httpd-config\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.396515 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45w7v\" (UniqueName: \"kubernetes.io/projected/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-kube-api-access-45w7v\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.396525 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-config\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.404878 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ae7d29f-d050-4d87-b59e-1237f7f6d48a" (UID: "2ae7d29f-d050-4d87-b59e-1237f7f6d48a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.439764 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "2ae7d29f-d050-4d87-b59e-1237f7f6d48a" (UID: "2ae7d29f-d050-4d87-b59e-1237f7f6d48a"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.498235 5136 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.498271 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.785475 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"17ad787b-18bc-4afd-840b-2458b494094a","Type":"ContainerStarted","Data":"c030f875e5ff027f4f05e9d519ba652a96422a4a9f30e0ce0b1699cab2e278b4"}
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.788084 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-564b95fd68-m2j52" event={"ID":"2ae7d29f-d050-4d87-b59e-1237f7f6d48a","Type":"ContainerDied","Data":"182398618c3a1531c9ad080ffbd4a768caa0163ecc71bb0c12e322558e13d0bb"}
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.788125 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-564b95fd68-m2j52"
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.788146 5136 scope.go:117] "RemoveContainer" containerID="024fcc0cd809e83faec73e5ba56c99a83ab40c7bc4ad09d07aea8ace13ee29fb"
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.818479 5136 scope.go:117] "RemoveContainer" containerID="df57270fa341245294b8409621c8d255f8f17bac5716eb17c148b55857569799"
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.818672 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-564b95fd68-m2j52"]
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.828407 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-564b95fd68-m2j52"]
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.615744 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f464f8686-f4nfl"
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.720629 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98f17780-5e89-47b5-a280-ff05d993aec1-logs\") pod \"98f17780-5e89-47b5-a280-ff05d993aec1\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") "
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.720686 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-internal-tls-certs\") pod \"98f17780-5e89-47b5-a280-ff05d993aec1\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") "
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.720761 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-scripts\") pod \"98f17780-5e89-47b5-a280-ff05d993aec1\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") "
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.720866 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-public-tls-certs\") pod \"98f17780-5e89-47b5-a280-ff05d993aec1\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") "
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.720919 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-combined-ca-bundle\") pod \"98f17780-5e89-47b5-a280-ff05d993aec1\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") "
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.720939 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv6wk\" (UniqueName: \"kubernetes.io/projected/98f17780-5e89-47b5-a280-ff05d993aec1-kube-api-access-mv6wk\") pod \"98f17780-5e89-47b5-a280-ff05d993aec1\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") "
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.720954 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-config-data\") pod \"98f17780-5e89-47b5-a280-ff05d993aec1\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") "
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.721082 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98f17780-5e89-47b5-a280-ff05d993aec1-logs" (OuterVolumeSpecName: "logs") pod "98f17780-5e89-47b5-a280-ff05d993aec1" (UID: "98f17780-5e89-47b5-a280-ff05d993aec1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.721726 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98f17780-5e89-47b5-a280-ff05d993aec1-logs\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.727397 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98f17780-5e89-47b5-a280-ff05d993aec1-kube-api-access-mv6wk" (OuterVolumeSpecName: "kube-api-access-mv6wk") pod "98f17780-5e89-47b5-a280-ff05d993aec1" (UID: "98f17780-5e89-47b5-a280-ff05d993aec1"). InnerVolumeSpecName "kube-api-access-mv6wk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.727904 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-scripts" (OuterVolumeSpecName: "scripts") pod "98f17780-5e89-47b5-a280-ff05d993aec1" (UID: "98f17780-5e89-47b5-a280-ff05d993aec1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.780324 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98f17780-5e89-47b5-a280-ff05d993aec1" (UID: "98f17780-5e89-47b5-a280-ff05d993aec1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.783677 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-config-data" (OuterVolumeSpecName: "config-data") pod "98f17780-5e89-47b5-a280-ff05d993aec1" (UID: "98f17780-5e89-47b5-a280-ff05d993aec1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.803709 5136 generic.go:334] "Generic (PLEG): container finished" podID="98f17780-5e89-47b5-a280-ff05d993aec1" containerID="ca789654434e2938058e89c8bf27e5d6e9f18a69d4566b7ab05c972c6642de95" exitCode=0
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.803849 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f464f8686-f4nfl"
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.803833 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f464f8686-f4nfl" event={"ID":"98f17780-5e89-47b5-a280-ff05d993aec1","Type":"ContainerDied","Data":"ca789654434e2938058e89c8bf27e5d6e9f18a69d4566b7ab05c972c6642de95"}
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.804013 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f464f8686-f4nfl" event={"ID":"98f17780-5e89-47b5-a280-ff05d993aec1","Type":"ContainerDied","Data":"5651bf9b6915e66b83ba1121283006e83d01461bec1d3f8053fa31cecb4a7017"}
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.804040 5136 scope.go:117] "RemoveContainer" containerID="ca789654434e2938058e89c8bf27e5d6e9f18a69d4566b7ab05c972c6642de95"
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.825193 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.825224 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.825234 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv6wk\" (UniqueName: \"kubernetes.io/projected/98f17780-5e89-47b5-a280-ff05d993aec1-kube-api-access-mv6wk\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.825263 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.832485 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "98f17780-5e89-47b5-a280-ff05d993aec1" (UID: "98f17780-5e89-47b5-a280-ff05d993aec1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.833703 5136 scope.go:117] "RemoveContainer" containerID="d15adc608e61a192a134211ec88732de0cf0a72471089b746f1c3d17455f2ccd"
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.834283 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "98f17780-5e89-47b5-a280-ff05d993aec1" (UID: "98f17780-5e89-47b5-a280-ff05d993aec1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.858445 5136 scope.go:117] "RemoveContainer" containerID="ca789654434e2938058e89c8bf27e5d6e9f18a69d4566b7ab05c972c6642de95"
Mar 20 07:13:07 crc kubenswrapper[5136]: E0320 07:13:07.861048 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca789654434e2938058e89c8bf27e5d6e9f18a69d4566b7ab05c972c6642de95\": container with ID starting with ca789654434e2938058e89c8bf27e5d6e9f18a69d4566b7ab05c972c6642de95 not found: ID does not exist" containerID="ca789654434e2938058e89c8bf27e5d6e9f18a69d4566b7ab05c972c6642de95"
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.861103 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca789654434e2938058e89c8bf27e5d6e9f18a69d4566b7ab05c972c6642de95"} err="failed to get container status \"ca789654434e2938058e89c8bf27e5d6e9f18a69d4566b7ab05c972c6642de95\": rpc error: code = NotFound desc = could not find container \"ca789654434e2938058e89c8bf27e5d6e9f18a69d4566b7ab05c972c6642de95\": container with ID starting with ca789654434e2938058e89c8bf27e5d6e9f18a69d4566b7ab05c972c6642de95 not found: ID does not exist"
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.861126 5136 scope.go:117] "RemoveContainer" containerID="d15adc608e61a192a134211ec88732de0cf0a72471089b746f1c3d17455f2ccd"
Mar 20 07:13:07 crc kubenswrapper[5136]: E0320 07:13:07.861516 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d15adc608e61a192a134211ec88732de0cf0a72471089b746f1c3d17455f2ccd\": container with ID starting with d15adc608e61a192a134211ec88732de0cf0a72471089b746f1c3d17455f2ccd not found: ID does not exist" containerID="d15adc608e61a192a134211ec88732de0cf0a72471089b746f1c3d17455f2ccd"
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.861566 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d15adc608e61a192a134211ec88732de0cf0a72471089b746f1c3d17455f2ccd"} err="failed to get container status \"d15adc608e61a192a134211ec88732de0cf0a72471089b746f1c3d17455f2ccd\": rpc error: code = NotFound desc = could not find container \"d15adc608e61a192a134211ec88732de0cf0a72471089b746f1c3d17455f2ccd\": container with ID starting with d15adc608e61a192a134211ec88732de0cf0a72471089b746f1c3d17455f2ccd not found: ID does not exist"
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.926377 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-public-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.926406 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.050082 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.136332 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f464f8686-f4nfl"]
Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.145436 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-f464f8686-f4nfl"]
Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.336481 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-744d6f84fc-bqcsc"]
Mar 20 07:13:08 crc kubenswrapper[5136]: E0320 07:13:08.337501 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ae7d29f-d050-4d87-b59e-1237f7f6d48a" containerName="neutron-httpd"
Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.337586 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ae7d29f-d050-4d87-b59e-1237f7f6d48a" containerName="neutron-httpd"
Mar 20 07:13:08 crc kubenswrapper[5136]: E0320 07:13:08.337678 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f17780-5e89-47b5-a280-ff05d993aec1" containerName="placement-api"
Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.337738 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f17780-5e89-47b5-a280-ff05d993aec1" containerName="placement-api"
Mar 20 07:13:08 crc kubenswrapper[5136]: E0320 07:13:08.337795 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ae7d29f-d050-4d87-b59e-1237f7f6d48a" containerName="neutron-api"
Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.337869 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ae7d29f-d050-4d87-b59e-1237f7f6d48a" containerName="neutron-api"
Mar 20 07:13:08 crc kubenswrapper[5136]: E0320 07:13:08.337933 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f17780-5e89-47b5-a280-ff05d993aec1" containerName="placement-log"
Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.337981 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f17780-5e89-47b5-a280-ff05d993aec1" containerName="placement-log"
Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.338186 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ae7d29f-d050-4d87-b59e-1237f7f6d48a" containerName="neutron-api"
Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.338252 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="98f17780-5e89-47b5-a280-ff05d993aec1" containerName="placement-api"
Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.338314 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ae7d29f-d050-4d87-b59e-1237f7f6d48a" containerName="neutron-httpd"
Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.338371 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="98f17780-5e89-47b5-a280-ff05d993aec1" containerName="placement-log"
Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.339300 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-744d6f84fc-bqcsc"
Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.344083 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.344360 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.344363 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.377289 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-744d6f84fc-bqcsc"]
Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.424171 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ae7d29f-d050-4d87-b59e-1237f7f6d48a" path="/var/lib/kubelet/pods/2ae7d29f-d050-4d87-b59e-1237f7f6d48a/volumes"
Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.425029 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98f17780-5e89-47b5-a280-ff05d993aec1" path="/var/lib/kubelet/pods/98f17780-5e89-47b5-a280-ff05d993aec1/volumes"
Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.434028 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpskn\" (UniqueName: \"kubernetes.io/projected/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-kube-api-access-gpskn\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc"
Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.434120 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-public-tls-certs\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc"
Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.434158 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-combined-ca-bundle\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc"
Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.434192 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-internal-tls-certs\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc"
Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.434210 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-log-httpd\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc"
Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.434231 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-etc-swift\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc"
Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.434634
5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-run-httpd\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.434901 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-config-data\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.536122 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpskn\" (UniqueName: \"kubernetes.io/projected/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-kube-api-access-gpskn\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.536212 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-public-tls-certs\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.536242 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-combined-ca-bundle\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.536269 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-internal-tls-certs\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.536284 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-log-httpd\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.536303 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-etc-swift\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.536329 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-run-httpd\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.536357 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-config-data\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.537470 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-run-httpd\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.540651 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-log-httpd\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.545640 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-public-tls-certs\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.547927 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-etc-swift\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.549685 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-combined-ca-bundle\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.551461 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-config-data\") pod 
\"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.561654 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-internal-tls-certs\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.577030 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpskn\" (UniqueName: \"kubernetes.io/projected/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-kube-api-access-gpskn\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.656921 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:09 crc kubenswrapper[5136]: I0320 07:13:09.234374 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-744d6f84fc-bqcsc"] Mar 20 07:13:09 crc kubenswrapper[5136]: I0320 07:13:09.829907 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-744d6f84fc-bqcsc" event={"ID":"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b","Type":"ContainerStarted","Data":"793f74839b7ab06cf4c132cbd574d3bb47712ec9caa473ebbd084643fcce6f31"} Mar 20 07:13:09 crc kubenswrapper[5136]: I0320 07:13:09.830219 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-744d6f84fc-bqcsc" event={"ID":"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b","Type":"ContainerStarted","Data":"0693cd2e5dff17721d25b39746a75f4a67d98d5e960d0f7f816b7b0f7c0a7fac"} Mar 20 07:13:09 crc kubenswrapper[5136]: I0320 07:13:09.830232 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-744d6f84fc-bqcsc" event={"ID":"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b","Type":"ContainerStarted","Data":"1d45fa03e9e760b3fecb6f7927ee88ef303052eb3da3a45f5cb31589469d2afb"} Mar 20 07:13:09 crc kubenswrapper[5136]: I0320 07:13:09.830266 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:09 crc kubenswrapper[5136]: I0320 07:13:09.830283 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:09 crc kubenswrapper[5136]: I0320 07:13:09.852757 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-744d6f84fc-bqcsc" podStartSLOduration=1.852742616 podStartE2EDuration="1.852742616s" podCreationTimestamp="2026-03-20 07:13:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:13:09.852003702 +0000 UTC 
m=+1422.111314873" watchObservedRunningTime="2026-03-20 07:13:09.852742616 +0000 UTC m=+1422.112053767" Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.011055 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.011359 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="ceilometer-central-agent" containerID="cri-o://da66db3f467f27617e353e5b7cb9122c8ae01fca01e9365d62e83fc76b60055a" gracePeriod=30 Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.012006 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="sg-core" containerID="cri-o://d634927012184728bf43eb92652caca4a427eb3140fa0f169ebc49e4b1103bd6" gracePeriod=30 Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.012089 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="ceilometer-notification-agent" containerID="cri-o://9103ade7cd09df25f48e36378a2bb9597e36c48024e753bdbfbdd5238e038dae" gracePeriod=30 Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.012134 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="proxy-httpd" containerID="cri-o://11805be8e1ecba0ecec0d4173e4555e07b6e94d856d2b60561b740f2c3a233f1" gracePeriod=30 Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.119900 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.168:3000/\": read tcp 10.217.0.2:51860->10.217.0.168:3000: read: 
connection reset by peer" Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.846947 5136 generic.go:334] "Generic (PLEG): container finished" podID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerID="11805be8e1ecba0ecec0d4173e4555e07b6e94d856d2b60561b740f2c3a233f1" exitCode=0 Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.846991 5136 generic.go:334] "Generic (PLEG): container finished" podID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerID="d634927012184728bf43eb92652caca4a427eb3140fa0f169ebc49e4b1103bd6" exitCode=2 Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.847001 5136 generic.go:334] "Generic (PLEG): container finished" podID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerID="9103ade7cd09df25f48e36378a2bb9597e36c48024e753bdbfbdd5238e038dae" exitCode=0 Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.847008 5136 generic.go:334] "Generic (PLEG): container finished" podID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerID="da66db3f467f27617e353e5b7cb9122c8ae01fca01e9365d62e83fc76b60055a" exitCode=0 Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.847370 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96e356ef-4c69-41e5-b9ae-14c7faadf1b2","Type":"ContainerDied","Data":"11805be8e1ecba0ecec0d4173e4555e07b6e94d856d2b60561b740f2c3a233f1"} Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.847731 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96e356ef-4c69-41e5-b9ae-14c7faadf1b2","Type":"ContainerDied","Data":"d634927012184728bf43eb92652caca4a427eb3140fa0f169ebc49e4b1103bd6"} Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.847750 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96e356ef-4c69-41e5-b9ae-14c7faadf1b2","Type":"ContainerDied","Data":"9103ade7cd09df25f48e36378a2bb9597e36c48024e753bdbfbdd5238e038dae"} Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 
07:13:10.847762 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96e356ef-4c69-41e5-b9ae-14c7faadf1b2","Type":"ContainerDied","Data":"da66db3f467f27617e353e5b7cb9122c8ae01fca01e9365d62e83fc76b60055a"} Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.928725 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.005342 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-scripts\") pod \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.005400 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlljw\" (UniqueName: \"kubernetes.io/projected/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-kube-api-access-dlljw\") pod \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.005481 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-combined-ca-bundle\") pod \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.005563 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-sg-core-conf-yaml\") pod \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.005634 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-log-httpd\") pod \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.005649 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-run-httpd\") pod \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.005683 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-config-data\") pod \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.018333 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-scripts" (OuterVolumeSpecName: "scripts") pod "96e356ef-4c69-41e5-b9ae-14c7faadf1b2" (UID: "96e356ef-4c69-41e5-b9ae-14c7faadf1b2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.018494 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "96e356ef-4c69-41e5-b9ae-14c7faadf1b2" (UID: "96e356ef-4c69-41e5-b9ae-14c7faadf1b2"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.021205 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "96e356ef-4c69-41e5-b9ae-14c7faadf1b2" (UID: "96e356ef-4c69-41e5-b9ae-14c7faadf1b2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.029278 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-kube-api-access-dlljw" (OuterVolumeSpecName: "kube-api-access-dlljw") pod "96e356ef-4c69-41e5-b9ae-14c7faadf1b2" (UID: "96e356ef-4c69-41e5-b9ae-14c7faadf1b2"). InnerVolumeSpecName "kube-api-access-dlljw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.100187 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "96e356ef-4c69-41e5-b9ae-14c7faadf1b2" (UID: "96e356ef-4c69-41e5-b9ae-14c7faadf1b2"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.107926 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.107993 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlljw\" (UniqueName: \"kubernetes.io/projected/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-kube-api-access-dlljw\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.108005 5136 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.108012 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.108022 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.149958 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-config-data" (OuterVolumeSpecName: "config-data") pod "96e356ef-4c69-41e5-b9ae-14c7faadf1b2" (UID: "96e356ef-4c69-41e5-b9ae-14c7faadf1b2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.193966 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96e356ef-4c69-41e5-b9ae-14c7faadf1b2" (UID: "96e356ef-4c69-41e5-b9ae-14c7faadf1b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.209563 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.209603 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.859026 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96e356ef-4c69-41e5-b9ae-14c7faadf1b2","Type":"ContainerDied","Data":"ffa0508cf624ffc95ed314dc6a07f83a9f8e8ec653c0427813fff7d7bbe42409"} Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.859378 5136 scope.go:117] "RemoveContainer" containerID="11805be8e1ecba0ecec0d4173e4555e07b6e94d856d2b60561b740f2c3a233f1" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.859118 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.903662 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.916189 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.926578 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:11 crc kubenswrapper[5136]: E0320 07:13:11.927028 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="proxy-httpd" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.927046 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="proxy-httpd" Mar 20 07:13:11 crc kubenswrapper[5136]: E0320 07:13:11.927055 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="ceilometer-central-agent" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.927062 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="ceilometer-central-agent" Mar 20 07:13:11 crc kubenswrapper[5136]: E0320 07:13:11.927081 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="sg-core" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.927087 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="sg-core" Mar 20 07:13:11 crc kubenswrapper[5136]: E0320 07:13:11.927103 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="ceilometer-notification-agent" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.927109 5136 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="ceilometer-notification-agent" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.927264 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="ceilometer-central-agent" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.927279 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="sg-core" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.927301 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="proxy-httpd" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.927309 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="ceilometer-notification-agent" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.929478 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.933308 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.934586 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.938622 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.031655 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/970a5b8d-94f8-4638-b351-40867c27568a-log-httpd\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.031727 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.031768 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-scripts\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.031832 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/970a5b8d-94f8-4638-b351-40867c27568a-run-httpd\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " 
pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.031875 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5ppp\" (UniqueName: \"kubernetes.io/projected/970a5b8d-94f8-4638-b351-40867c27568a-kube-api-access-b5ppp\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.031892 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.031921 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-config-data\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.133862 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-config-data\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.133936 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/970a5b8d-94f8-4638-b351-40867c27568a-log-httpd\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.133977 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.134045 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-scripts\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.134092 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/970a5b8d-94f8-4638-b351-40867c27568a-run-httpd\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.134133 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5ppp\" (UniqueName: \"kubernetes.io/projected/970a5b8d-94f8-4638-b351-40867c27568a-kube-api-access-b5ppp\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.134153 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.134699 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/970a5b8d-94f8-4638-b351-40867c27568a-log-httpd\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 
crc kubenswrapper[5136]: I0320 07:13:12.134761 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/970a5b8d-94f8-4638-b351-40867c27568a-run-httpd\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.139431 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.139849 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-config-data\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.141423 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-scripts\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.148447 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.150767 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5ppp\" (UniqueName: \"kubernetes.io/projected/970a5b8d-94f8-4638-b351-40867c27568a-kube-api-access-b5ppp\") pod \"ceilometer-0\" (UID: 
\"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.274804 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.407290 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" path="/var/lib/kubelet/pods/96e356ef-4c69-41e5-b9ae-14c7faadf1b2/volumes" Mar 20 07:13:13 crc kubenswrapper[5136]: I0320 07:13:13.276266 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 20 07:13:15 crc kubenswrapper[5136]: I0320 07:13:15.499251 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:16 crc kubenswrapper[5136]: I0320 07:13:16.165588 5136 scope.go:117] "RemoveContainer" containerID="d634927012184728bf43eb92652caca4a427eb3140fa0f169ebc49e4b1103bd6" Mar 20 07:13:16 crc kubenswrapper[5136]: I0320 07:13:16.287773 5136 scope.go:117] "RemoveContainer" containerID="9103ade7cd09df25f48e36378a2bb9597e36c48024e753bdbfbdd5238e038dae" Mar 20 07:13:16 crc kubenswrapper[5136]: I0320 07:13:16.415938 5136 scope.go:117] "RemoveContainer" containerID="da66db3f467f27617e353e5b7cb9122c8ae01fca01e9365d62e83fc76b60055a" Mar 20 07:13:16 crc kubenswrapper[5136]: I0320 07:13:16.675319 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:16 crc kubenswrapper[5136]: I0320 07:13:16.912408 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"970a5b8d-94f8-4638-b351-40867c27568a","Type":"ContainerStarted","Data":"b1ecb08e86fb97ca8048b41dfca62020c1e1817555ee06471950360149ce18be"} Mar 20 07:13:16 crc kubenswrapper[5136]: I0320 07:13:16.914099 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"17ad787b-18bc-4afd-840b-2458b494094a","Type":"ContainerStarted","Data":"ef5361e0b73e9c41cc23b5ebe9348fce6a363e59e0bc84a305ad44756dd780af"} Mar 20 07:13:16 crc kubenswrapper[5136]: I0320 07:13:16.943533 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.7326064030000001 podStartE2EDuration="11.943514332s" podCreationTimestamp="2026-03-20 07:13:05 +0000 UTC" firstStartedPulling="2026-03-20 07:13:06.031735558 +0000 UTC m=+1418.291046709" lastFinishedPulling="2026-03-20 07:13:16.242643487 +0000 UTC m=+1428.501954638" observedRunningTime="2026-03-20 07:13:16.934935645 +0000 UTC m=+1429.194246816" watchObservedRunningTime="2026-03-20 07:13:16.943514332 +0000 UTC m=+1429.202825473" Mar 20 07:13:17 crc kubenswrapper[5136]: I0320 07:13:17.924266 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"970a5b8d-94f8-4638-b351-40867c27568a","Type":"ContainerStarted","Data":"5d436980ce14d40aef3f47b677d0bc5908f793eaa1792108e00dd687af9bd7f5"} Mar 20 07:13:18 crc kubenswrapper[5136]: I0320 07:13:18.661850 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:18 crc kubenswrapper[5136]: I0320 07:13:18.662186 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:18 crc kubenswrapper[5136]: I0320 07:13:18.935579 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"970a5b8d-94f8-4638-b351-40867c27568a","Type":"ContainerStarted","Data":"ffe7e9a36109064a023ce4deecb9d17707e69376558ef1e507130c92da224a49"} Mar 20 07:13:18 crc kubenswrapper[5136]: I0320 07:13:18.935969 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"970a5b8d-94f8-4638-b351-40867c27568a","Type":"ContainerStarted","Data":"b2f12931ef7797eac5d157b9c5f74cea60eb664c3d141b9d09cf7fe30ae5c1fb"} Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.007332 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-2sj8m"] Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.008722 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2sj8m" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.017977 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2sj8m"] Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.080137 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t25k2\" (UniqueName: \"kubernetes.io/projected/bfbcdb71-4e43-4243-a408-08d69b6d7328-kube-api-access-t25k2\") pod \"nova-api-db-create-2sj8m\" (UID: \"bfbcdb71-4e43-4243-a408-08d69b6d7328\") " pod="openstack/nova-api-db-create-2sj8m" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.080247 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfbcdb71-4e43-4243-a408-08d69b6d7328-operator-scripts\") pod \"nova-api-db-create-2sj8m\" (UID: \"bfbcdb71-4e43-4243-a408-08d69b6d7328\") " pod="openstack/nova-api-db-create-2sj8m" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.112562 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-4jdnj"] Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.113666 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-4jdnj" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.123675 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-e3bd-account-create-update-zlrc6"] Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.124857 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e3bd-account-create-update-zlrc6" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.127690 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.141206 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e3bd-account-create-update-zlrc6"] Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.187526 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfbcdb71-4e43-4243-a408-08d69b6d7328-operator-scripts\") pod \"nova-api-db-create-2sj8m\" (UID: \"bfbcdb71-4e43-4243-a408-08d69b6d7328\") " pod="openstack/nova-api-db-create-2sj8m" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.187630 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edb3559d-359a-4add-8216-afb68a19e111-operator-scripts\") pod \"nova-cell0-db-create-4jdnj\" (UID: \"edb3559d-359a-4add-8216-afb68a19e111\") " pod="openstack/nova-cell0-db-create-4jdnj" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.187715 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t25k2\" (UniqueName: \"kubernetes.io/projected/bfbcdb71-4e43-4243-a408-08d69b6d7328-kube-api-access-t25k2\") pod \"nova-api-db-create-2sj8m\" (UID: \"bfbcdb71-4e43-4243-a408-08d69b6d7328\") " pod="openstack/nova-api-db-create-2sj8m" Mar 20 07:13:20 crc 
kubenswrapper[5136]: I0320 07:13:20.187744 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvtrm\" (UniqueName: \"kubernetes.io/projected/edb3559d-359a-4add-8216-afb68a19e111-kube-api-access-dvtrm\") pod \"nova-cell0-db-create-4jdnj\" (UID: \"edb3559d-359a-4add-8216-afb68a19e111\") " pod="openstack/nova-cell0-db-create-4jdnj" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.188690 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfbcdb71-4e43-4243-a408-08d69b6d7328-operator-scripts\") pod \"nova-api-db-create-2sj8m\" (UID: \"bfbcdb71-4e43-4243-a408-08d69b6d7328\") " pod="openstack/nova-api-db-create-2sj8m" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.199138 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-4jdnj"] Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.228315 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t25k2\" (UniqueName: \"kubernetes.io/projected/bfbcdb71-4e43-4243-a408-08d69b6d7328-kube-api-access-t25k2\") pod \"nova-api-db-create-2sj8m\" (UID: \"bfbcdb71-4e43-4243-a408-08d69b6d7328\") " pod="openstack/nova-api-db-create-2sj8m" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.289672 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f91601d4-11a0-4327-8f7e-6856df2b4643-operator-scripts\") pod \"nova-api-e3bd-account-create-update-zlrc6\" (UID: \"f91601d4-11a0-4327-8f7e-6856df2b4643\") " pod="openstack/nova-api-e3bd-account-create-update-zlrc6" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.289797 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/edb3559d-359a-4add-8216-afb68a19e111-operator-scripts\") pod \"nova-cell0-db-create-4jdnj\" (UID: \"edb3559d-359a-4add-8216-afb68a19e111\") " pod="openstack/nova-cell0-db-create-4jdnj" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.289862 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5xcq\" (UniqueName: \"kubernetes.io/projected/f91601d4-11a0-4327-8f7e-6856df2b4643-kube-api-access-j5xcq\") pod \"nova-api-e3bd-account-create-update-zlrc6\" (UID: \"f91601d4-11a0-4327-8f7e-6856df2b4643\") " pod="openstack/nova-api-e3bd-account-create-update-zlrc6" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.289892 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvtrm\" (UniqueName: \"kubernetes.io/projected/edb3559d-359a-4add-8216-afb68a19e111-kube-api-access-dvtrm\") pod \"nova-cell0-db-create-4jdnj\" (UID: \"edb3559d-359a-4add-8216-afb68a19e111\") " pod="openstack/nova-cell0-db-create-4jdnj" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.290713 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edb3559d-359a-4add-8216-afb68a19e111-operator-scripts\") pod \"nova-cell0-db-create-4jdnj\" (UID: \"edb3559d-359a-4add-8216-afb68a19e111\") " pod="openstack/nova-cell0-db-create-4jdnj" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.325639 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2sj8m" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.330170 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-xpg98"] Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.331510 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-xpg98" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.335514 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvtrm\" (UniqueName: \"kubernetes.io/projected/edb3559d-359a-4add-8216-afb68a19e111-kube-api-access-dvtrm\") pod \"nova-cell0-db-create-4jdnj\" (UID: \"edb3559d-359a-4add-8216-afb68a19e111\") " pod="openstack/nova-cell0-db-create-4jdnj" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.364243 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0f90-account-create-update-lv952"] Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.365413 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0f90-account-create-update-lv952" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.367418 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.406704 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5xcq\" (UniqueName: \"kubernetes.io/projected/f91601d4-11a0-4327-8f7e-6856df2b4643-kube-api-access-j5xcq\") pod \"nova-api-e3bd-account-create-update-zlrc6\" (UID: \"f91601d4-11a0-4327-8f7e-6856df2b4643\") " pod="openstack/nova-api-e3bd-account-create-update-zlrc6" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.406830 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f91601d4-11a0-4327-8f7e-6856df2b4643-operator-scripts\") pod \"nova-api-e3bd-account-create-update-zlrc6\" (UID: \"f91601d4-11a0-4327-8f7e-6856df2b4643\") " pod="openstack/nova-api-e3bd-account-create-update-zlrc6" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.406952 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdfd9851-96cd-483e-9e66-b1cc255cb3e2-operator-scripts\") pod \"nova-cell1-db-create-xpg98\" (UID: \"fdfd9851-96cd-483e-9e66-b1cc255cb3e2\") " pod="openstack/nova-cell1-db-create-xpg98" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.407031 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm9sg\" (UniqueName: \"kubernetes.io/projected/fdfd9851-96cd-483e-9e66-b1cc255cb3e2-kube-api-access-xm9sg\") pod \"nova-cell1-db-create-xpg98\" (UID: \"fdfd9851-96cd-483e-9e66-b1cc255cb3e2\") " pod="openstack/nova-cell1-db-create-xpg98" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.409545 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f91601d4-11a0-4327-8f7e-6856df2b4643-operator-scripts\") pod \"nova-api-e3bd-account-create-update-zlrc6\" (UID: \"f91601d4-11a0-4327-8f7e-6856df2b4643\") " pod="openstack/nova-api-e3bd-account-create-update-zlrc6" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.433923 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-4jdnj" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.435584 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5xcq\" (UniqueName: \"kubernetes.io/projected/f91601d4-11a0-4327-8f7e-6856df2b4643-kube-api-access-j5xcq\") pod \"nova-api-e3bd-account-create-update-zlrc6\" (UID: \"f91601d4-11a0-4327-8f7e-6856df2b4643\") " pod="openstack/nova-api-e3bd-account-create-update-zlrc6" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.439906 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xpg98"] Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.439938 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0f90-account-create-update-lv952"] Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.445180 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e3bd-account-create-update-zlrc6" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.508879 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdfd9851-96cd-483e-9e66-b1cc255cb3e2-operator-scripts\") pod \"nova-cell1-db-create-xpg98\" (UID: \"fdfd9851-96cd-483e-9e66-b1cc255cb3e2\") " pod="openstack/nova-cell1-db-create-xpg98" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.509300 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9wlx\" (UniqueName: \"kubernetes.io/projected/2a1492b7-73df-440c-9246-ae0e3c2e8802-kube-api-access-n9wlx\") pod \"nova-cell0-0f90-account-create-update-lv952\" (UID: \"2a1492b7-73df-440c-9246-ae0e3c2e8802\") " pod="openstack/nova-cell0-0f90-account-create-update-lv952" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.509325 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-xm9sg\" (UniqueName: \"kubernetes.io/projected/fdfd9851-96cd-483e-9e66-b1cc255cb3e2-kube-api-access-xm9sg\") pod \"nova-cell1-db-create-xpg98\" (UID: \"fdfd9851-96cd-483e-9e66-b1cc255cb3e2\") " pod="openstack/nova-cell1-db-create-xpg98" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.509367 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a1492b7-73df-440c-9246-ae0e3c2e8802-operator-scripts\") pod \"nova-cell0-0f90-account-create-update-lv952\" (UID: \"2a1492b7-73df-440c-9246-ae0e3c2e8802\") " pod="openstack/nova-cell0-0f90-account-create-update-lv952" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.510725 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdfd9851-96cd-483e-9e66-b1cc255cb3e2-operator-scripts\") pod \"nova-cell1-db-create-xpg98\" (UID: \"fdfd9851-96cd-483e-9e66-b1cc255cb3e2\") " pod="openstack/nova-cell1-db-create-xpg98" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.528069 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-0423-account-create-update-ntmkb"] Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.529351 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm9sg\" (UniqueName: \"kubernetes.io/projected/fdfd9851-96cd-483e-9e66-b1cc255cb3e2-kube-api-access-xm9sg\") pod \"nova-cell1-db-create-xpg98\" (UID: \"fdfd9851-96cd-483e-9e66-b1cc255cb3e2\") " pod="openstack/nova-cell1-db-create-xpg98" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.530570 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0423-account-create-update-ntmkb" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.533733 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.548255 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0423-account-create-update-ntmkb"] Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.611046 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a1492b7-73df-440c-9246-ae0e3c2e8802-operator-scripts\") pod \"nova-cell0-0f90-account-create-update-lv952\" (UID: \"2a1492b7-73df-440c-9246-ae0e3c2e8802\") " pod="openstack/nova-cell0-0f90-account-create-update-lv952" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.611162 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fd262d5-bfc7-49ae-908e-709fa9d0f55f-operator-scripts\") pod \"nova-cell1-0423-account-create-update-ntmkb\" (UID: \"7fd262d5-bfc7-49ae-908e-709fa9d0f55f\") " pod="openstack/nova-cell1-0423-account-create-update-ntmkb" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.611233 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9wlx\" (UniqueName: \"kubernetes.io/projected/2a1492b7-73df-440c-9246-ae0e3c2e8802-kube-api-access-n9wlx\") pod \"nova-cell0-0f90-account-create-update-lv952\" (UID: \"2a1492b7-73df-440c-9246-ae0e3c2e8802\") " pod="openstack/nova-cell0-0f90-account-create-update-lv952" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.611274 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk2sf\" (UniqueName: 
\"kubernetes.io/projected/7fd262d5-bfc7-49ae-908e-709fa9d0f55f-kube-api-access-vk2sf\") pod \"nova-cell1-0423-account-create-update-ntmkb\" (UID: \"7fd262d5-bfc7-49ae-908e-709fa9d0f55f\") " pod="openstack/nova-cell1-0423-account-create-update-ntmkb" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.611871 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a1492b7-73df-440c-9246-ae0e3c2e8802-operator-scripts\") pod \"nova-cell0-0f90-account-create-update-lv952\" (UID: \"2a1492b7-73df-440c-9246-ae0e3c2e8802\") " pod="openstack/nova-cell0-0f90-account-create-update-lv952" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.640407 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9wlx\" (UniqueName: \"kubernetes.io/projected/2a1492b7-73df-440c-9246-ae0e3c2e8802-kube-api-access-n9wlx\") pod \"nova-cell0-0f90-account-create-update-lv952\" (UID: \"2a1492b7-73df-440c-9246-ae0e3c2e8802\") " pod="openstack/nova-cell0-0f90-account-create-update-lv952" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.712704 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk2sf\" (UniqueName: \"kubernetes.io/projected/7fd262d5-bfc7-49ae-908e-709fa9d0f55f-kube-api-access-vk2sf\") pod \"nova-cell1-0423-account-create-update-ntmkb\" (UID: \"7fd262d5-bfc7-49ae-908e-709fa9d0f55f\") " pod="openstack/nova-cell1-0423-account-create-update-ntmkb" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.713103 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fd262d5-bfc7-49ae-908e-709fa9d0f55f-operator-scripts\") pod \"nova-cell1-0423-account-create-update-ntmkb\" (UID: \"7fd262d5-bfc7-49ae-908e-709fa9d0f55f\") " pod="openstack/nova-cell1-0423-account-create-update-ntmkb" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 
07:13:20.713779 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fd262d5-bfc7-49ae-908e-709fa9d0f55f-operator-scripts\") pod \"nova-cell1-0423-account-create-update-ntmkb\" (UID: \"7fd262d5-bfc7-49ae-908e-709fa9d0f55f\") " pod="openstack/nova-cell1-0423-account-create-update-ntmkb" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.735502 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk2sf\" (UniqueName: \"kubernetes.io/projected/7fd262d5-bfc7-49ae-908e-709fa9d0f55f-kube-api-access-vk2sf\") pod \"nova-cell1-0423-account-create-update-ntmkb\" (UID: \"7fd262d5-bfc7-49ae-908e-709fa9d0f55f\") " pod="openstack/nova-cell1-0423-account-create-update-ntmkb" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.809064 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xpg98" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.821879 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0f90-account-create-update-lv952" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.853378 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0423-account-create-update-ntmkb" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.887408 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2sj8m"] Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.955688 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"970a5b8d-94f8-4638-b351-40867c27568a","Type":"ContainerStarted","Data":"34f70125332213277198b4aa08cb4ab1e11269e86d48316d0f7016fd34641a11"} Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.955769 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="ceilometer-central-agent" containerID="cri-o://5d436980ce14d40aef3f47b677d0bc5908f793eaa1792108e00dd687af9bd7f5" gracePeriod=30 Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.955830 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="proxy-httpd" containerID="cri-o://34f70125332213277198b4aa08cb4ab1e11269e86d48316d0f7016fd34641a11" gracePeriod=30 Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.955900 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="sg-core" containerID="cri-o://ffe7e9a36109064a023ce4deecb9d17707e69376558ef1e507130c92da224a49" gracePeriod=30 Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.955946 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="ceilometer-notification-agent" containerID="cri-o://b2f12931ef7797eac5d157b9c5f74cea60eb664c3d141b9d09cf7fe30ae5c1fb" gracePeriod=30 Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 
07:13:20.956258 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.964793 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2sj8m" event={"ID":"bfbcdb71-4e43-4243-a408-08d69b6d7328","Type":"ContainerStarted","Data":"df133601606c7da3d6d3058e70578121e914114c2561a58b46f39470ce91218f"} Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.989689 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=6.05689405 podStartE2EDuration="9.989674169s" podCreationTimestamp="2026-03-20 07:13:11 +0000 UTC" firstStartedPulling="2026-03-20 07:13:16.680950891 +0000 UTC m=+1428.940262052" lastFinishedPulling="2026-03-20 07:13:20.61373102 +0000 UTC m=+1432.873042171" observedRunningTime="2026-03-20 07:13:20.977252942 +0000 UTC m=+1433.236564093" watchObservedRunningTime="2026-03-20 07:13:20.989674169 +0000 UTC m=+1433.248985320" Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.042637 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-4jdnj"] Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.128997 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e3bd-account-create-update-zlrc6"] Mar 20 07:13:21 crc kubenswrapper[5136]: W0320 07:13:21.144294 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf91601d4_11a0_4327_8f7e_6856df2b4643.slice/crio-af8e134c0a47ca46cccc6237d8e45d49ba32aceec4f114111cd0acda77a55423 WatchSource:0}: Error finding container af8e134c0a47ca46cccc6237d8e45d49ba32aceec4f114111cd0acda77a55423: Status 404 returned error can't find the container with id af8e134c0a47ca46cccc6237d8e45d49ba32aceec4f114111cd0acda77a55423 Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.333595 5136 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xpg98"] Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.421590 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0f90-account-create-update-lv952"] Mar 20 07:13:21 crc kubenswrapper[5136]: W0320 07:13:21.426107 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a1492b7_73df_440c_9246_ae0e3c2e8802.slice/crio-0e1ca15b07b2733ca7965ba422cda8fcd277942247dadeac4ec18d6a4869db7e WatchSource:0}: Error finding container 0e1ca15b07b2733ca7965ba422cda8fcd277942247dadeac4ec18d6a4869db7e: Status 404 returned error can't find the container with id 0e1ca15b07b2733ca7965ba422cda8fcd277942247dadeac4ec18d6a4869db7e Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.500134 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0423-account-create-update-ntmkb"] Mar 20 07:13:21 crc kubenswrapper[5136]: W0320 07:13:21.633663 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fd262d5_bfc7_49ae_908e_709fa9d0f55f.slice/crio-d408b64e16a1a16cb3c2ba0552e5053433fcf11939b0c313e88451cde94de24c WatchSource:0}: Error finding container d408b64e16a1a16cb3c2ba0552e5053433fcf11939b0c313e88451cde94de24c: Status 404 returned error can't find the container with id d408b64e16a1a16cb3c2ba0552e5053433fcf11939b0c313e88451cde94de24c Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.975000 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0f90-account-create-update-lv952" event={"ID":"2a1492b7-73df-440c-9246-ae0e3c2e8802","Type":"ContainerStarted","Data":"76ecc1efaf0109117b2021b8dc8f89423ec738c34b9f16b5ea8ada8e167cdf99"} Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.975509 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-0f90-account-create-update-lv952" event={"ID":"2a1492b7-73df-440c-9246-ae0e3c2e8802","Type":"ContainerStarted","Data":"0e1ca15b07b2733ca7965ba422cda8fcd277942247dadeac4ec18d6a4869db7e"} Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.978682 5136 generic.go:334] "Generic (PLEG): container finished" podID="970a5b8d-94f8-4638-b351-40867c27568a" containerID="34f70125332213277198b4aa08cb4ab1e11269e86d48316d0f7016fd34641a11" exitCode=0 Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.978713 5136 generic.go:334] "Generic (PLEG): container finished" podID="970a5b8d-94f8-4638-b351-40867c27568a" containerID="ffe7e9a36109064a023ce4deecb9d17707e69376558ef1e507130c92da224a49" exitCode=2 Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.978723 5136 generic.go:334] "Generic (PLEG): container finished" podID="970a5b8d-94f8-4638-b351-40867c27568a" containerID="b2f12931ef7797eac5d157b9c5f74cea60eb664c3d141b9d09cf7fe30ae5c1fb" exitCode=0 Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.978725 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"970a5b8d-94f8-4638-b351-40867c27568a","Type":"ContainerDied","Data":"34f70125332213277198b4aa08cb4ab1e11269e86d48316d0f7016fd34641a11"} Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.978770 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"970a5b8d-94f8-4638-b351-40867c27568a","Type":"ContainerDied","Data":"ffe7e9a36109064a023ce4deecb9d17707e69376558ef1e507130c92da224a49"} Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.978786 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"970a5b8d-94f8-4638-b351-40867c27568a","Type":"ContainerDied","Data":"b2f12931ef7797eac5d157b9c5f74cea60eb664c3d141b9d09cf7fe30ae5c1fb"} Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.986837 5136 generic.go:334] "Generic (PLEG): container finished" 
podID="f91601d4-11a0-4327-8f7e-6856df2b4643" containerID="6963baa6fe7d9db38870a70531888cfee8f7d44c3eff1597da33cf867ee591c8" exitCode=0 Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.986985 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e3bd-account-create-update-zlrc6" event={"ID":"f91601d4-11a0-4327-8f7e-6856df2b4643","Type":"ContainerDied","Data":"6963baa6fe7d9db38870a70531888cfee8f7d44c3eff1597da33cf867ee591c8"} Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.987009 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e3bd-account-create-update-zlrc6" event={"ID":"f91601d4-11a0-4327-8f7e-6856df2b4643","Type":"ContainerStarted","Data":"af8e134c0a47ca46cccc6237d8e45d49ba32aceec4f114111cd0acda77a55423"} Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.991591 5136 generic.go:334] "Generic (PLEG): container finished" podID="bfbcdb71-4e43-4243-a408-08d69b6d7328" containerID="edc5e28eb62af197edd849dc06e38cdd2bebac736971174a120fd4afd95e52b2" exitCode=0 Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.991787 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2sj8m" event={"ID":"bfbcdb71-4e43-4243-a408-08d69b6d7328","Type":"ContainerDied","Data":"edc5e28eb62af197edd849dc06e38cdd2bebac736971174a120fd4afd95e52b2"} Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.995182 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xpg98" event={"ID":"fdfd9851-96cd-483e-9e66-b1cc255cb3e2","Type":"ContainerStarted","Data":"22e326ee5e74b9ee2e3ad6076eac75a725689f97883ae8bb80d1de284edb7a74"} Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.995276 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xpg98" event={"ID":"fdfd9851-96cd-483e-9e66-b1cc255cb3e2","Type":"ContainerStarted","Data":"014c5744df20c0ca1d0890e233c317c3d3d1672f4d7a42bb3df502680f5a6c07"} Mar 20 07:13:21 crc 
kubenswrapper[5136]: I0320 07:13:21.996535 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-0f90-account-create-update-lv952" podStartSLOduration=1.996513582 podStartE2EDuration="1.996513582s" podCreationTimestamp="2026-03-20 07:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:13:21.987476831 +0000 UTC m=+1434.246787982" watchObservedRunningTime="2026-03-20 07:13:21.996513582 +0000 UTC m=+1434.255824733" Mar 20 07:13:22 crc kubenswrapper[5136]: I0320 07:13:22.004702 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0423-account-create-update-ntmkb" event={"ID":"7fd262d5-bfc7-49ae-908e-709fa9d0f55f","Type":"ContainerStarted","Data":"0d231656eec1735b1a5bc9e9719bbfb1dc2f5b357bdf072349808ab12f944278"} Mar 20 07:13:22 crc kubenswrapper[5136]: I0320 07:13:22.004942 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0423-account-create-update-ntmkb" event={"ID":"7fd262d5-bfc7-49ae-908e-709fa9d0f55f","Type":"ContainerStarted","Data":"d408b64e16a1a16cb3c2ba0552e5053433fcf11939b0c313e88451cde94de24c"} Mar 20 07:13:22 crc kubenswrapper[5136]: I0320 07:13:22.007167 5136 generic.go:334] "Generic (PLEG): container finished" podID="edb3559d-359a-4add-8216-afb68a19e111" containerID="89325ee63cd0d5963c16a3cd15b18e01966cac4c73b616f8222bb05ec0a94fbe" exitCode=0 Mar 20 07:13:22 crc kubenswrapper[5136]: I0320 07:13:22.007275 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4jdnj" event={"ID":"edb3559d-359a-4add-8216-afb68a19e111","Type":"ContainerDied","Data":"89325ee63cd0d5963c16a3cd15b18e01966cac4c73b616f8222bb05ec0a94fbe"} Mar 20 07:13:22 crc kubenswrapper[5136]: I0320 07:13:22.007357 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4jdnj" 
event={"ID":"edb3559d-359a-4add-8216-afb68a19e111","Type":"ContainerStarted","Data":"984fdff2ffc75a0094b3b874f3d37a6807c50a6a1b4538f4b70820bcd412bea0"} Mar 20 07:13:22 crc kubenswrapper[5136]: I0320 07:13:22.023981 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-xpg98" podStartSLOduration=2.023960397 podStartE2EDuration="2.023960397s" podCreationTimestamp="2026-03-20 07:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:13:22.015445622 +0000 UTC m=+1434.274756773" watchObservedRunningTime="2026-03-20 07:13:22.023960397 +0000 UTC m=+1434.283271548" Mar 20 07:13:22 crc kubenswrapper[5136]: I0320 07:13:22.061752 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-0423-account-create-update-ntmkb" podStartSLOduration=2.061733682 podStartE2EDuration="2.061733682s" podCreationTimestamp="2026-03-20 07:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:13:22.044112263 +0000 UTC m=+1434.303423424" watchObservedRunningTime="2026-03-20 07:13:22.061733682 +0000 UTC m=+1434.321044833" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.016200 5136 generic.go:334] "Generic (PLEG): container finished" podID="7fd262d5-bfc7-49ae-908e-709fa9d0f55f" containerID="0d231656eec1735b1a5bc9e9719bbfb1dc2f5b357bdf072349808ab12f944278" exitCode=0 Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.016276 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0423-account-create-update-ntmkb" event={"ID":"7fd262d5-bfc7-49ae-908e-709fa9d0f55f","Type":"ContainerDied","Data":"0d231656eec1735b1a5bc9e9719bbfb1dc2f5b357bdf072349808ab12f944278"} Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.017792 5136 generic.go:334] "Generic (PLEG): container 
finished" podID="2a1492b7-73df-440c-9246-ae0e3c2e8802" containerID="76ecc1efaf0109117b2021b8dc8f89423ec738c34b9f16b5ea8ada8e167cdf99" exitCode=0 Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.017875 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0f90-account-create-update-lv952" event={"ID":"2a1492b7-73df-440c-9246-ae0e3c2e8802","Type":"ContainerDied","Data":"76ecc1efaf0109117b2021b8dc8f89423ec738c34b9f16b5ea8ada8e167cdf99"} Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.020767 5136 generic.go:334] "Generic (PLEG): container finished" podID="fdfd9851-96cd-483e-9e66-b1cc255cb3e2" containerID="22e326ee5e74b9ee2e3ad6076eac75a725689f97883ae8bb80d1de284edb7a74" exitCode=0 Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.020872 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xpg98" event={"ID":"fdfd9851-96cd-483e-9e66-b1cc255cb3e2","Type":"ContainerDied","Data":"22e326ee5e74b9ee2e3ad6076eac75a725689f97883ae8bb80d1de284edb7a74"} Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.389592 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-e3bd-account-create-update-zlrc6" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.473282 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f91601d4-11a0-4327-8f7e-6856df2b4643-operator-scripts\") pod \"f91601d4-11a0-4327-8f7e-6856df2b4643\" (UID: \"f91601d4-11a0-4327-8f7e-6856df2b4643\") " Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.473438 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5xcq\" (UniqueName: \"kubernetes.io/projected/f91601d4-11a0-4327-8f7e-6856df2b4643-kube-api-access-j5xcq\") pod \"f91601d4-11a0-4327-8f7e-6856df2b4643\" (UID: \"f91601d4-11a0-4327-8f7e-6856df2b4643\") " Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.474106 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91601d4-11a0-4327-8f7e-6856df2b4643-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f91601d4-11a0-4327-8f7e-6856df2b4643" (UID: "f91601d4-11a0-4327-8f7e-6856df2b4643"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.474781 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f91601d4-11a0-4327-8f7e-6856df2b4643-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.479660 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f91601d4-11a0-4327-8f7e-6856df2b4643-kube-api-access-j5xcq" (OuterVolumeSpecName: "kube-api-access-j5xcq") pod "f91601d4-11a0-4327-8f7e-6856df2b4643" (UID: "f91601d4-11a0-4327-8f7e-6856df2b4643"). InnerVolumeSpecName "kube-api-access-j5xcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.482194 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2sj8m" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.525149 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-4jdnj" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.576161 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t25k2\" (UniqueName: \"kubernetes.io/projected/bfbcdb71-4e43-4243-a408-08d69b6d7328-kube-api-access-t25k2\") pod \"bfbcdb71-4e43-4243-a408-08d69b6d7328\" (UID: \"bfbcdb71-4e43-4243-a408-08d69b6d7328\") " Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.576214 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfbcdb71-4e43-4243-a408-08d69b6d7328-operator-scripts\") pod \"bfbcdb71-4e43-4243-a408-08d69b6d7328\" (UID: \"bfbcdb71-4e43-4243-a408-08d69b6d7328\") " Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.576703 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5xcq\" (UniqueName: \"kubernetes.io/projected/f91601d4-11a0-4327-8f7e-6856df2b4643-kube-api-access-j5xcq\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.576701 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfbcdb71-4e43-4243-a408-08d69b6d7328-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bfbcdb71-4e43-4243-a408-08d69b6d7328" (UID: "bfbcdb71-4e43-4243-a408-08d69b6d7328"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.580127 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfbcdb71-4e43-4243-a408-08d69b6d7328-kube-api-access-t25k2" (OuterVolumeSpecName: "kube-api-access-t25k2") pod "bfbcdb71-4e43-4243-a408-08d69b6d7328" (UID: "bfbcdb71-4e43-4243-a408-08d69b6d7328"). InnerVolumeSpecName "kube-api-access-t25k2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.677968 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edb3559d-359a-4add-8216-afb68a19e111-operator-scripts\") pod \"edb3559d-359a-4add-8216-afb68a19e111\" (UID: \"edb3559d-359a-4add-8216-afb68a19e111\") " Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.678173 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvtrm\" (UniqueName: \"kubernetes.io/projected/edb3559d-359a-4add-8216-afb68a19e111-kube-api-access-dvtrm\") pod \"edb3559d-359a-4add-8216-afb68a19e111\" (UID: \"edb3559d-359a-4add-8216-afb68a19e111\") " Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.678356 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edb3559d-359a-4add-8216-afb68a19e111-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "edb3559d-359a-4add-8216-afb68a19e111" (UID: "edb3559d-359a-4add-8216-afb68a19e111"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.678601 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t25k2\" (UniqueName: \"kubernetes.io/projected/bfbcdb71-4e43-4243-a408-08d69b6d7328-kube-api-access-t25k2\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.678623 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfbcdb71-4e43-4243-a408-08d69b6d7328-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.678635 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edb3559d-359a-4add-8216-afb68a19e111-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.681515 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edb3559d-359a-4add-8216-afb68a19e111-kube-api-access-dvtrm" (OuterVolumeSpecName: "kube-api-access-dvtrm") pod "edb3559d-359a-4add-8216-afb68a19e111" (UID: "edb3559d-359a-4add-8216-afb68a19e111"). InnerVolumeSpecName "kube-api-access-dvtrm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.780839 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvtrm\" (UniqueName: \"kubernetes.io/projected/edb3559d-359a-4add-8216-afb68a19e111-kube-api-access-dvtrm\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.032053 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e3bd-account-create-update-zlrc6" event={"ID":"f91601d4-11a0-4327-8f7e-6856df2b4643","Type":"ContainerDied","Data":"af8e134c0a47ca46cccc6237d8e45d49ba32aceec4f114111cd0acda77a55423"} Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.032121 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af8e134c0a47ca46cccc6237d8e45d49ba32aceec4f114111cd0acda77a55423" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.032067 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e3bd-account-create-update-zlrc6" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.045384 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2sj8m" event={"ID":"bfbcdb71-4e43-4243-a408-08d69b6d7328","Type":"ContainerDied","Data":"df133601606c7da3d6d3058e70578121e914114c2561a58b46f39470ce91218f"} Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.045419 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df133601606c7da3d6d3058e70578121e914114c2561a58b46f39470ce91218f" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.045421 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-2sj8m" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.047453 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4jdnj" event={"ID":"edb3559d-359a-4add-8216-afb68a19e111","Type":"ContainerDied","Data":"984fdff2ffc75a0094b3b874f3d37a6807c50a6a1b4538f4b70820bcd412bea0"} Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.047489 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="984fdff2ffc75a0094b3b874f3d37a6807c50a6a1b4538f4b70820bcd412bea0" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.047491 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-4jdnj" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.571779 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xpg98" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.589944 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0423-account-create-update-ntmkb" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.600540 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-0f90-account-create-update-lv952" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.627661 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.627945 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f7a82425-91b7-43b8-b26e-ace42be9cdba" containerName="glance-log" containerID="cri-o://0d813176fbff380f2ecf1396ef58dbd6653c9f7fc00b3a5aa2671557b6efffb1" gracePeriod=30 Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.628001 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f7a82425-91b7-43b8-b26e-ace42be9cdba" containerName="glance-httpd" containerID="cri-o://0955f2ff6e58a181eb4657826df44412140ece0a092f87584721929f1c23cd5d" gracePeriod=30 Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.692884 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fd262d5-bfc7-49ae-908e-709fa9d0f55f-operator-scripts\") pod \"7fd262d5-bfc7-49ae-908e-709fa9d0f55f\" (UID: \"7fd262d5-bfc7-49ae-908e-709fa9d0f55f\") " Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.693377 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk2sf\" (UniqueName: \"kubernetes.io/projected/7fd262d5-bfc7-49ae-908e-709fa9d0f55f-kube-api-access-vk2sf\") pod \"7fd262d5-bfc7-49ae-908e-709fa9d0f55f\" (UID: \"7fd262d5-bfc7-49ae-908e-709fa9d0f55f\") " Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.693467 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdfd9851-96cd-483e-9e66-b1cc255cb3e2-operator-scripts\") pod 
\"fdfd9851-96cd-483e-9e66-b1cc255cb3e2\" (UID: \"fdfd9851-96cd-483e-9e66-b1cc255cb3e2\") " Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.693560 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a1492b7-73df-440c-9246-ae0e3c2e8802-operator-scripts\") pod \"2a1492b7-73df-440c-9246-ae0e3c2e8802\" (UID: \"2a1492b7-73df-440c-9246-ae0e3c2e8802\") " Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.693651 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9wlx\" (UniqueName: \"kubernetes.io/projected/2a1492b7-73df-440c-9246-ae0e3c2e8802-kube-api-access-n9wlx\") pod \"2a1492b7-73df-440c-9246-ae0e3c2e8802\" (UID: \"2a1492b7-73df-440c-9246-ae0e3c2e8802\") " Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.693751 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdfd9851-96cd-483e-9e66-b1cc255cb3e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fdfd9851-96cd-483e-9e66-b1cc255cb3e2" (UID: "fdfd9851-96cd-483e-9e66-b1cc255cb3e2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.693756 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm9sg\" (UniqueName: \"kubernetes.io/projected/fdfd9851-96cd-483e-9e66-b1cc255cb3e2-kube-api-access-xm9sg\") pod \"fdfd9851-96cd-483e-9e66-b1cc255cb3e2\" (UID: \"fdfd9851-96cd-483e-9e66-b1cc255cb3e2\") " Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.694570 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdfd9851-96cd-483e-9e66-b1cc255cb3e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.693884 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fd262d5-bfc7-49ae-908e-709fa9d0f55f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7fd262d5-bfc7-49ae-908e-709fa9d0f55f" (UID: "7fd262d5-bfc7-49ae-908e-709fa9d0f55f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.694179 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a1492b7-73df-440c-9246-ae0e3c2e8802-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2a1492b7-73df-440c-9246-ae0e3c2e8802" (UID: "2a1492b7-73df-440c-9246-ae0e3c2e8802"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.699873 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a1492b7-73df-440c-9246-ae0e3c2e8802-kube-api-access-n9wlx" (OuterVolumeSpecName: "kube-api-access-n9wlx") pod "2a1492b7-73df-440c-9246-ae0e3c2e8802" (UID: "2a1492b7-73df-440c-9246-ae0e3c2e8802"). InnerVolumeSpecName "kube-api-access-n9wlx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.700558 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fd262d5-bfc7-49ae-908e-709fa9d0f55f-kube-api-access-vk2sf" (OuterVolumeSpecName: "kube-api-access-vk2sf") pod "7fd262d5-bfc7-49ae-908e-709fa9d0f55f" (UID: "7fd262d5-bfc7-49ae-908e-709fa9d0f55f"). InnerVolumeSpecName "kube-api-access-vk2sf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.705917 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdfd9851-96cd-483e-9e66-b1cc255cb3e2-kube-api-access-xm9sg" (OuterVolumeSpecName: "kube-api-access-xm9sg") pod "fdfd9851-96cd-483e-9e66-b1cc255cb3e2" (UID: "fdfd9851-96cd-483e-9e66-b1cc255cb3e2"). InnerVolumeSpecName "kube-api-access-xm9sg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.796409 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xm9sg\" (UniqueName: \"kubernetes.io/projected/fdfd9851-96cd-483e-9e66-b1cc255cb3e2-kube-api-access-xm9sg\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.796466 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fd262d5-bfc7-49ae-908e-709fa9d0f55f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.796479 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vk2sf\" (UniqueName: \"kubernetes.io/projected/7fd262d5-bfc7-49ae-908e-709fa9d0f55f-kube-api-access-vk2sf\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.796492 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2a1492b7-73df-440c-9246-ae0e3c2e8802-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.796503 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9wlx\" (UniqueName: \"kubernetes.io/projected/2a1492b7-73df-440c-9246-ae0e3c2e8802-kube-api-access-n9wlx\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.057667 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0f90-account-create-update-lv952" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.057953 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0f90-account-create-update-lv952" event={"ID":"2a1492b7-73df-440c-9246-ae0e3c2e8802","Type":"ContainerDied","Data":"0e1ca15b07b2733ca7965ba422cda8fcd277942247dadeac4ec18d6a4869db7e"} Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.058013 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e1ca15b07b2733ca7965ba422cda8fcd277942247dadeac4ec18d6a4869db7e" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.060330 5136 generic.go:334] "Generic (PLEG): container finished" podID="f7a82425-91b7-43b8-b26e-ace42be9cdba" containerID="0d813176fbff380f2ecf1396ef58dbd6653c9f7fc00b3a5aa2671557b6efffb1" exitCode=143 Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.060410 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7a82425-91b7-43b8-b26e-ace42be9cdba","Type":"ContainerDied","Data":"0d813176fbff380f2ecf1396ef58dbd6653c9f7fc00b3a5aa2671557b6efffb1"} Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.062506 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-xpg98" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.062867 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xpg98" event={"ID":"fdfd9851-96cd-483e-9e66-b1cc255cb3e2","Type":"ContainerDied","Data":"014c5744df20c0ca1d0890e233c317c3d3d1672f4d7a42bb3df502680f5a6c07"} Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.062980 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="014c5744df20c0ca1d0890e233c317c3d3d1672f4d7a42bb3df502680f5a6c07" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.064600 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0423-account-create-update-ntmkb" event={"ID":"7fd262d5-bfc7-49ae-908e-709fa9d0f55f","Type":"ContainerDied","Data":"d408b64e16a1a16cb3c2ba0552e5053433fcf11939b0c313e88451cde94de24c"} Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.064625 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d408b64e16a1a16cb3c2ba0552e5053433fcf11939b0c313e88451cde94de24c" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.064678 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0423-account-create-update-ntmkb" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.731732 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.733435 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5249fb5b-8908-4b21-9ea3-28508854ce4a" containerName="glance-log" containerID="cri-o://2e7ea895afe37d08e9fbef22c08d1dd0be01aa9e1e4c60d8c5a83cf678e40e12" gracePeriod=30 Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.734354 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5249fb5b-8908-4b21-9ea3-28508854ce4a" containerName="glance-httpd" containerID="cri-o://7cf9a9b5f81baf8817adb933425de6d8ef8fbbd11c60012bf91c2922f7222f16" gracePeriod=30 Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.848409 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.922154 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/970a5b8d-94f8-4638-b351-40867c27568a-log-httpd\") pod \"970a5b8d-94f8-4638-b351-40867c27568a\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.922558 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-config-data\") pod \"970a5b8d-94f8-4638-b351-40867c27568a\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.922623 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/970a5b8d-94f8-4638-b351-40867c27568a-run-httpd\") pod \"970a5b8d-94f8-4638-b351-40867c27568a\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.922661 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-sg-core-conf-yaml\") pod \"970a5b8d-94f8-4638-b351-40867c27568a\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.922728 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/970a5b8d-94f8-4638-b351-40867c27568a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "970a5b8d-94f8-4638-b351-40867c27568a" (UID: "970a5b8d-94f8-4638-b351-40867c27568a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.922754 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-scripts\") pod \"970a5b8d-94f8-4638-b351-40867c27568a\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.922780 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-combined-ca-bundle\") pod \"970a5b8d-94f8-4638-b351-40867c27568a\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.922879 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5ppp\" (UniqueName: \"kubernetes.io/projected/970a5b8d-94f8-4638-b351-40867c27568a-kube-api-access-b5ppp\") pod \"970a5b8d-94f8-4638-b351-40867c27568a\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.922985 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/970a5b8d-94f8-4638-b351-40867c27568a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "970a5b8d-94f8-4638-b351-40867c27568a" (UID: "970a5b8d-94f8-4638-b351-40867c27568a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.923390 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/970a5b8d-94f8-4638-b351-40867c27568a-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.923411 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/970a5b8d-94f8-4638-b351-40867c27568a-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.928492 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-scripts" (OuterVolumeSpecName: "scripts") pod "970a5b8d-94f8-4638-b351-40867c27568a" (UID: "970a5b8d-94f8-4638-b351-40867c27568a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.932988 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/970a5b8d-94f8-4638-b351-40867c27568a-kube-api-access-b5ppp" (OuterVolumeSpecName: "kube-api-access-b5ppp") pod "970a5b8d-94f8-4638-b351-40867c27568a" (UID: "970a5b8d-94f8-4638-b351-40867c27568a"). InnerVolumeSpecName "kube-api-access-b5ppp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.956655 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "970a5b8d-94f8-4638-b351-40867c27568a" (UID: "970a5b8d-94f8-4638-b351-40867c27568a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.996132 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "970a5b8d-94f8-4638-b351-40867c27568a" (UID: "970a5b8d-94f8-4638-b351-40867c27568a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.015242 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-config-data" (OuterVolumeSpecName: "config-data") pod "970a5b8d-94f8-4638-b351-40867c27568a" (UID: "970a5b8d-94f8-4638-b351-40867c27568a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.029846 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.029881 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.029919 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5ppp\" (UniqueName: \"kubernetes.io/projected/970a5b8d-94f8-4638-b351-40867c27568a-kube-api-access-b5ppp\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.029930 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 
07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.029941 5136 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.076242 5136 generic.go:334] "Generic (PLEG): container finished" podID="5249fb5b-8908-4b21-9ea3-28508854ce4a" containerID="2e7ea895afe37d08e9fbef22c08d1dd0be01aa9e1e4c60d8c5a83cf678e40e12" exitCode=143 Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.076303 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5249fb5b-8908-4b21-9ea3-28508854ce4a","Type":"ContainerDied","Data":"2e7ea895afe37d08e9fbef22c08d1dd0be01aa9e1e4c60d8c5a83cf678e40e12"} Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.079134 5136 generic.go:334] "Generic (PLEG): container finished" podID="970a5b8d-94f8-4638-b351-40867c27568a" containerID="5d436980ce14d40aef3f47b677d0bc5908f793eaa1792108e00dd687af9bd7f5" exitCode=0 Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.079166 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"970a5b8d-94f8-4638-b351-40867c27568a","Type":"ContainerDied","Data":"5d436980ce14d40aef3f47b677d0bc5908f793eaa1792108e00dd687af9bd7f5"} Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.079187 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"970a5b8d-94f8-4638-b351-40867c27568a","Type":"ContainerDied","Data":"b1ecb08e86fb97ca8048b41dfca62020c1e1817555ee06471950360149ce18be"} Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.079204 5136 scope.go:117] "RemoveContainer" containerID="34f70125332213277198b4aa08cb4ab1e11269e86d48316d0f7016fd34641a11" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.079242 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.105059 5136 scope.go:117] "RemoveContainer" containerID="ffe7e9a36109064a023ce4deecb9d17707e69376558ef1e507130c92da224a49" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.120636 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.129644 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.141428 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:26 crc kubenswrapper[5136]: E0320 07:13:26.141758 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a1492b7-73df-440c-9246-ae0e3c2e8802" containerName="mariadb-account-create-update" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.141772 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a1492b7-73df-440c-9246-ae0e3c2e8802" containerName="mariadb-account-create-update" Mar 20 07:13:26 crc kubenswrapper[5136]: E0320 07:13:26.141790 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="ceilometer-notification-agent" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.141798 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="ceilometer-notification-agent" Mar 20 07:13:26 crc kubenswrapper[5136]: E0320 07:13:26.141822 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="proxy-httpd" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.141829 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="proxy-httpd" Mar 20 07:13:26 crc kubenswrapper[5136]: E0320 07:13:26.141840 5136 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f91601d4-11a0-4327-8f7e-6856df2b4643" containerName="mariadb-account-create-update" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.141847 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91601d4-11a0-4327-8f7e-6856df2b4643" containerName="mariadb-account-create-update" Mar 20 07:13:26 crc kubenswrapper[5136]: E0320 07:13:26.141855 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="sg-core" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.141861 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="sg-core" Mar 20 07:13:26 crc kubenswrapper[5136]: E0320 07:13:26.141872 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="ceilometer-central-agent" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.141877 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="ceilometer-central-agent" Mar 20 07:13:26 crc kubenswrapper[5136]: E0320 07:13:26.141888 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfbcdb71-4e43-4243-a408-08d69b6d7328" containerName="mariadb-database-create" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.141894 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfbcdb71-4e43-4243-a408-08d69b6d7328" containerName="mariadb-database-create" Mar 20 07:13:26 crc kubenswrapper[5136]: E0320 07:13:26.141904 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edb3559d-359a-4add-8216-afb68a19e111" containerName="mariadb-database-create" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.141910 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="edb3559d-359a-4add-8216-afb68a19e111" containerName="mariadb-database-create" Mar 20 07:13:26 crc
kubenswrapper[5136]: E0320 07:13:26.141919 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd262d5-bfc7-49ae-908e-709fa9d0f55f" containerName="mariadb-account-create-update" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.141924 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd262d5-bfc7-49ae-908e-709fa9d0f55f" containerName="mariadb-account-create-update" Mar 20 07:13:26 crc kubenswrapper[5136]: E0320 07:13:26.141941 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdfd9851-96cd-483e-9e66-b1cc255cb3e2" containerName="mariadb-database-create" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.141947 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdfd9851-96cd-483e-9e66-b1cc255cb3e2" containerName="mariadb-database-create" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.142113 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="sg-core" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.142124 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="proxy-httpd" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.142132 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fd262d5-bfc7-49ae-908e-709fa9d0f55f" containerName="mariadb-account-create-update" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.142144 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdfd9851-96cd-483e-9e66-b1cc255cb3e2" containerName="mariadb-database-create" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.142154 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f91601d4-11a0-4327-8f7e-6856df2b4643" containerName="mariadb-account-create-update" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.142169 5136 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="ceilometer-central-agent" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.142184 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="edb3559d-359a-4add-8216-afb68a19e111" containerName="mariadb-database-create" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.142196 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfbcdb71-4e43-4243-a408-08d69b6d7328" containerName="mariadb-database-create" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.142211 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a1492b7-73df-440c-9246-ae0e3c2e8802" containerName="mariadb-account-create-update" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.142218 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="ceilometer-notification-agent" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.143696 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.144873 5136 scope.go:117] "RemoveContainer" containerID="b2f12931ef7797eac5d157b9c5f74cea60eb664c3d141b9d09cf7fe30ae5c1fb" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.152846 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.153013 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.165681 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.191079 5136 scope.go:117] "RemoveContainer" containerID="5d436980ce14d40aef3f47b677d0bc5908f793eaa1792108e00dd687af9bd7f5" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.207588 5136 scope.go:117] "RemoveContainer" containerID="34f70125332213277198b4aa08cb4ab1e11269e86d48316d0f7016fd34641a11" Mar 20 07:13:26 crc kubenswrapper[5136]: E0320 07:13:26.208177 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34f70125332213277198b4aa08cb4ab1e11269e86d48316d0f7016fd34641a11\": container with ID starting with 34f70125332213277198b4aa08cb4ab1e11269e86d48316d0f7016fd34641a11 not found: ID does not exist" containerID="34f70125332213277198b4aa08cb4ab1e11269e86d48316d0f7016fd34641a11" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.208245 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34f70125332213277198b4aa08cb4ab1e11269e86d48316d0f7016fd34641a11"} err="failed to get container status \"34f70125332213277198b4aa08cb4ab1e11269e86d48316d0f7016fd34641a11\": rpc error: code = NotFound desc = could not find container \"34f70125332213277198b4aa08cb4ab1e11269e86d48316d0f7016fd34641a11\": 
container with ID starting with 34f70125332213277198b4aa08cb4ab1e11269e86d48316d0f7016fd34641a11 not found: ID does not exist" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.208275 5136 scope.go:117] "RemoveContainer" containerID="ffe7e9a36109064a023ce4deecb9d17707e69376558ef1e507130c92da224a49" Mar 20 07:13:26 crc kubenswrapper[5136]: E0320 07:13:26.209923 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffe7e9a36109064a023ce4deecb9d17707e69376558ef1e507130c92da224a49\": container with ID starting with ffe7e9a36109064a023ce4deecb9d17707e69376558ef1e507130c92da224a49 not found: ID does not exist" containerID="ffe7e9a36109064a023ce4deecb9d17707e69376558ef1e507130c92da224a49" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.209971 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffe7e9a36109064a023ce4deecb9d17707e69376558ef1e507130c92da224a49"} err="failed to get container status \"ffe7e9a36109064a023ce4deecb9d17707e69376558ef1e507130c92da224a49\": rpc error: code = NotFound desc = could not find container \"ffe7e9a36109064a023ce4deecb9d17707e69376558ef1e507130c92da224a49\": container with ID starting with ffe7e9a36109064a023ce4deecb9d17707e69376558ef1e507130c92da224a49 not found: ID does not exist" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.209997 5136 scope.go:117] "RemoveContainer" containerID="b2f12931ef7797eac5d157b9c5f74cea60eb664c3d141b9d09cf7fe30ae5c1fb" Mar 20 07:13:26 crc kubenswrapper[5136]: E0320 07:13:26.210385 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2f12931ef7797eac5d157b9c5f74cea60eb664c3d141b9d09cf7fe30ae5c1fb\": container with ID starting with b2f12931ef7797eac5d157b9c5f74cea60eb664c3d141b9d09cf7fe30ae5c1fb not found: ID does not exist" 
containerID="b2f12931ef7797eac5d157b9c5f74cea60eb664c3d141b9d09cf7fe30ae5c1fb" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.210415 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2f12931ef7797eac5d157b9c5f74cea60eb664c3d141b9d09cf7fe30ae5c1fb"} err="failed to get container status \"b2f12931ef7797eac5d157b9c5f74cea60eb664c3d141b9d09cf7fe30ae5c1fb\": rpc error: code = NotFound desc = could not find container \"b2f12931ef7797eac5d157b9c5f74cea60eb664c3d141b9d09cf7fe30ae5c1fb\": container with ID starting with b2f12931ef7797eac5d157b9c5f74cea60eb664c3d141b9d09cf7fe30ae5c1fb not found: ID does not exist" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.210457 5136 scope.go:117] "RemoveContainer" containerID="5d436980ce14d40aef3f47b677d0bc5908f793eaa1792108e00dd687af9bd7f5" Mar 20 07:13:26 crc kubenswrapper[5136]: E0320 07:13:26.210782 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d436980ce14d40aef3f47b677d0bc5908f793eaa1792108e00dd687af9bd7f5\": container with ID starting with 5d436980ce14d40aef3f47b677d0bc5908f793eaa1792108e00dd687af9bd7f5 not found: ID does not exist" containerID="5d436980ce14d40aef3f47b677d0bc5908f793eaa1792108e00dd687af9bd7f5" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.210859 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d436980ce14d40aef3f47b677d0bc5908f793eaa1792108e00dd687af9bd7f5"} err="failed to get container status \"5d436980ce14d40aef3f47b677d0bc5908f793eaa1792108e00dd687af9bd7f5\": rpc error: code = NotFound desc = could not find container \"5d436980ce14d40aef3f47b677d0bc5908f793eaa1792108e00dd687af9bd7f5\": container with ID starting with 5d436980ce14d40aef3f47b677d0bc5908f793eaa1792108e00dd687af9bd7f5 not found: ID does not exist" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.235166 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjcz6\" (UniqueName: \"kubernetes.io/projected/be945390-82b0-4512-8028-a0207cd7796b-kube-api-access-xjcz6\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.235216 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-scripts\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.235281 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.235322 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be945390-82b0-4512-8028-a0207cd7796b-run-httpd\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.235339 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-config-data\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.235478 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/be945390-82b0-4512-8028-a0207cd7796b-log-httpd\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.235571 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.337751 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be945390-82b0-4512-8028-a0207cd7796b-log-httpd\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.338115 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.339269 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjcz6\" (UniqueName: \"kubernetes.io/projected/be945390-82b0-4512-8028-a0207cd7796b-kube-api-access-xjcz6\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.339455 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-scripts\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc
kubenswrapper[5136]: I0320 07:13:26.339915 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.340208 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be945390-82b0-4512-8028-a0207cd7796b-run-httpd\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.340293 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-config-data\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.338382 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be945390-82b0-4512-8028-a0207cd7796b-log-httpd\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.340796 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be945390-82b0-4512-8028-a0207cd7796b-run-httpd\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.342098 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.344251 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-scripts\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.345446 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-config-data\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.346427 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.359634 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjcz6\" (UniqueName: \"kubernetes.io/projected/be945390-82b0-4512-8028-a0207cd7796b-kube-api-access-xjcz6\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.413591 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="970a5b8d-94f8-4638-b351-40867c27568a" path="/var/lib/kubelet/pods/970a5b8d-94f8-4638-b351-40867c27568a/volumes" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.462730 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: W0320 07:13:26.946989 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe945390_82b0_4512_8028_a0207cd7796b.slice/crio-5cb6cc418050d60bffd563fe7ea3892f2ad06634082325336f1fe95bdace7d2b WatchSource:0}: Error finding container 5cb6cc418050d60bffd563fe7ea3892f2ad06634082325336f1fe95bdace7d2b: Status 404 returned error can't find the container with id 5cb6cc418050d60bffd563fe7ea3892f2ad06634082325336f1fe95bdace7d2b Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.948755 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:27 crc kubenswrapper[5136]: I0320 07:13:27.089023 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be945390-82b0-4512-8028-a0207cd7796b","Type":"ContainerStarted","Data":"5cb6cc418050d60bffd563fe7ea3892f2ad06634082325336f1fe95bdace7d2b"} Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.105059 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be945390-82b0-4512-8028-a0207cd7796b","Type":"ContainerStarted","Data":"b5b1edae1009c5322668a89756ee1e94a710a5cf94e54c2867151f60883078bc"} Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.127895 5136 generic.go:334] "Generic (PLEG): container finished" podID="f7a82425-91b7-43b8-b26e-ace42be9cdba" containerID="0955f2ff6e58a181eb4657826df44412140ece0a092f87584721929f1c23cd5d" exitCode=0 Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.127935 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7a82425-91b7-43b8-b26e-ace42be9cdba","Type":"ContainerDied","Data":"0955f2ff6e58a181eb4657826df44412140ece0a092f87584721929f1c23cd5d"} Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.211954 5136 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.277415 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrqvz\" (UniqueName: \"kubernetes.io/projected/f7a82425-91b7-43b8-b26e-ace42be9cdba-kube-api-access-zrqvz\") pod \"f7a82425-91b7-43b8-b26e-ace42be9cdba\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.277489 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-scripts\") pod \"f7a82425-91b7-43b8-b26e-ace42be9cdba\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.277553 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-config-data\") pod \"f7a82425-91b7-43b8-b26e-ace42be9cdba\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.277589 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-combined-ca-bundle\") pod \"f7a82425-91b7-43b8-b26e-ace42be9cdba\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.277608 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a82425-91b7-43b8-b26e-ace42be9cdba-logs\") pod \"f7a82425-91b7-43b8-b26e-ace42be9cdba\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.277624 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-public-tls-certs\") pod \"f7a82425-91b7-43b8-b26e-ace42be9cdba\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.277687 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7a82425-91b7-43b8-b26e-ace42be9cdba-httpd-run\") pod \"f7a82425-91b7-43b8-b26e-ace42be9cdba\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.277711 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"f7a82425-91b7-43b8-b26e-ace42be9cdba\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.282392 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7a82425-91b7-43b8-b26e-ace42be9cdba-logs" (OuterVolumeSpecName: "logs") pod "f7a82425-91b7-43b8-b26e-ace42be9cdba" (UID: "f7a82425-91b7-43b8-b26e-ace42be9cdba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.284584 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "f7a82425-91b7-43b8-b26e-ace42be9cdba" (UID: "f7a82425-91b7-43b8-b26e-ace42be9cdba"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.285464 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-scripts" (OuterVolumeSpecName: "scripts") pod "f7a82425-91b7-43b8-b26e-ace42be9cdba" (UID: "f7a82425-91b7-43b8-b26e-ace42be9cdba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.287176 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7a82425-91b7-43b8-b26e-ace42be9cdba-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f7a82425-91b7-43b8-b26e-ace42be9cdba" (UID: "f7a82425-91b7-43b8-b26e-ace42be9cdba"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.289565 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7a82425-91b7-43b8-b26e-ace42be9cdba-kube-api-access-zrqvz" (OuterVolumeSpecName: "kube-api-access-zrqvz") pod "f7a82425-91b7-43b8-b26e-ace42be9cdba" (UID: "f7a82425-91b7-43b8-b26e-ace42be9cdba"). InnerVolumeSpecName "kube-api-access-zrqvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.309026 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7a82425-91b7-43b8-b26e-ace42be9cdba" (UID: "f7a82425-91b7-43b8-b26e-ace42be9cdba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.369275 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-config-data" (OuterVolumeSpecName: "config-data") pod "f7a82425-91b7-43b8-b26e-ace42be9cdba" (UID: "f7a82425-91b7-43b8-b26e-ace42be9cdba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.378150 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f7a82425-91b7-43b8-b26e-ace42be9cdba" (UID: "f7a82425-91b7-43b8-b26e-ace42be9cdba"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.379536 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrqvz\" (UniqueName: \"kubernetes.io/projected/f7a82425-91b7-43b8-b26e-ace42be9cdba-kube-api-access-zrqvz\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.379556 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.379565 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.379574 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 
07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.379582 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a82425-91b7-43b8-b26e-ace42be9cdba-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.379591 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.379598 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7a82425-91b7-43b8-b26e-ace42be9cdba-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.379626 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.412784 5136 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.481487 5136 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.037139 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="5249fb5b-8908-4b21-9ea3-28508854ce4a" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.152:9292/healthcheck\": read tcp 10.217.0.2:46442->10.217.0.152:9292: read: connection reset by peer" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.037149 5136 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="5249fb5b-8908-4b21-9ea3-28508854ce4a" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.152:9292/healthcheck\": read tcp 10.217.0.2:46436->10.217.0.152:9292: read: connection reset by peer" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.138146 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be945390-82b0-4512-8028-a0207cd7796b","Type":"ContainerStarted","Data":"34fa52e053cd85fb4fd84973fd16df7f7008a21e7c7592737d31a09951fc533c"} Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.150347 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7a82425-91b7-43b8-b26e-ace42be9cdba","Type":"ContainerDied","Data":"6c6e6f554a2daf084995a53a820c6c15f7723c013708ede47a5e369f225ae2a2"} Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.150397 5136 scope.go:117] "RemoveContainer" containerID="0955f2ff6e58a181eb4657826df44412140ece0a092f87584721929f1c23cd5d" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.150405 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.201748 5136 scope.go:117] "RemoveContainer" containerID="0d813176fbff380f2ecf1396ef58dbd6653c9f7fc00b3a5aa2671557b6efffb1" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.238174 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.246487 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.252899 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:13:29 crc kubenswrapper[5136]: E0320 07:13:29.253231 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a82425-91b7-43b8-b26e-ace42be9cdba" containerName="glance-httpd" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.253244 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a82425-91b7-43b8-b26e-ace42be9cdba" containerName="glance-httpd" Mar 20 07:13:29 crc kubenswrapper[5136]: E0320 07:13:29.253268 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a82425-91b7-43b8-b26e-ace42be9cdba" containerName="glance-log" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.253273 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a82425-91b7-43b8-b26e-ace42be9cdba" containerName="glance-log" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.253450 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a82425-91b7-43b8-b26e-ace42be9cdba" containerName="glance-log" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.253467 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a82425-91b7-43b8-b26e-ace42be9cdba" containerName="glance-httpd" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.254269 5136 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.287589 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.287863 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.294721 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.397860 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8jnh\" (UniqueName: \"kubernetes.io/projected/fe20adf9-d6e2-4487-a176-32ddd55eb051-kube-api-access-b8jnh\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.397897 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.397935 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.397985 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.398013 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-config-data\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.398030 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-scripts\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.398063 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe20adf9-d6e2-4487-a176-32ddd55eb051-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.398108 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe20adf9-d6e2-4487-a176-32ddd55eb051-logs\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.499359 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.499675 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-config-data\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.499695 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-scripts\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.499730 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe20adf9-d6e2-4487-a176-32ddd55eb051-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.499777 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe20adf9-d6e2-4487-a176-32ddd55eb051-logs\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.499823 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8jnh\" (UniqueName: 
\"kubernetes.io/projected/fe20adf9-d6e2-4487-a176-32ddd55eb051-kube-api-access-b8jnh\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.499844 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.499883 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.501789 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe20adf9-d6e2-4487-a176-32ddd55eb051-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.501835 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe20adf9-d6e2-4487-a176-32ddd55eb051-logs\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.502163 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: 
\"fe20adf9-d6e2-4487-a176-32ddd55eb051\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.520280 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-scripts\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.520885 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-config-data\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.521408 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.521473 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.524971 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8jnh\" (UniqueName: \"kubernetes.io/projected/fe20adf9-d6e2-4487-a176-32ddd55eb051-kube-api-access-b8jnh\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " 
pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.579606 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.610077 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.730260 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.806024 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5249fb5b-8908-4b21-9ea3-28508854ce4a-httpd-run\") pod \"5249fb5b-8908-4b21-9ea3-28508854ce4a\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.806081 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"5249fb5b-8908-4b21-9ea3-28508854ce4a\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.806123 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-combined-ca-bundle\") pod \"5249fb5b-8908-4b21-9ea3-28508854ce4a\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.806160 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-internal-tls-certs\") pod \"5249fb5b-8908-4b21-9ea3-28508854ce4a\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.806268 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx28m\" (UniqueName: \"kubernetes.io/projected/5249fb5b-8908-4b21-9ea3-28508854ce4a-kube-api-access-qx28m\") pod \"5249fb5b-8908-4b21-9ea3-28508854ce4a\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.806321 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-scripts\") pod \"5249fb5b-8908-4b21-9ea3-28508854ce4a\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.806393 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-config-data\") pod \"5249fb5b-8908-4b21-9ea3-28508854ce4a\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.806425 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5249fb5b-8908-4b21-9ea3-28508854ce4a-logs\") pod \"5249fb5b-8908-4b21-9ea3-28508854ce4a\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.807292 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5249fb5b-8908-4b21-9ea3-28508854ce4a-logs" (OuterVolumeSpecName: "logs") pod "5249fb5b-8908-4b21-9ea3-28508854ce4a" (UID: "5249fb5b-8908-4b21-9ea3-28508854ce4a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.807565 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5249fb5b-8908-4b21-9ea3-28508854ce4a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5249fb5b-8908-4b21-9ea3-28508854ce4a" (UID: "5249fb5b-8908-4b21-9ea3-28508854ce4a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.816432 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5249fb5b-8908-4b21-9ea3-28508854ce4a-kube-api-access-qx28m" (OuterVolumeSpecName: "kube-api-access-qx28m") pod "5249fb5b-8908-4b21-9ea3-28508854ce4a" (UID: "5249fb5b-8908-4b21-9ea3-28508854ce4a"). InnerVolumeSpecName "kube-api-access-qx28m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.819246 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "5249fb5b-8908-4b21-9ea3-28508854ce4a" (UID: "5249fb5b-8908-4b21-9ea3-28508854ce4a"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.819294 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-scripts" (OuterVolumeSpecName: "scripts") pod "5249fb5b-8908-4b21-9ea3-28508854ce4a" (UID: "5249fb5b-8908-4b21-9ea3-28508854ce4a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.857468 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5249fb5b-8908-4b21-9ea3-28508854ce4a" (UID: "5249fb5b-8908-4b21-9ea3-28508854ce4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.878707 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-config-data" (OuterVolumeSpecName: "config-data") pod "5249fb5b-8908-4b21-9ea3-28508854ce4a" (UID: "5249fb5b-8908-4b21-9ea3-28508854ce4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.909956 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5249fb5b-8908-4b21-9ea3-28508854ce4a-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.909987 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5249fb5b-8908-4b21-9ea3-28508854ce4a-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.910017 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.910026 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:29 crc 
kubenswrapper[5136]: I0320 07:13:29.910039 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx28m\" (UniqueName: \"kubernetes.io/projected/5249fb5b-8908-4b21-9ea3-28508854ce4a-kube-api-access-qx28m\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.910047 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.910056 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.914258 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5249fb5b-8908-4b21-9ea3-28508854ce4a" (UID: "5249fb5b-8908-4b21-9ea3-28508854ce4a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.942201 5136 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.012528 5136 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.012565 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.164945 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be945390-82b0-4512-8028-a0207cd7796b","Type":"ContainerStarted","Data":"4dd1fda5ac18c23a32359ccddfe66ce995d3d62967fe3ba4801e95a4970c56fb"} Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.168183 5136 generic.go:334] "Generic (PLEG): container finished" podID="5249fb5b-8908-4b21-9ea3-28508854ce4a" containerID="7cf9a9b5f81baf8817adb933425de6d8ef8fbbd11c60012bf91c2922f7222f16" exitCode=0 Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.168229 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5249fb5b-8908-4b21-9ea3-28508854ce4a","Type":"ContainerDied","Data":"7cf9a9b5f81baf8817adb933425de6d8ef8fbbd11c60012bf91c2922f7222f16"} Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.168252 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5249fb5b-8908-4b21-9ea3-28508854ce4a","Type":"ContainerDied","Data":"c7adb5a2a9c06da1244ea87915af7e6cfa3d0ce95887a1e45fa3f12d10155ea2"} Mar 20 07:13:30 
crc kubenswrapper[5136]: I0320 07:13:30.168273 5136 scope.go:117] "RemoveContainer" containerID="7cf9a9b5f81baf8817adb933425de6d8ef8fbbd11c60012bf91c2922f7222f16" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.168302 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.201937 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.219362 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.238295 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:13:30 crc kubenswrapper[5136]: E0320 07:13:30.238694 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5249fb5b-8908-4b21-9ea3-28508854ce4a" containerName="glance-log" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.238707 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5249fb5b-8908-4b21-9ea3-28508854ce4a" containerName="glance-log" Mar 20 07:13:30 crc kubenswrapper[5136]: E0320 07:13:30.238722 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5249fb5b-8908-4b21-9ea3-28508854ce4a" containerName="glance-httpd" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.238727 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5249fb5b-8908-4b21-9ea3-28508854ce4a" containerName="glance-httpd" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.238933 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5249fb5b-8908-4b21-9ea3-28508854ce4a" containerName="glance-httpd" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.238954 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5249fb5b-8908-4b21-9ea3-28508854ce4a" 
containerName="glance-log" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.240012 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.241988 5136 scope.go:117] "RemoveContainer" containerID="2e7ea895afe37d08e9fbef22c08d1dd0be01aa9e1e4c60d8c5a83cf678e40e12" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.242405 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.242613 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.284713 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.293116 5136 scope.go:117] "RemoveContainer" containerID="7cf9a9b5f81baf8817adb933425de6d8ef8fbbd11c60012bf91c2922f7222f16" Mar 20 07:13:30 crc kubenswrapper[5136]: E0320 07:13:30.293585 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cf9a9b5f81baf8817adb933425de6d8ef8fbbd11c60012bf91c2922f7222f16\": container with ID starting with 7cf9a9b5f81baf8817adb933425de6d8ef8fbbd11c60012bf91c2922f7222f16 not found: ID does not exist" containerID="7cf9a9b5f81baf8817adb933425de6d8ef8fbbd11c60012bf91c2922f7222f16" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.293617 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cf9a9b5f81baf8817adb933425de6d8ef8fbbd11c60012bf91c2922f7222f16"} err="failed to get container status \"7cf9a9b5f81baf8817adb933425de6d8ef8fbbd11c60012bf91c2922f7222f16\": rpc error: code = NotFound desc = could not find container 
\"7cf9a9b5f81baf8817adb933425de6d8ef8fbbd11c60012bf91c2922f7222f16\": container with ID starting with 7cf9a9b5f81baf8817adb933425de6d8ef8fbbd11c60012bf91c2922f7222f16 not found: ID does not exist" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.293642 5136 scope.go:117] "RemoveContainer" containerID="2e7ea895afe37d08e9fbef22c08d1dd0be01aa9e1e4c60d8c5a83cf678e40e12" Mar 20 07:13:30 crc kubenswrapper[5136]: E0320 07:13:30.293857 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e7ea895afe37d08e9fbef22c08d1dd0be01aa9e1e4c60d8c5a83cf678e40e12\": container with ID starting with 2e7ea895afe37d08e9fbef22c08d1dd0be01aa9e1e4c60d8c5a83cf678e40e12 not found: ID does not exist" containerID="2e7ea895afe37d08e9fbef22c08d1dd0be01aa9e1e4c60d8c5a83cf678e40e12" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.293880 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e7ea895afe37d08e9fbef22c08d1dd0be01aa9e1e4c60d8c5a83cf678e40e12"} err="failed to get container status \"2e7ea895afe37d08e9fbef22c08d1dd0be01aa9e1e4c60d8c5a83cf678e40e12\": rpc error: code = NotFound desc = could not find container \"2e7ea895afe37d08e9fbef22c08d1dd0be01aa9e1e4c60d8c5a83cf678e40e12\": container with ID starting with 2e7ea895afe37d08e9fbef22c08d1dd0be01aa9e1e4c60d8c5a83cf678e40e12 not found: ID does not exist" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.307973 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:13:30 crc kubenswrapper[5136]: W0320 07:13:30.308359 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe20adf9_d6e2_4487_a176_32ddd55eb051.slice/crio-9a85481c71cfaf5395cd3f9b7fc38785dc4f071aee32ec8ab4a9c0c94e256ebb WatchSource:0}: Error finding container 
9a85481c71cfaf5395cd3f9b7fc38785dc4f071aee32ec8ab4a9c0c94e256ebb: Status 404 returned error can't find the container with id 9a85481c71cfaf5395cd3f9b7fc38785dc4f071aee32ec8ab4a9c0c94e256ebb Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.319797 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.319887 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.319919 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.319956 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.319993 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.320035 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/141e5942-2bf9-424c-a6a7-7c93afdad7dc-logs\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.320082 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl6kj\" (UniqueName: \"kubernetes.io/projected/141e5942-2bf9-424c-a6a7-7c93afdad7dc-kube-api-access-jl6kj\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.320142 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/141e5942-2bf9-424c-a6a7-7c93afdad7dc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.410351 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5249fb5b-8908-4b21-9ea3-28508854ce4a" path="/var/lib/kubelet/pods/5249fb5b-8908-4b21-9ea3-28508854ce4a/volumes" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.411183 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7a82425-91b7-43b8-b26e-ace42be9cdba" path="/var/lib/kubelet/pods/f7a82425-91b7-43b8-b26e-ace42be9cdba/volumes" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.423176 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl6kj\" (UniqueName: \"kubernetes.io/projected/141e5942-2bf9-424c-a6a7-7c93afdad7dc-kube-api-access-jl6kj\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.423319 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/141e5942-2bf9-424c-a6a7-7c93afdad7dc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.423422 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.423518 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.423564 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.423646 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.423685 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/141e5942-2bf9-424c-a6a7-7c93afdad7dc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.423697 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.423772 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/141e5942-2bf9-424c-a6a7-7c93afdad7dc-logs\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.423914 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.424550 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/141e5942-2bf9-424c-a6a7-7c93afdad7dc-logs\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.429394 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.429432 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.435047 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.435446 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.442454 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl6kj\" (UniqueName: \"kubernetes.io/projected/141e5942-2bf9-424c-a6a7-7c93afdad7dc-kube-api-access-jl6kj\") pod 
\"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.452929 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.573993 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.678783 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rzgpn"] Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.679883 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.682522 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.682523 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-w7tkw" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.682678 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.694646 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rzgpn"] Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.729512 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4hm6\" (UniqueName: 
\"kubernetes.io/projected/2e901a54-c442-45fd-a0d8-1568f850efb4-kube-api-access-w4hm6\") pod \"nova-cell0-conductor-db-sync-rzgpn\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.729757 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-scripts\") pod \"nova-cell0-conductor-db-sync-rzgpn\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.729870 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-config-data\") pod \"nova-cell0-conductor-db-sync-rzgpn\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.729925 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rzgpn\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.833140 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-scripts\") pod \"nova-cell0-conductor-db-sync-rzgpn\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.833200 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-config-data\") pod \"nova-cell0-conductor-db-sync-rzgpn\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.833241 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rzgpn\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.833297 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4hm6\" (UniqueName: \"kubernetes.io/projected/2e901a54-c442-45fd-a0d8-1568f850efb4-kube-api-access-w4hm6\") pod \"nova-cell0-conductor-db-sync-rzgpn\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.837018 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-scripts\") pod \"nova-cell0-conductor-db-sync-rzgpn\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.837251 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rzgpn\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.843534 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-config-data\") pod \"nova-cell0-conductor-db-sync-rzgpn\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.855494 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4hm6\" (UniqueName: \"kubernetes.io/projected/2e901a54-c442-45fd-a0d8-1568f850efb4-kube-api-access-w4hm6\") pod \"nova-cell0-conductor-db-sync-rzgpn\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.998214 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:31 crc kubenswrapper[5136]: I0320 07:13:31.191575 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be945390-82b0-4512-8028-a0207cd7796b","Type":"ContainerStarted","Data":"73dd53cb555a4926da6900afb36bb4071de4ca4b83ec9f1d5c6fe4f78cea9554"} Mar 20 07:13:31 crc kubenswrapper[5136]: I0320 07:13:31.192518 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 20 07:13:31 crc kubenswrapper[5136]: I0320 07:13:31.207280 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe20adf9-d6e2-4487-a176-32ddd55eb051","Type":"ContainerStarted","Data":"ddf75c942dcbf834dfd88d5bc8a1e8a0fa00deb6223f84700bb4c75bb0cce612"} Mar 20 07:13:31 crc kubenswrapper[5136]: I0320 07:13:31.207329 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe20adf9-d6e2-4487-a176-32ddd55eb051","Type":"ContainerStarted","Data":"9a85481c71cfaf5395cd3f9b7fc38785dc4f071aee32ec8ab4a9c0c94e256ebb"} Mar 20 07:13:31 crc kubenswrapper[5136]: W0320 07:13:31.220893 5136 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod141e5942_2bf9_424c_a6a7_7c93afdad7dc.slice/crio-05a82ec3ba1c0f76f4e62724ce02ba143154e57bc0f0c3ee005d1f4b00278ffd WatchSource:0}: Error finding container 05a82ec3ba1c0f76f4e62724ce02ba143154e57bc0f0c3ee005d1f4b00278ffd: Status 404 returned error can't find the container with id 05a82ec3ba1c0f76f4e62724ce02ba143154e57bc0f0c3ee005d1f4b00278ffd Mar 20 07:13:31 crc kubenswrapper[5136]: I0320 07:13:31.227329 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:13:31 crc kubenswrapper[5136]: I0320 07:13:31.228588 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.266481444 podStartE2EDuration="5.228573095s" podCreationTimestamp="2026-03-20 07:13:26 +0000 UTC" firstStartedPulling="2026-03-20 07:13:26.949037385 +0000 UTC m=+1439.208348546" lastFinishedPulling="2026-03-20 07:13:30.911129046 +0000 UTC m=+1443.170440197" observedRunningTime="2026-03-20 07:13:31.214786856 +0000 UTC m=+1443.474098007" watchObservedRunningTime="2026-03-20 07:13:31.228573095 +0000 UTC m=+1443.487884246" Mar 20 07:13:31 crc kubenswrapper[5136]: I0320 07:13:31.444409 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:31 crc kubenswrapper[5136]: I0320 07:13:31.487985 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rzgpn"] Mar 20 07:13:31 crc kubenswrapper[5136]: W0320 07:13:31.508269 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e901a54_c442_45fd_a0d8_1568f850efb4.slice/crio-811d4370e886741548fd5e60fd7b20836471ef1cbd1b71ea5b6fdfdd4634e652 WatchSource:0}: Error finding container 811d4370e886741548fd5e60fd7b20836471ef1cbd1b71ea5b6fdfdd4634e652: Status 404 
returned error can't find the container with id 811d4370e886741548fd5e60fd7b20836471ef1cbd1b71ea5b6fdfdd4634e652 Mar 20 07:13:32 crc kubenswrapper[5136]: I0320 07:13:32.223300 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"141e5942-2bf9-424c-a6a7-7c93afdad7dc","Type":"ContainerStarted","Data":"662932f3f7c10a1e5293dce36be1a7b6f6fe4dc40e2e2d324b7c68188c034162"} Mar 20 07:13:32 crc kubenswrapper[5136]: I0320 07:13:32.223997 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"141e5942-2bf9-424c-a6a7-7c93afdad7dc","Type":"ContainerStarted","Data":"05a82ec3ba1c0f76f4e62724ce02ba143154e57bc0f0c3ee005d1f4b00278ffd"} Mar 20 07:13:32 crc kubenswrapper[5136]: I0320 07:13:32.226373 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rzgpn" event={"ID":"2e901a54-c442-45fd-a0d8-1568f850efb4","Type":"ContainerStarted","Data":"811d4370e886741548fd5e60fd7b20836471ef1cbd1b71ea5b6fdfdd4634e652"} Mar 20 07:13:32 crc kubenswrapper[5136]: I0320 07:13:32.231492 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe20adf9-d6e2-4487-a176-32ddd55eb051","Type":"ContainerStarted","Data":"08811e57d5ae08f29bf6ac8aa7f95e929dc4a7c310d13f900fb2c645979418d6"} Mar 20 07:13:32 crc kubenswrapper[5136]: I0320 07:13:32.269068 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.269047884 podStartE2EDuration="3.269047884s" podCreationTimestamp="2026-03-20 07:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:13:32.249224507 +0000 UTC m=+1444.508535658" watchObservedRunningTime="2026-03-20 07:13:32.269047884 +0000 UTC m=+1444.528359055" Mar 20 07:13:33 crc kubenswrapper[5136]: I0320 
07:13:33.245511 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"141e5942-2bf9-424c-a6a7-7c93afdad7dc","Type":"ContainerStarted","Data":"3cd1e6e7f78367e01aa376387fc42404408757a25929ca8c639774857d99ccfd"} Mar 20 07:13:33 crc kubenswrapper[5136]: I0320 07:13:33.246490 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="ceilometer-central-agent" containerID="cri-o://b5b1edae1009c5322668a89756ee1e94a710a5cf94e54c2867151f60883078bc" gracePeriod=30 Mar 20 07:13:33 crc kubenswrapper[5136]: I0320 07:13:33.246601 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="proxy-httpd" containerID="cri-o://73dd53cb555a4926da6900afb36bb4071de4ca4b83ec9f1d5c6fe4f78cea9554" gracePeriod=30 Mar 20 07:13:33 crc kubenswrapper[5136]: I0320 07:13:33.246648 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="sg-core" containerID="cri-o://4dd1fda5ac18c23a32359ccddfe66ce995d3d62967fe3ba4801e95a4970c56fb" gracePeriod=30 Mar 20 07:13:33 crc kubenswrapper[5136]: I0320 07:13:33.246689 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="ceilometer-notification-agent" containerID="cri-o://34fa52e053cd85fb4fd84973fd16df7f7008a21e7c7592737d31a09951fc533c" gracePeriod=30 Mar 20 07:13:33 crc kubenswrapper[5136]: I0320 07:13:33.285550 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.285534187 podStartE2EDuration="3.285534187s" podCreationTimestamp="2026-03-20 07:13:30 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:13:33.271327555 +0000 UTC m=+1445.530638706" watchObservedRunningTime="2026-03-20 07:13:33.285534187 +0000 UTC m=+1445.544845328" Mar 20 07:13:34 crc kubenswrapper[5136]: I0320 07:13:34.255937 5136 generic.go:334] "Generic (PLEG): container finished" podID="be945390-82b0-4512-8028-a0207cd7796b" containerID="73dd53cb555a4926da6900afb36bb4071de4ca4b83ec9f1d5c6fe4f78cea9554" exitCode=0 Mar 20 07:13:34 crc kubenswrapper[5136]: I0320 07:13:34.255972 5136 generic.go:334] "Generic (PLEG): container finished" podID="be945390-82b0-4512-8028-a0207cd7796b" containerID="4dd1fda5ac18c23a32359ccddfe66ce995d3d62967fe3ba4801e95a4970c56fb" exitCode=2 Mar 20 07:13:34 crc kubenswrapper[5136]: I0320 07:13:34.255982 5136 generic.go:334] "Generic (PLEG): container finished" podID="be945390-82b0-4512-8028-a0207cd7796b" containerID="34fa52e053cd85fb4fd84973fd16df7f7008a21e7c7592737d31a09951fc533c" exitCode=0 Mar 20 07:13:34 crc kubenswrapper[5136]: I0320 07:13:34.256175 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be945390-82b0-4512-8028-a0207cd7796b","Type":"ContainerDied","Data":"73dd53cb555a4926da6900afb36bb4071de4ca4b83ec9f1d5c6fe4f78cea9554"} Mar 20 07:13:34 crc kubenswrapper[5136]: I0320 07:13:34.256214 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be945390-82b0-4512-8028-a0207cd7796b","Type":"ContainerDied","Data":"4dd1fda5ac18c23a32359ccddfe66ce995d3d62967fe3ba4801e95a4970c56fb"} Mar 20 07:13:34 crc kubenswrapper[5136]: I0320 07:13:34.256228 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be945390-82b0-4512-8028-a0207cd7796b","Type":"ContainerDied","Data":"34fa52e053cd85fb4fd84973fd16df7f7008a21e7c7592737d31a09951fc533c"} Mar 20 07:13:37 crc kubenswrapper[5136]: I0320 07:13:37.295043 5136 
generic.go:334] "Generic (PLEG): container finished" podID="be945390-82b0-4512-8028-a0207cd7796b" containerID="b5b1edae1009c5322668a89756ee1e94a710a5cf94e54c2867151f60883078bc" exitCode=0 Mar 20 07:13:37 crc kubenswrapper[5136]: I0320 07:13:37.295101 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be945390-82b0-4512-8028-a0207cd7796b","Type":"ContainerDied","Data":"b5b1edae1009c5322668a89756ee1e94a710a5cf94e54c2867151f60883078bc"} Mar 20 07:13:38 crc kubenswrapper[5136]: I0320 07:13:38.941174 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:38 crc kubenswrapper[5136]: I0320 07:13:38.996310 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-combined-ca-bundle\") pod \"be945390-82b0-4512-8028-a0207cd7796b\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " Mar 20 07:13:38 crc kubenswrapper[5136]: I0320 07:13:38.996359 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjcz6\" (UniqueName: \"kubernetes.io/projected/be945390-82b0-4512-8028-a0207cd7796b-kube-api-access-xjcz6\") pod \"be945390-82b0-4512-8028-a0207cd7796b\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " Mar 20 07:13:38 crc kubenswrapper[5136]: I0320 07:13:38.996394 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-scripts\") pod \"be945390-82b0-4512-8028-a0207cd7796b\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " Mar 20 07:13:38 crc kubenswrapper[5136]: I0320 07:13:38.996415 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be945390-82b0-4512-8028-a0207cd7796b-log-httpd\") pod 
\"be945390-82b0-4512-8028-a0207cd7796b\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " Mar 20 07:13:38 crc kubenswrapper[5136]: I0320 07:13:38.996464 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be945390-82b0-4512-8028-a0207cd7796b-run-httpd\") pod \"be945390-82b0-4512-8028-a0207cd7796b\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " Mar 20 07:13:38 crc kubenswrapper[5136]: I0320 07:13:38.996487 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-sg-core-conf-yaml\") pod \"be945390-82b0-4512-8028-a0207cd7796b\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " Mar 20 07:13:38 crc kubenswrapper[5136]: I0320 07:13:38.997085 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-config-data\") pod \"be945390-82b0-4512-8028-a0207cd7796b\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " Mar 20 07:13:38 crc kubenswrapper[5136]: I0320 07:13:38.997112 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be945390-82b0-4512-8028-a0207cd7796b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "be945390-82b0-4512-8028-a0207cd7796b" (UID: "be945390-82b0-4512-8028-a0207cd7796b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:13:38 crc kubenswrapper[5136]: I0320 07:13:38.997137 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be945390-82b0-4512-8028-a0207cd7796b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "be945390-82b0-4512-8028-a0207cd7796b" (UID: "be945390-82b0-4512-8028-a0207cd7796b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:13:38 crc kubenswrapper[5136]: I0320 07:13:38.997434 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be945390-82b0-4512-8028-a0207cd7796b-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:38 crc kubenswrapper[5136]: I0320 07:13:38.997456 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be945390-82b0-4512-8028-a0207cd7796b-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.002237 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-scripts" (OuterVolumeSpecName: "scripts") pod "be945390-82b0-4512-8028-a0207cd7796b" (UID: "be945390-82b0-4512-8028-a0207cd7796b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.004943 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be945390-82b0-4512-8028-a0207cd7796b-kube-api-access-xjcz6" (OuterVolumeSpecName: "kube-api-access-xjcz6") pod "be945390-82b0-4512-8028-a0207cd7796b" (UID: "be945390-82b0-4512-8028-a0207cd7796b"). InnerVolumeSpecName "kube-api-access-xjcz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.037057 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "be945390-82b0-4512-8028-a0207cd7796b" (UID: "be945390-82b0-4512-8028-a0207cd7796b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.069022 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be945390-82b0-4512-8028-a0207cd7796b" (UID: "be945390-82b0-4512-8028-a0207cd7796b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.088906 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-config-data" (OuterVolumeSpecName: "config-data") pod "be945390-82b0-4512-8028-a0207cd7796b" (UID: "be945390-82b0-4512-8028-a0207cd7796b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.098218 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.098250 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.098263 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjcz6\" (UniqueName: \"kubernetes.io/projected/be945390-82b0-4512-8028-a0207cd7796b-kube-api-access-xjcz6\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.098271 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 
07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.098279 5136 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.315653 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be945390-82b0-4512-8028-a0207cd7796b","Type":"ContainerDied","Data":"5cb6cc418050d60bffd563fe7ea3892f2ad06634082325336f1fe95bdace7d2b"} Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.316042 5136 scope.go:117] "RemoveContainer" containerID="73dd53cb555a4926da6900afb36bb4071de4ca4b83ec9f1d5c6fe4f78cea9554" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.315928 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.317831 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rzgpn" event={"ID":"2e901a54-c442-45fd-a0d8-1568f850efb4","Type":"ContainerStarted","Data":"a6ef812d133f600bf2d930bb51a86bd9525704f16b2a680bf46ad2719737b44f"} Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.337103 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-rzgpn" podStartSLOduration=2.184762052 podStartE2EDuration="9.337087932s" podCreationTimestamp="2026-03-20 07:13:30 +0000 UTC" firstStartedPulling="2026-03-20 07:13:31.511242932 +0000 UTC m=+1443.770554083" lastFinishedPulling="2026-03-20 07:13:38.663568802 +0000 UTC m=+1450.922879963" observedRunningTime="2026-03-20 07:13:39.332451468 +0000 UTC m=+1451.591762639" watchObservedRunningTime="2026-03-20 07:13:39.337087932 +0000 UTC m=+1451.596399073" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.344464 5136 scope.go:117] "RemoveContainer" 
containerID="4dd1fda5ac18c23a32359ccddfe66ce995d3d62967fe3ba4801e95a4970c56fb" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.357199 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.373443 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.383012 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:39 crc kubenswrapper[5136]: E0320 07:13:39.383352 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="ceilometer-central-agent" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.383369 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="ceilometer-central-agent" Mar 20 07:13:39 crc kubenswrapper[5136]: E0320 07:13:39.383379 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="proxy-httpd" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.383386 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="proxy-httpd" Mar 20 07:13:39 crc kubenswrapper[5136]: E0320 07:13:39.383403 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="sg-core" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.383409 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="sg-core" Mar 20 07:13:39 crc kubenswrapper[5136]: E0320 07:13:39.383418 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="ceilometer-notification-agent" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.383425 5136 
state_mem.go:107] "Deleted CPUSet assignment" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="ceilometer-notification-agent" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.383578 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="ceilometer-notification-agent" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.383594 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="proxy-httpd" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.383609 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="sg-core" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.383623 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="ceilometer-central-agent" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.384026 5136 scope.go:117] "RemoveContainer" containerID="34fa52e053cd85fb4fd84973fd16df7f7008a21e7c7592737d31a09951fc533c" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.385217 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.387431 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.388371 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.399727 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.439931 5136 scope.go:117] "RemoveContainer" containerID="b5b1edae1009c5322668a89756ee1e94a710a5cf94e54c2867151f60883078bc" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.529360 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfnb4\" (UniqueName: \"kubernetes.io/projected/e8f028e7-076a-4fa6-93de-08842dc040f8-kube-api-access-lfnb4\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.529430 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.529461 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-config-data\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.529505 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8f028e7-076a-4fa6-93de-08842dc040f8-log-httpd\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.529534 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.529591 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8f028e7-076a-4fa6-93de-08842dc040f8-run-httpd\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.529615 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-scripts\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.611700 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.611844 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.632087 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.632238 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8f028e7-076a-4fa6-93de-08842dc040f8-run-httpd\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.632279 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-scripts\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.632387 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfnb4\" (UniqueName: \"kubernetes.io/projected/e8f028e7-076a-4fa6-93de-08842dc040f8-kube-api-access-lfnb4\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.632454 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.632497 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-config-data\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.632569 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/e8f028e7-076a-4fa6-93de-08842dc040f8-log-httpd\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.632745 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8f028e7-076a-4fa6-93de-08842dc040f8-run-httpd\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.633392 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8f028e7-076a-4fa6-93de-08842dc040f8-log-httpd\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.637472 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.637695 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-scripts\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.638284 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.651327 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-config-data\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.651494 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfnb4\" (UniqueName: \"kubernetes.io/projected/e8f028e7-076a-4fa6-93de-08842dc040f8-kube-api-access-lfnb4\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.652214 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.684031 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.723180 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:40 crc kubenswrapper[5136]: W0320 07:13:40.151879 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8f028e7_076a_4fa6_93de_08842dc040f8.slice/crio-8d5e25f452146f28927e0e3f67a50de6aecd3be4f4695a36c4794d1a9ea5eb5c WatchSource:0}: Error finding container 8d5e25f452146f28927e0e3f67a50de6aecd3be4f4695a36c4794d1a9ea5eb5c: Status 404 returned error can't find the container with id 8d5e25f452146f28927e0e3f67a50de6aecd3be4f4695a36c4794d1a9ea5eb5c Mar 20 07:13:40 crc kubenswrapper[5136]: I0320 07:13:40.154027 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:40 crc kubenswrapper[5136]: I0320 07:13:40.337065 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8f028e7-076a-4fa6-93de-08842dc040f8","Type":"ContainerStarted","Data":"8d5e25f452146f28927e0e3f67a50de6aecd3be4f4695a36c4794d1a9ea5eb5c"} Mar 20 07:13:40 crc kubenswrapper[5136]: I0320 07:13:40.337562 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 20 07:13:40 crc kubenswrapper[5136]: I0320 07:13:40.337587 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 20 07:13:40 crc kubenswrapper[5136]: I0320 07:13:40.439725 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be945390-82b0-4512-8028-a0207cd7796b" path="/var/lib/kubelet/pods/be945390-82b0-4512-8028-a0207cd7796b/volumes" Mar 20 07:13:40 crc kubenswrapper[5136]: I0320 07:13:40.574273 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 20 07:13:40 crc kubenswrapper[5136]: I0320 07:13:40.574325 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-internal-api-0" Mar 20 07:13:40 crc kubenswrapper[5136]: I0320 07:13:40.618553 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 20 07:13:40 crc kubenswrapper[5136]: I0320 07:13:40.627510 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 20 07:13:41 crc kubenswrapper[5136]: I0320 07:13:41.345987 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8f028e7-076a-4fa6-93de-08842dc040f8","Type":"ContainerStarted","Data":"c8921d468496f173a7bb0fcd353a11b75a7b181162136275554a98bf551c6fc3"} Mar 20 07:13:41 crc kubenswrapper[5136]: I0320 07:13:41.346337 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 20 07:13:41 crc kubenswrapper[5136]: I0320 07:13:41.346356 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 20 07:13:42 crc kubenswrapper[5136]: I0320 07:13:42.164681 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 20 07:13:42 crc kubenswrapper[5136]: I0320 07:13:42.167329 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 20 07:13:42 crc kubenswrapper[5136]: I0320 07:13:42.357584 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8f028e7-076a-4fa6-93de-08842dc040f8","Type":"ContainerStarted","Data":"42a06b7055535d82c161289659e0975245d6eec36eff09198b591ec8baf0e82c"} Mar 20 07:13:42 crc kubenswrapper[5136]: I0320 07:13:42.357619 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e8f028e7-076a-4fa6-93de-08842dc040f8","Type":"ContainerStarted","Data":"4519c9057a23038db9b83cd0c4880e8ee94bfc46921f3913b6b6b71f7b6348a6"} Mar 20 07:13:43 crc kubenswrapper[5136]: I0320 07:13:43.315423 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:43 crc kubenswrapper[5136]: I0320 07:13:43.410112 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 20 07:13:43 crc kubenswrapper[5136]: I0320 07:13:43.410279 5136 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 07:13:43 crc kubenswrapper[5136]: I0320 07:13:43.474568 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 20 07:13:44 crc kubenswrapper[5136]: I0320 07:13:44.374331 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="ceilometer-central-agent" containerID="cri-o://c8921d468496f173a7bb0fcd353a11b75a7b181162136275554a98bf551c6fc3" gracePeriod=30 Mar 20 07:13:44 crc kubenswrapper[5136]: I0320 07:13:44.374642 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8f028e7-076a-4fa6-93de-08842dc040f8","Type":"ContainerStarted","Data":"e50fe10da6c1532394e56e4ce3689d3fa1a887a33e4beb511b90e866d6850132"} Mar 20 07:13:44 crc kubenswrapper[5136]: I0320 07:13:44.374695 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 20 07:13:44 crc kubenswrapper[5136]: I0320 07:13:44.374723 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="proxy-httpd" containerID="cri-o://e50fe10da6c1532394e56e4ce3689d3fa1a887a33e4beb511b90e866d6850132" gracePeriod=30 Mar 20 07:13:44 crc 
kubenswrapper[5136]: I0320 07:13:44.374791 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="sg-core" containerID="cri-o://42a06b7055535d82c161289659e0975245d6eec36eff09198b591ec8baf0e82c" gracePeriod=30 Mar 20 07:13:44 crc kubenswrapper[5136]: I0320 07:13:44.374842 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="ceilometer-notification-agent" containerID="cri-o://4519c9057a23038db9b83cd0c4880e8ee94bfc46921f3913b6b6b71f7b6348a6" gracePeriod=30 Mar 20 07:13:44 crc kubenswrapper[5136]: I0320 07:13:44.408655 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.991644474 podStartE2EDuration="5.40864064s" podCreationTimestamp="2026-03-20 07:13:39 +0000 UTC" firstStartedPulling="2026-03-20 07:13:40.15576813 +0000 UTC m=+1452.415079281" lastFinishedPulling="2026-03-20 07:13:43.572764306 +0000 UTC m=+1455.832075447" observedRunningTime="2026-03-20 07:13:44.40450185 +0000 UTC m=+1456.663812991" watchObservedRunningTime="2026-03-20 07:13:44.40864064 +0000 UTC m=+1456.667951781" Mar 20 07:13:45 crc kubenswrapper[5136]: I0320 07:13:45.385908 5136 generic.go:334] "Generic (PLEG): container finished" podID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerID="e50fe10da6c1532394e56e4ce3689d3fa1a887a33e4beb511b90e866d6850132" exitCode=0 Mar 20 07:13:45 crc kubenswrapper[5136]: I0320 07:13:45.386224 5136 generic.go:334] "Generic (PLEG): container finished" podID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerID="42a06b7055535d82c161289659e0975245d6eec36eff09198b591ec8baf0e82c" exitCode=2 Mar 20 07:13:45 crc kubenswrapper[5136]: I0320 07:13:45.386234 5136 generic.go:334] "Generic (PLEG): container finished" podID="e8f028e7-076a-4fa6-93de-08842dc040f8" 
containerID="4519c9057a23038db9b83cd0c4880e8ee94bfc46921f3913b6b6b71f7b6348a6" exitCode=0 Mar 20 07:13:45 crc kubenswrapper[5136]: I0320 07:13:45.385941 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8f028e7-076a-4fa6-93de-08842dc040f8","Type":"ContainerDied","Data":"e50fe10da6c1532394e56e4ce3689d3fa1a887a33e4beb511b90e866d6850132"} Mar 20 07:13:45 crc kubenswrapper[5136]: I0320 07:13:45.386263 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8f028e7-076a-4fa6-93de-08842dc040f8","Type":"ContainerDied","Data":"42a06b7055535d82c161289659e0975245d6eec36eff09198b591ec8baf0e82c"} Mar 20 07:13:45 crc kubenswrapper[5136]: I0320 07:13:45.386273 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8f028e7-076a-4fa6-93de-08842dc040f8","Type":"ContainerDied","Data":"4519c9057a23038db9b83cd0c4880e8ee94bfc46921f3913b6b6b71f7b6348a6"} Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.080474 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.199952 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-config-data\") pod \"e8f028e7-076a-4fa6-93de-08842dc040f8\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.200577 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8f028e7-076a-4fa6-93de-08842dc040f8-run-httpd\") pod \"e8f028e7-076a-4fa6-93de-08842dc040f8\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.200592 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-sg-core-conf-yaml\") pod \"e8f028e7-076a-4fa6-93de-08842dc040f8\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.200630 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-combined-ca-bundle\") pod \"e8f028e7-076a-4fa6-93de-08842dc040f8\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.200669 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfnb4\" (UniqueName: \"kubernetes.io/projected/e8f028e7-076a-4fa6-93de-08842dc040f8-kube-api-access-lfnb4\") pod \"e8f028e7-076a-4fa6-93de-08842dc040f8\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.200704 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-scripts\") pod \"e8f028e7-076a-4fa6-93de-08842dc040f8\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.200767 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8f028e7-076a-4fa6-93de-08842dc040f8-log-httpd\") pod \"e8f028e7-076a-4fa6-93de-08842dc040f8\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.201522 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8f028e7-076a-4fa6-93de-08842dc040f8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e8f028e7-076a-4fa6-93de-08842dc040f8" (UID: "e8f028e7-076a-4fa6-93de-08842dc040f8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.202136 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8f028e7-076a-4fa6-93de-08842dc040f8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e8f028e7-076a-4fa6-93de-08842dc040f8" (UID: "e8f028e7-076a-4fa6-93de-08842dc040f8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.205883 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8f028e7-076a-4fa6-93de-08842dc040f8-kube-api-access-lfnb4" (OuterVolumeSpecName: "kube-api-access-lfnb4") pod "e8f028e7-076a-4fa6-93de-08842dc040f8" (UID: "e8f028e7-076a-4fa6-93de-08842dc040f8"). InnerVolumeSpecName "kube-api-access-lfnb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.207721 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-scripts" (OuterVolumeSpecName: "scripts") pod "e8f028e7-076a-4fa6-93de-08842dc040f8" (UID: "e8f028e7-076a-4fa6-93de-08842dc040f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.243344 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e8f028e7-076a-4fa6-93de-08842dc040f8" (UID: "e8f028e7-076a-4fa6-93de-08842dc040f8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.276613 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8f028e7-076a-4fa6-93de-08842dc040f8" (UID: "e8f028e7-076a-4fa6-93de-08842dc040f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.295786 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-config-data" (OuterVolumeSpecName: "config-data") pod "e8f028e7-076a-4fa6-93de-08842dc040f8" (UID: "e8f028e7-076a-4fa6-93de-08842dc040f8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.303302 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.303337 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8f028e7-076a-4fa6-93de-08842dc040f8-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.303346 5136 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.303356 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.303365 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfnb4\" (UniqueName: \"kubernetes.io/projected/e8f028e7-076a-4fa6-93de-08842dc040f8-kube-api-access-lfnb4\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.303373 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.303384 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8f028e7-076a-4fa6-93de-08842dc040f8-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.406937 5136 generic.go:334] "Generic 
(PLEG): container finished" podID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerID="c8921d468496f173a7bb0fcd353a11b75a7b181162136275554a98bf551c6fc3" exitCode=0 Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.406987 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8f028e7-076a-4fa6-93de-08842dc040f8","Type":"ContainerDied","Data":"c8921d468496f173a7bb0fcd353a11b75a7b181162136275554a98bf551c6fc3"} Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.407048 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.407070 5136 scope.go:117] "RemoveContainer" containerID="e50fe10da6c1532394e56e4ce3689d3fa1a887a33e4beb511b90e866d6850132" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.407054 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8f028e7-076a-4fa6-93de-08842dc040f8","Type":"ContainerDied","Data":"8d5e25f452146f28927e0e3f67a50de6aecd3be4f4695a36c4794d1a9ea5eb5c"} Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.439466 5136 scope.go:117] "RemoveContainer" containerID="42a06b7055535d82c161289659e0975245d6eec36eff09198b591ec8baf0e82c" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.452203 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.460263 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.462601 5136 scope.go:117] "RemoveContainer" containerID="4519c9057a23038db9b83cd0c4880e8ee94bfc46921f3913b6b6b71f7b6348a6" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.473649 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:47 crc kubenswrapper[5136]: E0320 07:13:47.474042 5136 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="proxy-httpd" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.474055 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="proxy-httpd" Mar 20 07:13:47 crc kubenswrapper[5136]: E0320 07:13:47.474069 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="ceilometer-notification-agent" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.474076 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="ceilometer-notification-agent" Mar 20 07:13:47 crc kubenswrapper[5136]: E0320 07:13:47.474084 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="sg-core" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.474090 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="sg-core" Mar 20 07:13:47 crc kubenswrapper[5136]: E0320 07:13:47.474102 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="ceilometer-central-agent" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.474109 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="ceilometer-central-agent" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.474278 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="ceilometer-notification-agent" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.474294 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="ceilometer-central-agent" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 
07:13:47.474310 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="proxy-httpd" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.474319 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="sg-core" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.475777 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.477781 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.478197 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.482689 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.507389 5136 scope.go:117] "RemoveContainer" containerID="c8921d468496f173a7bb0fcd353a11b75a7b181162136275554a98bf551c6fc3" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.527248 5136 scope.go:117] "RemoveContainer" containerID="e50fe10da6c1532394e56e4ce3689d3fa1a887a33e4beb511b90e866d6850132" Mar 20 07:13:47 crc kubenswrapper[5136]: E0320 07:13:47.528944 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e50fe10da6c1532394e56e4ce3689d3fa1a887a33e4beb511b90e866d6850132\": container with ID starting with e50fe10da6c1532394e56e4ce3689d3fa1a887a33e4beb511b90e866d6850132 not found: ID does not exist" containerID="e50fe10da6c1532394e56e4ce3689d3fa1a887a33e4beb511b90e866d6850132" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.529077 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e50fe10da6c1532394e56e4ce3689d3fa1a887a33e4beb511b90e866d6850132"} err="failed to get container status \"e50fe10da6c1532394e56e4ce3689d3fa1a887a33e4beb511b90e866d6850132\": rpc error: code = NotFound desc = could not find container \"e50fe10da6c1532394e56e4ce3689d3fa1a887a33e4beb511b90e866d6850132\": container with ID starting with e50fe10da6c1532394e56e4ce3689d3fa1a887a33e4beb511b90e866d6850132 not found: ID does not exist" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.529177 5136 scope.go:117] "RemoveContainer" containerID="42a06b7055535d82c161289659e0975245d6eec36eff09198b591ec8baf0e82c" Mar 20 07:13:47 crc kubenswrapper[5136]: E0320 07:13:47.529707 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42a06b7055535d82c161289659e0975245d6eec36eff09198b591ec8baf0e82c\": container with ID starting with 42a06b7055535d82c161289659e0975245d6eec36eff09198b591ec8baf0e82c not found: ID does not exist" containerID="42a06b7055535d82c161289659e0975245d6eec36eff09198b591ec8baf0e82c" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.529738 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42a06b7055535d82c161289659e0975245d6eec36eff09198b591ec8baf0e82c"} err="failed to get container status \"42a06b7055535d82c161289659e0975245d6eec36eff09198b591ec8baf0e82c\": rpc error: code = NotFound desc = could not find container \"42a06b7055535d82c161289659e0975245d6eec36eff09198b591ec8baf0e82c\": container with ID starting with 42a06b7055535d82c161289659e0975245d6eec36eff09198b591ec8baf0e82c not found: ID does not exist" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.529757 5136 scope.go:117] "RemoveContainer" containerID="4519c9057a23038db9b83cd0c4880e8ee94bfc46921f3913b6b6b71f7b6348a6" Mar 20 07:13:47 crc kubenswrapper[5136]: E0320 07:13:47.530067 5136 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4519c9057a23038db9b83cd0c4880e8ee94bfc46921f3913b6b6b71f7b6348a6\": container with ID starting with 4519c9057a23038db9b83cd0c4880e8ee94bfc46921f3913b6b6b71f7b6348a6 not found: ID does not exist" containerID="4519c9057a23038db9b83cd0c4880e8ee94bfc46921f3913b6b6b71f7b6348a6" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.530094 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4519c9057a23038db9b83cd0c4880e8ee94bfc46921f3913b6b6b71f7b6348a6"} err="failed to get container status \"4519c9057a23038db9b83cd0c4880e8ee94bfc46921f3913b6b6b71f7b6348a6\": rpc error: code = NotFound desc = could not find container \"4519c9057a23038db9b83cd0c4880e8ee94bfc46921f3913b6b6b71f7b6348a6\": container with ID starting with 4519c9057a23038db9b83cd0c4880e8ee94bfc46921f3913b6b6b71f7b6348a6 not found: ID does not exist" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.530110 5136 scope.go:117] "RemoveContainer" containerID="c8921d468496f173a7bb0fcd353a11b75a7b181162136275554a98bf551c6fc3" Mar 20 07:13:47 crc kubenswrapper[5136]: E0320 07:13:47.530423 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8921d468496f173a7bb0fcd353a11b75a7b181162136275554a98bf551c6fc3\": container with ID starting with c8921d468496f173a7bb0fcd353a11b75a7b181162136275554a98bf551c6fc3 not found: ID does not exist" containerID="c8921d468496f173a7bb0fcd353a11b75a7b181162136275554a98bf551c6fc3" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.530464 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8921d468496f173a7bb0fcd353a11b75a7b181162136275554a98bf551c6fc3"} err="failed to get container status \"c8921d468496f173a7bb0fcd353a11b75a7b181162136275554a98bf551c6fc3\": rpc error: code = NotFound desc = could not find container 
\"c8921d468496f173a7bb0fcd353a11b75a7b181162136275554a98bf551c6fc3\": container with ID starting with c8921d468496f173a7bb0fcd353a11b75a7b181162136275554a98bf551c6fc3 not found: ID does not exist" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.607319 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.607442 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59dlm\" (UniqueName: \"kubernetes.io/projected/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-kube-api-access-59dlm\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.607514 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.607539 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-log-httpd\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.607570 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-config-data\") pod 
\"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.607594 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-scripts\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.607664 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-run-httpd\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.708953 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59dlm\" (UniqueName: \"kubernetes.io/projected/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-kube-api-access-59dlm\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.709334 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.709512 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-log-httpd\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.709751 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-config-data\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.709930 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-scripts\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.710166 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-run-httpd\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.710360 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.710390 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-log-httpd\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.711081 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-run-httpd\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 
crc kubenswrapper[5136]: I0320 07:13:47.715326 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.716893 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.717295 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-config-data\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.719806 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-scripts\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.731035 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59dlm\" (UniqueName: \"kubernetes.io/projected/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-kube-api-access-59dlm\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.793862 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:48 crc kubenswrapper[5136]: W0320 07:13:48.283580 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b351b6a_5365_40cf_9d42_c6d4df7cc48b.slice/crio-27ead29dd635d8c64178622abcb2bb11b72e1cbf565e64de2bb239a0882424b4 WatchSource:0}: Error finding container 27ead29dd635d8c64178622abcb2bb11b72e1cbf565e64de2bb239a0882424b4: Status 404 returned error can't find the container with id 27ead29dd635d8c64178622abcb2bb11b72e1cbf565e64de2bb239a0882424b4 Mar 20 07:13:48 crc kubenswrapper[5136]: I0320 07:13:48.286624 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:48 crc kubenswrapper[5136]: I0320 07:13:48.415633 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" path="/var/lib/kubelet/pods/e8f028e7-076a-4fa6-93de-08842dc040f8/volumes" Mar 20 07:13:48 crc kubenswrapper[5136]: I0320 07:13:48.422102 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b351b6a-5365-40cf-9d42-c6d4df7cc48b","Type":"ContainerStarted","Data":"27ead29dd635d8c64178622abcb2bb11b72e1cbf565e64de2bb239a0882424b4"} Mar 20 07:13:49 crc kubenswrapper[5136]: I0320 07:13:49.431517 5136 generic.go:334] "Generic (PLEG): container finished" podID="2e901a54-c442-45fd-a0d8-1568f850efb4" containerID="a6ef812d133f600bf2d930bb51a86bd9525704f16b2a680bf46ad2719737b44f" exitCode=0 Mar 20 07:13:49 crc kubenswrapper[5136]: I0320 07:13:49.431570 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rzgpn" event={"ID":"2e901a54-c442-45fd-a0d8-1568f850efb4","Type":"ContainerDied","Data":"a6ef812d133f600bf2d930bb51a86bd9525704f16b2a680bf46ad2719737b44f"} Mar 20 07:13:49 crc kubenswrapper[5136]: I0320 07:13:49.434046 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"0b351b6a-5365-40cf-9d42-c6d4df7cc48b","Type":"ContainerStarted","Data":"5d01a5abb152de62e0d8931c67065913266b5a9c778d5615edb02e3a5ab4b96b"} Mar 20 07:13:50 crc kubenswrapper[5136]: I0320 07:13:50.920500 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.093389 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-config-data\") pod \"2e901a54-c442-45fd-a0d8-1568f850efb4\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.093527 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-scripts\") pod \"2e901a54-c442-45fd-a0d8-1568f850efb4\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.093566 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-combined-ca-bundle\") pod \"2e901a54-c442-45fd-a0d8-1568f850efb4\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.093608 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4hm6\" (UniqueName: \"kubernetes.io/projected/2e901a54-c442-45fd-a0d8-1568f850efb4-kube-api-access-w4hm6\") pod \"2e901a54-c442-45fd-a0d8-1568f850efb4\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.106602 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-scripts" 
(OuterVolumeSpecName: "scripts") pod "2e901a54-c442-45fd-a0d8-1568f850efb4" (UID: "2e901a54-c442-45fd-a0d8-1568f850efb4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.106811 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e901a54-c442-45fd-a0d8-1568f850efb4-kube-api-access-w4hm6" (OuterVolumeSpecName: "kube-api-access-w4hm6") pod "2e901a54-c442-45fd-a0d8-1568f850efb4" (UID: "2e901a54-c442-45fd-a0d8-1568f850efb4"). InnerVolumeSpecName "kube-api-access-w4hm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.121630 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-config-data" (OuterVolumeSpecName: "config-data") pod "2e901a54-c442-45fd-a0d8-1568f850efb4" (UID: "2e901a54-c442-45fd-a0d8-1568f850efb4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.121651 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e901a54-c442-45fd-a0d8-1568f850efb4" (UID: "2e901a54-c442-45fd-a0d8-1568f850efb4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.195549 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.195581 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.195611 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4hm6\" (UniqueName: \"kubernetes.io/projected/2e901a54-c442-45fd-a0d8-1568f850efb4-kube-api-access-w4hm6\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.195621 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.449728 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rzgpn" event={"ID":"2e901a54-c442-45fd-a0d8-1568f850efb4","Type":"ContainerDied","Data":"811d4370e886741548fd5e60fd7b20836471ef1cbd1b71ea5b6fdfdd4634e652"} Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.449765 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="811d4370e886741548fd5e60fd7b20836471ef1cbd1b71ea5b6fdfdd4634e652" Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.449797 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.653772 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 20 07:13:51 crc kubenswrapper[5136]: E0320 07:13:51.654191 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e901a54-c442-45fd-a0d8-1568f850efb4" containerName="nova-cell0-conductor-db-sync" Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.654213 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e901a54-c442-45fd-a0d8-1568f850efb4" containerName="nova-cell0-conductor-db-sync" Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.654404 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e901a54-c442-45fd-a0d8-1568f850efb4" containerName="nova-cell0-conductor-db-sync" Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.654987 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.657128 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-w7tkw" Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.658107 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.661440 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.803564 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38885968-65f8-45e9-8e72-7464d5e78b85-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"38885968-65f8-45e9-8e72-7464d5e78b85\") " pod="openstack/nova-cell0-conductor-0" Mar 20 07:13:51 crc kubenswrapper[5136]: 
I0320 07:13:51.803623 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38885968-65f8-45e9-8e72-7464d5e78b85-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"38885968-65f8-45e9-8e72-7464d5e78b85\") " pod="openstack/nova-cell0-conductor-0" Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.803673 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j75vn\" (UniqueName: \"kubernetes.io/projected/38885968-65f8-45e9-8e72-7464d5e78b85-kube-api-access-j75vn\") pod \"nova-cell0-conductor-0\" (UID: \"38885968-65f8-45e9-8e72-7464d5e78b85\") " pod="openstack/nova-cell0-conductor-0" Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.905565 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j75vn\" (UniqueName: \"kubernetes.io/projected/38885968-65f8-45e9-8e72-7464d5e78b85-kube-api-access-j75vn\") pod \"nova-cell0-conductor-0\" (UID: \"38885968-65f8-45e9-8e72-7464d5e78b85\") " pod="openstack/nova-cell0-conductor-0" Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.905700 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38885968-65f8-45e9-8e72-7464d5e78b85-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"38885968-65f8-45e9-8e72-7464d5e78b85\") " pod="openstack/nova-cell0-conductor-0" Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.905736 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38885968-65f8-45e9-8e72-7464d5e78b85-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"38885968-65f8-45e9-8e72-7464d5e78b85\") " pod="openstack/nova-cell0-conductor-0" Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.909857 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38885968-65f8-45e9-8e72-7464d5e78b85-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"38885968-65f8-45e9-8e72-7464d5e78b85\") " pod="openstack/nova-cell0-conductor-0" Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.910451 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38885968-65f8-45e9-8e72-7464d5e78b85-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"38885968-65f8-45e9-8e72-7464d5e78b85\") " pod="openstack/nova-cell0-conductor-0" Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.921010 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j75vn\" (UniqueName: \"kubernetes.io/projected/38885968-65f8-45e9-8e72-7464d5e78b85-kube-api-access-j75vn\") pod \"nova-cell0-conductor-0\" (UID: \"38885968-65f8-45e9-8e72-7464d5e78b85\") " pod="openstack/nova-cell0-conductor-0" Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.989410 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 20 07:13:52 crc kubenswrapper[5136]: I0320 07:13:52.406931 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 20 07:13:52 crc kubenswrapper[5136]: W0320 07:13:52.412934 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38885968_65f8_45e9_8e72_7464d5e78b85.slice/crio-1952d35f8dd4e8ea612b2d2c603f4623b8d407450e957b72f0fa46a725392225 WatchSource:0}: Error finding container 1952d35f8dd4e8ea612b2d2c603f4623b8d407450e957b72f0fa46a725392225: Status 404 returned error can't find the container with id 1952d35f8dd4e8ea612b2d2c603f4623b8d407450e957b72f0fa46a725392225 Mar 20 07:13:52 crc kubenswrapper[5136]: I0320 07:13:52.463624 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b351b6a-5365-40cf-9d42-c6d4df7cc48b","Type":"ContainerStarted","Data":"164a94fdc0690960438259e63c68f3f86049df33d0547a7f8b5e7c0c4d905fc0"} Mar 20 07:13:52 crc kubenswrapper[5136]: I0320 07:13:52.467557 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"38885968-65f8-45e9-8e72-7464d5e78b85","Type":"ContainerStarted","Data":"1952d35f8dd4e8ea612b2d2c603f4623b8d407450e957b72f0fa46a725392225"} Mar 20 07:13:53 crc kubenswrapper[5136]: I0320 07:13:53.477752 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b351b6a-5365-40cf-9d42-c6d4df7cc48b","Type":"ContainerStarted","Data":"066edc85102c6a313bf602662dbadf893aa0185134d0512458d22c082f612cff"} Mar 20 07:13:53 crc kubenswrapper[5136]: I0320 07:13:53.479663 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"38885968-65f8-45e9-8e72-7464d5e78b85","Type":"ContainerStarted","Data":"9499f021f44b9cfb2d1d15e04c8ee300bce97edc1f58a0cfe173305e3c3b97ff"} Mar 20 07:13:53 crc 
kubenswrapper[5136]: I0320 07:13:53.479820 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 20 07:13:54 crc kubenswrapper[5136]: I0320 07:13:54.495318 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b351b6a-5365-40cf-9d42-c6d4df7cc48b","Type":"ContainerStarted","Data":"b062329702f586d3a9a766c56f5cd3aa2ccaa12494e354f1edbba28dcc296614"} Mar 20 07:13:54 crc kubenswrapper[5136]: I0320 07:13:54.495710 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 20 07:13:54 crc kubenswrapper[5136]: I0320 07:13:54.530073 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=3.530048728 podStartE2EDuration="3.530048728s" podCreationTimestamp="2026-03-20 07:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:13:53.494642686 +0000 UTC m=+1465.753953877" watchObservedRunningTime="2026-03-20 07:13:54.530048728 +0000 UTC m=+1466.789359889" Mar 20 07:13:54 crc kubenswrapper[5136]: I0320 07:13:54.536896 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.577621578 podStartE2EDuration="7.53687763s" podCreationTimestamp="2026-03-20 07:13:47 +0000 UTC" firstStartedPulling="2026-03-20 07:13:48.285671683 +0000 UTC m=+1460.544982834" lastFinishedPulling="2026-03-20 07:13:54.244927715 +0000 UTC m=+1466.504238886" observedRunningTime="2026-03-20 07:13:54.521273855 +0000 UTC m=+1466.780584996" watchObservedRunningTime="2026-03-20 07:13:54.53687763 +0000 UTC m=+1466.796188791" Mar 20 07:14:00 crc kubenswrapper[5136]: I0320 07:14:00.135893 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566514-w8pvt"] Mar 20 07:14:00 crc kubenswrapper[5136]: 
I0320 07:14:00.137747 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566514-w8pvt" Mar 20 07:14:00 crc kubenswrapper[5136]: I0320 07:14:00.140229 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:14:00 crc kubenswrapper[5136]: I0320 07:14:00.140425 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:14:00 crc kubenswrapper[5136]: I0320 07:14:00.141637 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:14:00 crc kubenswrapper[5136]: I0320 07:14:00.146844 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566514-w8pvt"] Mar 20 07:14:00 crc kubenswrapper[5136]: I0320 07:14:00.259516 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qkg6\" (UniqueName: \"kubernetes.io/projected/f034b011-ac81-4ef1-aa8b-39164a6c98ee-kube-api-access-6qkg6\") pod \"auto-csr-approver-29566514-w8pvt\" (UID: \"f034b011-ac81-4ef1-aa8b-39164a6c98ee\") " pod="openshift-infra/auto-csr-approver-29566514-w8pvt" Mar 20 07:14:00 crc kubenswrapper[5136]: I0320 07:14:00.360901 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qkg6\" (UniqueName: \"kubernetes.io/projected/f034b011-ac81-4ef1-aa8b-39164a6c98ee-kube-api-access-6qkg6\") pod \"auto-csr-approver-29566514-w8pvt\" (UID: \"f034b011-ac81-4ef1-aa8b-39164a6c98ee\") " pod="openshift-infra/auto-csr-approver-29566514-w8pvt" Mar 20 07:14:00 crc kubenswrapper[5136]: I0320 07:14:00.380355 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qkg6\" (UniqueName: \"kubernetes.io/projected/f034b011-ac81-4ef1-aa8b-39164a6c98ee-kube-api-access-6qkg6\") pod 
\"auto-csr-approver-29566514-w8pvt\" (UID: \"f034b011-ac81-4ef1-aa8b-39164a6c98ee\") " pod="openshift-infra/auto-csr-approver-29566514-w8pvt" Mar 20 07:14:00 crc kubenswrapper[5136]: I0320 07:14:00.454400 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566514-w8pvt" Mar 20 07:14:00 crc kubenswrapper[5136]: I0320 07:14:00.913410 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566514-w8pvt"] Mar 20 07:14:01 crc kubenswrapper[5136]: I0320 07:14:01.593961 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566514-w8pvt" event={"ID":"f034b011-ac81-4ef1-aa8b-39164a6c98ee","Type":"ContainerStarted","Data":"39b50e4e3124f6d2ab50323d1ca1ded11a637bf1992d62e4284a2707b5108ecc"} Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.036442 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.496272 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-c5brf"] Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.497694 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c5brf" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.500207 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.501694 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.513427 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-c5brf"] Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.604719 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-c5brf\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " pod="openstack/nova-cell0-cell-mapping-c5brf" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.604795 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-config-data\") pod \"nova-cell0-cell-mapping-c5brf\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " pod="openstack/nova-cell0-cell-mapping-c5brf" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.604887 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-scripts\") pod \"nova-cell0-cell-mapping-c5brf\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " pod="openstack/nova-cell0-cell-mapping-c5brf" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.604922 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c4z8\" (UniqueName: 
\"kubernetes.io/projected/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-kube-api-access-2c4z8\") pod \"nova-cell0-cell-mapping-c5brf\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " pod="openstack/nova-cell0-cell-mapping-c5brf" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.617800 5136 generic.go:334] "Generic (PLEG): container finished" podID="f034b011-ac81-4ef1-aa8b-39164a6c98ee" containerID="d1609ae90ac31423489405692434f7f762e8aa11262621b19e053461b1226222" exitCode=0 Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.617877 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566514-w8pvt" event={"ID":"f034b011-ac81-4ef1-aa8b-39164a6c98ee","Type":"ContainerDied","Data":"d1609ae90ac31423489405692434f7f762e8aa11262621b19e053461b1226222"} Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.687978 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.689573 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.694472 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.709340 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-scripts\") pod \"nova-cell0-cell-mapping-c5brf\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " pod="openstack/nova-cell0-cell-mapping-c5brf" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.709468 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c4z8\" (UniqueName: \"kubernetes.io/projected/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-kube-api-access-2c4z8\") pod \"nova-cell0-cell-mapping-c5brf\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " pod="openstack/nova-cell0-cell-mapping-c5brf" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.709561 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-c5brf\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " pod="openstack/nova-cell0-cell-mapping-c5brf" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.709585 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.709686 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-config-data\") pod \"nova-cell0-cell-mapping-c5brf\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " pod="openstack/nova-cell0-cell-mapping-c5brf" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 
07:14:02.718649 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-scripts\") pod \"nova-cell0-cell-mapping-c5brf\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " pod="openstack/nova-cell0-cell-mapping-c5brf" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.722608 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-config-data\") pod \"nova-cell0-cell-mapping-c5brf\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " pod="openstack/nova-cell0-cell-mapping-c5brf" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.740741 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-c5brf\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " pod="openstack/nova-cell0-cell-mapping-c5brf" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.747577 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c4z8\" (UniqueName: \"kubernetes.io/projected/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-kube-api-access-2c4z8\") pod \"nova-cell0-cell-mapping-c5brf\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " pod="openstack/nova-cell0-cell-mapping-c5brf" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.804903 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.806397 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.809652 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.810884 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-config-data\") pod \"nova-scheduler-0\" (UID: \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\") " pod="openstack/nova-scheduler-0" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.810913 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\") " pod="openstack/nova-scheduler-0" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.811016 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpwth\" (UniqueName: \"kubernetes.io/projected/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-kube-api-access-xpwth\") pod \"nova-scheduler-0\" (UID: \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\") " pod="openstack/nova-scheduler-0" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.825255 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c5brf" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.859764 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.920772 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqkzt\" (UniqueName: \"kubernetes.io/projected/0898ed98-4947-4790-9e86-f022b20bc330-kube-api-access-gqkzt\") pod \"nova-metadata-0\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " pod="openstack/nova-metadata-0" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.920878 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpwth\" (UniqueName: \"kubernetes.io/projected/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-kube-api-access-xpwth\") pod \"nova-scheduler-0\" (UID: \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\") " pod="openstack/nova-scheduler-0" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.920928 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0898ed98-4947-4790-9e86-f022b20bc330-config-data\") pod \"nova-metadata-0\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " pod="openstack/nova-metadata-0" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.920974 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0898ed98-4947-4790-9e86-f022b20bc330-logs\") pod \"nova-metadata-0\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " pod="openstack/nova-metadata-0" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.921005 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0898ed98-4947-4790-9e86-f022b20bc330-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " pod="openstack/nova-metadata-0" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.921031 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-config-data\") pod \"nova-scheduler-0\" (UID: \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\") " pod="openstack/nova-scheduler-0" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.921048 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\") " pod="openstack/nova-scheduler-0" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.921281 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.925583 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.935396 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.939250 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-config-data\") pod \"nova-scheduler-0\" (UID: \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\") " pod="openstack/nova-scheduler-0" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.939766 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\") " pod="openstack/nova-scheduler-0" Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.946852 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.987040 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpwth\" (UniqueName: \"kubernetes.io/projected/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-kube-api-access-xpwth\") pod \"nova-scheduler-0\" (UID: \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\") " pod="openstack/nova-scheduler-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.014212 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.025880 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0898ed98-4947-4790-9e86-f022b20bc330-config-data\") pod \"nova-metadata-0\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " pod="openstack/nova-metadata-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.025955 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0898ed98-4947-4790-9e86-f022b20bc330-logs\") pod \"nova-metadata-0\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " pod="openstack/nova-metadata-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.025983 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e08e031-e23d-44c4-bbb2-039769dc1e24-logs\") pod \"nova-api-0\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " pod="openstack/nova-api-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.026001 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6p42\" (UniqueName: \"kubernetes.io/projected/4e08e031-e23d-44c4-bbb2-039769dc1e24-kube-api-access-k6p42\") pod \"nova-api-0\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " pod="openstack/nova-api-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.026033 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0898ed98-4947-4790-9e86-f022b20bc330-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " pod="openstack/nova-metadata-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.026040 5136 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.026068 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e08e031-e23d-44c4-bbb2-039769dc1e24-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " pod="openstack/nova-api-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.026120 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e08e031-e23d-44c4-bbb2-039769dc1e24-config-data\") pod \"nova-api-0\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " pod="openstack/nova-api-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.026141 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqkzt\" (UniqueName: \"kubernetes.io/projected/0898ed98-4947-4790-9e86-f022b20bc330-kube-api-access-gqkzt\") pod \"nova-metadata-0\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " pod="openstack/nova-metadata-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.027284 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.028020 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0898ed98-4947-4790-9e86-f022b20bc330-logs\") pod \"nova-metadata-0\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " pod="openstack/nova-metadata-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.032260 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.037460 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0898ed98-4947-4790-9e86-f022b20bc330-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " pod="openstack/nova-metadata-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.051533 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0898ed98-4947-4790-9e86-f022b20bc330-config-data\") pod \"nova-metadata-0\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " pod="openstack/nova-metadata-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.060878 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b495b9cc7-zflc2"] Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.062417 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.070273 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqkzt\" (UniqueName: \"kubernetes.io/projected/0898ed98-4947-4790-9e86-f022b20bc330-kube-api-access-gqkzt\") pod \"nova-metadata-0\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " pod="openstack/nova-metadata-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.133838 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c68nt\" (UniqueName: \"kubernetes.io/projected/f8e70074-47b9-45a2-8dce-52b29305cdf4-kube-api-access-c68nt\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8e70074-47b9-45a2-8dce-52b29305cdf4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.133927 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e08e031-e23d-44c4-bbb2-039769dc1e24-logs\") pod \"nova-api-0\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " pod="openstack/nova-api-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.133946 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6p42\" (UniqueName: \"kubernetes.io/projected/4e08e031-e23d-44c4-bbb2-039769dc1e24-kube-api-access-k6p42\") pod \"nova-api-0\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " pod="openstack/nova-api-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.133985 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e08e031-e23d-44c4-bbb2-039769dc1e24-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " pod="openstack/nova-api-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.134038 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e08e031-e23d-44c4-bbb2-039769dc1e24-config-data\") pod \"nova-api-0\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " pod="openstack/nova-api-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.134086 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e70074-47b9-45a2-8dce-52b29305cdf4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8e70074-47b9-45a2-8dce-52b29305cdf4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.134112 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e70074-47b9-45a2-8dce-52b29305cdf4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8e70074-47b9-45a2-8dce-52b29305cdf4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.134592 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e08e031-e23d-44c4-bbb2-039769dc1e24-logs\") pod \"nova-api-0\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " pod="openstack/nova-api-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.153488 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e08e031-e23d-44c4-bbb2-039769dc1e24-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " pod="openstack/nova-api-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.154202 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e08e031-e23d-44c4-bbb2-039769dc1e24-config-data\") pod 
\"nova-api-0\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " pod="openstack/nova-api-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.167876 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.174871 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.186414 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6p42\" (UniqueName: \"kubernetes.io/projected/4e08e031-e23d-44c4-bbb2-039769dc1e24-kube-api-access-k6p42\") pod \"nova-api-0\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " pod="openstack/nova-api-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.189164 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b495b9cc7-zflc2"] Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.199593 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.235907 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e70074-47b9-45a2-8dce-52b29305cdf4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8e70074-47b9-45a2-8dce-52b29305cdf4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.235964 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-dns-svc\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.235984 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e70074-47b9-45a2-8dce-52b29305cdf4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8e70074-47b9-45a2-8dce-52b29305cdf4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.236008 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c68nt\" (UniqueName: \"kubernetes.io/projected/f8e70074-47b9-45a2-8dce-52b29305cdf4-kube-api-access-c68nt\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8e70074-47b9-45a2-8dce-52b29305cdf4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.236041 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-ovsdbserver-nb\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " 
pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.236069 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-config\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.236105 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zhvw\" (UniqueName: \"kubernetes.io/projected/470e7cfd-fbbb-467e-8115-05cb5654655c-kube-api-access-5zhvw\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.236126 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-dns-swift-storage-0\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.236145 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-ovsdbserver-sb\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.292020 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e70074-47b9-45a2-8dce-52b29305cdf4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"f8e70074-47b9-45a2-8dce-52b29305cdf4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.339947 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-ovsdbserver-nb\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.340027 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-config\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.340090 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zhvw\" (UniqueName: \"kubernetes.io/projected/470e7cfd-fbbb-467e-8115-05cb5654655c-kube-api-access-5zhvw\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.340118 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-dns-swift-storage-0\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.340140 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-ovsdbserver-sb\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " 
pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.340254 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-dns-svc\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.341308 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-dns-svc\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.344625 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c68nt\" (UniqueName: \"kubernetes.io/projected/f8e70074-47b9-45a2-8dce-52b29305cdf4-kube-api-access-c68nt\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8e70074-47b9-45a2-8dce-52b29305cdf4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.345111 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-ovsdbserver-nb\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.345301 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-config\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.345901 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-ovsdbserver-sb\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.357132 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-dns-swift-storage-0\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.359112 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e70074-47b9-45a2-8dce-52b29305cdf4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8e70074-47b9-45a2-8dce-52b29305cdf4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.366327 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zhvw\" (UniqueName: \"kubernetes.io/projected/470e7cfd-fbbb-467e-8115-05cb5654655c-kube-api-access-5zhvw\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.367328 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.533411 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-c5brf"] Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.627540 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.637282 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c5brf" event={"ID":"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab","Type":"ContainerStarted","Data":"189025ea97cd8693552bd7a1200886f635ac7ec738c1a8d12f444054eeb7b139"} Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.869466 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.879617 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.014982 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lhwjx"] Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.016336 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.019042 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.019275 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.042989 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lhwjx"] Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.093324 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b495b9cc7-zflc2"] Mar 20 07:14:04 crc kubenswrapper[5136]: W0320 07:14:04.098936 5136 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod470e7cfd_fbbb_467e_8115_05cb5654655c.slice/crio-0815fbb3bd31db22d56e0ee37bf687b5d4a3901815c3e15d641f9bfe006afa83 WatchSource:0}: Error finding container 0815fbb3bd31db22d56e0ee37bf687b5d4a3901815c3e15d641f9bfe006afa83: Status 404 returned error can't find the container with id 0815fbb3bd31db22d56e0ee37bf687b5d4a3901815c3e15d641f9bfe006afa83 Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.106530 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.162085 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-config-data\") pod \"nova-cell1-conductor-db-sync-lhwjx\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.162151 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lhwjx\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.162280 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-scripts\") pod \"nova-cell1-conductor-db-sync-lhwjx\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.162324 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz6q8\" (UniqueName: 
\"kubernetes.io/projected/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-kube-api-access-nz6q8\") pod \"nova-cell1-conductor-db-sync-lhwjx\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.261974 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566514-w8pvt" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.263529 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-config-data\") pod \"nova-cell1-conductor-db-sync-lhwjx\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.263603 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lhwjx\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.263680 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-scripts\") pod \"nova-cell1-conductor-db-sync-lhwjx\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.263706 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz6q8\" (UniqueName: \"kubernetes.io/projected/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-kube-api-access-nz6q8\") pod \"nova-cell1-conductor-db-sync-lhwjx\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " 
pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.271725 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-scripts\") pod \"nova-cell1-conductor-db-sync-lhwjx\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.273471 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lhwjx\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.276405 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-config-data\") pod \"nova-cell1-conductor-db-sync-lhwjx\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.302166 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz6q8\" (UniqueName: \"kubernetes.io/projected/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-kube-api-access-nz6q8\") pod \"nova-cell1-conductor-db-sync-lhwjx\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.332083 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 07:14:04 crc kubenswrapper[5136]: W0320 07:14:04.343937 5136 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8e70074_47b9_45a2_8dce_52b29305cdf4.slice/crio-01468c63a42bfe893531410c532c091f281c2446cd45d74c41e17c5a31b66895 WatchSource:0}: Error finding container 01468c63a42bfe893531410c532c091f281c2446cd45d74c41e17c5a31b66895: Status 404 returned error can't find the container with id 01468c63a42bfe893531410c532c091f281c2446cd45d74c41e17c5a31b66895 Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.353307 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.364531 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qkg6\" (UniqueName: \"kubernetes.io/projected/f034b011-ac81-4ef1-aa8b-39164a6c98ee-kube-api-access-6qkg6\") pod \"f034b011-ac81-4ef1-aa8b-39164a6c98ee\" (UID: \"f034b011-ac81-4ef1-aa8b-39164a6c98ee\") " Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.371430 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f034b011-ac81-4ef1-aa8b-39164a6c98ee-kube-api-access-6qkg6" (OuterVolumeSpecName: "kube-api-access-6qkg6") pod "f034b011-ac81-4ef1-aa8b-39164a6c98ee" (UID: "f034b011-ac81-4ef1-aa8b-39164a6c98ee"). InnerVolumeSpecName "kube-api-access-6qkg6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.467537 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qkg6\" (UniqueName: \"kubernetes.io/projected/f034b011-ac81-4ef1-aa8b-39164a6c98ee-kube-api-access-6qkg6\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.662230 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c5brf" event={"ID":"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab","Type":"ContainerStarted","Data":"d40caae4293d07d1be57a3b0fe6a0c2358da7d3ca34831236dc80ae177c4c105"} Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.672233 5136 generic.go:334] "Generic (PLEG): container finished" podID="470e7cfd-fbbb-467e-8115-05cb5654655c" containerID="296caa72bdf067401801dcafde6b349a8fa9a120a15acef2e0b624bdeebcf37a" exitCode=0 Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.672282 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" event={"ID":"470e7cfd-fbbb-467e-8115-05cb5654655c","Type":"ContainerDied","Data":"296caa72bdf067401801dcafde6b349a8fa9a120a15acef2e0b624bdeebcf37a"} Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.672319 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" event={"ID":"470e7cfd-fbbb-467e-8115-05cb5654655c","Type":"ContainerStarted","Data":"0815fbb3bd31db22d56e0ee37bf687b5d4a3901815c3e15d641f9bfe006afa83"} Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.683746 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-c5brf" podStartSLOduration=2.6837302320000003 podStartE2EDuration="2.683730232s" podCreationTimestamp="2026-03-20 07:14:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 
07:14:04.676720973 +0000 UTC m=+1476.936032124" watchObservedRunningTime="2026-03-20 07:14:04.683730232 +0000 UTC m=+1476.943041373" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.685431 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566514-w8pvt" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.685475 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566514-w8pvt" event={"ID":"f034b011-ac81-4ef1-aa8b-39164a6c98ee","Type":"ContainerDied","Data":"39b50e4e3124f6d2ab50323d1ca1ded11a637bf1992d62e4284a2707b5108ecc"} Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.685500 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39b50e4e3124f6d2ab50323d1ca1ded11a637bf1992d62e4284a2707b5108ecc" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.693700 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0898ed98-4947-4790-9e86-f022b20bc330","Type":"ContainerStarted","Data":"20bf0d47cfdd7ced2b7aa5c95e4f2ffab3ec012eb224a73fbcf3f8530975a2a3"} Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.703251 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8","Type":"ContainerStarted","Data":"0df1814e7262e95838c178b1e5b10663c640c5fde6adccd96881a36676604504"} Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.713050 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f8e70074-47b9-45a2-8dce-52b29305cdf4","Type":"ContainerStarted","Data":"01468c63a42bfe893531410c532c091f281c2446cd45d74c41e17c5a31b66895"} Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.728165 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"4e08e031-e23d-44c4-bbb2-039769dc1e24","Type":"ContainerStarted","Data":"a5ca31eef173d869233bc7c356cbb5c4a26b6f89cb34f063ea18470c4f07ae64"} Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.869166 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lhwjx"] Mar 20 07:14:05 crc kubenswrapper[5136]: I0320 07:14:05.356645 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566508-v874c"] Mar 20 07:14:05 crc kubenswrapper[5136]: I0320 07:14:05.377473 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566508-v874c"] Mar 20 07:14:05 crc kubenswrapper[5136]: I0320 07:14:05.742031 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" event={"ID":"470e7cfd-fbbb-467e-8115-05cb5654655c","Type":"ContainerStarted","Data":"a7d15dbcf3e44927ae943561f1932c75895f70aa4d9c499b6616ad45bc104764"} Mar 20 07:14:05 crc kubenswrapper[5136]: I0320 07:14:05.742124 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:05 crc kubenswrapper[5136]: I0320 07:14:05.745455 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lhwjx" event={"ID":"3e4cd633-e391-4daa-8d31-f9e05afb5fe9","Type":"ContainerStarted","Data":"0b0d8b9ccdc0ce4c2fbd89f7f74e2b08044fdede201e9fe4c0352ac82e9375a6"} Mar 20 07:14:05 crc kubenswrapper[5136]: I0320 07:14:05.745496 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lhwjx" event={"ID":"3e4cd633-e391-4daa-8d31-f9e05afb5fe9","Type":"ContainerStarted","Data":"be299a09fe4b2eaa315d545ec6ad6716116fac41b9ad64f156a3aad3be4b468f"} Mar 20 07:14:05 crc kubenswrapper[5136]: I0320 07:14:05.765499 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" 
podStartSLOduration=3.765477875 podStartE2EDuration="3.765477875s" podCreationTimestamp="2026-03-20 07:14:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:14:05.756396712 +0000 UTC m=+1478.015707863" watchObservedRunningTime="2026-03-20 07:14:05.765477875 +0000 UTC m=+1478.024789036" Mar 20 07:14:05 crc kubenswrapper[5136]: I0320 07:14:05.782636 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-lhwjx" podStartSLOduration=2.782617419 podStartE2EDuration="2.782617419s" podCreationTimestamp="2026-03-20 07:14:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:14:05.773565977 +0000 UTC m=+1478.032877128" watchObservedRunningTime="2026-03-20 07:14:05.782617419 +0000 UTC m=+1478.041928570" Mar 20 07:14:06 crc kubenswrapper[5136]: I0320 07:14:06.411946 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9cf4346-e624-476e-b04c-43b35e0a83cd" path="/var/lib/kubelet/pods/f9cf4346-e624-476e-b04c-43b35e0a83cd/volumes" Mar 20 07:14:06 crc kubenswrapper[5136]: I0320 07:14:06.803622 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:14:06 crc kubenswrapper[5136]: I0320 07:14:06.838330 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 07:14:07 crc kubenswrapper[5136]: I0320 07:14:07.771664 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4e08e031-e23d-44c4-bbb2-039769dc1e24","Type":"ContainerStarted","Data":"5b986819d9a90e1b85c9700743fca946f3bc072f13b32ee80b0ccb0986a0c382"} Mar 20 07:14:07 crc kubenswrapper[5136]: I0320 07:14:07.772099 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"4e08e031-e23d-44c4-bbb2-039769dc1e24","Type":"ContainerStarted","Data":"57f37d9f2fcaf7ecb1593abeb0dac4f77898bf18e2e7d2992aaecaca2cb60ac9"} Mar 20 07:14:07 crc kubenswrapper[5136]: I0320 07:14:07.774323 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0898ed98-4947-4790-9e86-f022b20bc330","Type":"ContainerStarted","Data":"f7fabe81352183708a864623d8602cc2d8ea0c1282e771482eaebc4b633a5e02"} Mar 20 07:14:07 crc kubenswrapper[5136]: I0320 07:14:07.774356 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0898ed98-4947-4790-9e86-f022b20bc330","Type":"ContainerStarted","Data":"27c023c35669b2ba848d9e65c7d0898f10c49020818f9d3f19dccfc3afaa8e46"} Mar 20 07:14:07 crc kubenswrapper[5136]: I0320 07:14:07.774452 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0898ed98-4947-4790-9e86-f022b20bc330" containerName="nova-metadata-log" containerID="cri-o://27c023c35669b2ba848d9e65c7d0898f10c49020818f9d3f19dccfc3afaa8e46" gracePeriod=30 Mar 20 07:14:07 crc kubenswrapper[5136]: I0320 07:14:07.774661 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0898ed98-4947-4790-9e86-f022b20bc330" containerName="nova-metadata-metadata" containerID="cri-o://f7fabe81352183708a864623d8602cc2d8ea0c1282e771482eaebc4b633a5e02" gracePeriod=30 Mar 20 07:14:07 crc kubenswrapper[5136]: I0320 07:14:07.777439 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f8e70074-47b9-45a2-8dce-52b29305cdf4","Type":"ContainerStarted","Data":"043b1ee99aab1c35a728494e20fbe3b332fbc6142919d4cfb70bd4dd6499e926"} Mar 20 07:14:07 crc kubenswrapper[5136]: I0320 07:14:07.777507 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f8e70074-47b9-45a2-8dce-52b29305cdf4" 
containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://043b1ee99aab1c35a728494e20fbe3b332fbc6142919d4cfb70bd4dd6499e926" gracePeriod=30 Mar 20 07:14:07 crc kubenswrapper[5136]: I0320 07:14:07.783045 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8","Type":"ContainerStarted","Data":"5cee24c274fa82fbc9a248515f82aa7aaf0c85708b3024d4d2662c475b5bc62b"} Mar 20 07:14:07 crc kubenswrapper[5136]: I0320 07:14:07.873123 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.86717633 podStartE2EDuration="5.873105455s" podCreationTimestamp="2026-03-20 07:14:02 +0000 UTC" firstStartedPulling="2026-03-20 07:14:04.1033633 +0000 UTC m=+1476.362674451" lastFinishedPulling="2026-03-20 07:14:07.109292425 +0000 UTC m=+1479.368603576" observedRunningTime="2026-03-20 07:14:07.819079234 +0000 UTC m=+1480.078390405" watchObservedRunningTime="2026-03-20 07:14:07.873105455 +0000 UTC m=+1480.132416606" Mar 20 07:14:07 crc kubenswrapper[5136]: I0320 07:14:07.877157 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.69075244 podStartE2EDuration="5.877146271s" podCreationTimestamp="2026-03-20 07:14:02 +0000 UTC" firstStartedPulling="2026-03-20 07:14:03.890523797 +0000 UTC m=+1476.149834948" lastFinishedPulling="2026-03-20 07:14:07.076917628 +0000 UTC m=+1479.336228779" observedRunningTime="2026-03-20 07:14:07.871916928 +0000 UTC m=+1480.131228079" watchObservedRunningTime="2026-03-20 07:14:07.877146271 +0000 UTC m=+1480.136457422" Mar 20 07:14:07 crc kubenswrapper[5136]: I0320 07:14:07.904633 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.17566408 podStartE2EDuration="5.904610925s" podCreationTimestamp="2026-03-20 07:14:02 +0000 UTC" firstStartedPulling="2026-03-20 
07:14:04.347905061 +0000 UTC m=+1476.607216212" lastFinishedPulling="2026-03-20 07:14:07.076851906 +0000 UTC m=+1479.336163057" observedRunningTime="2026-03-20 07:14:07.904325406 +0000 UTC m=+1480.163636557" watchObservedRunningTime="2026-03-20 07:14:07.904610925 +0000 UTC m=+1480.163922076" Mar 20 07:14:07 crc kubenswrapper[5136]: I0320 07:14:07.958773 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.773739333 podStartE2EDuration="5.95873285s" podCreationTimestamp="2026-03-20 07:14:02 +0000 UTC" firstStartedPulling="2026-03-20 07:14:03.890238328 +0000 UTC m=+1476.149549479" lastFinishedPulling="2026-03-20 07:14:07.075231845 +0000 UTC m=+1479.334542996" observedRunningTime="2026-03-20 07:14:07.939285065 +0000 UTC m=+1480.198596216" watchObservedRunningTime="2026-03-20 07:14:07.95873285 +0000 UTC m=+1480.218044011" Mar 20 07:14:08 crc kubenswrapper[5136]: I0320 07:14:08.015926 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 20 07:14:08 crc kubenswrapper[5136]: I0320 07:14:08.632188 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:08 crc kubenswrapper[5136]: I0320 07:14:08.795099 5136 generic.go:334] "Generic (PLEG): container finished" podID="0898ed98-4947-4790-9e86-f022b20bc330" containerID="27c023c35669b2ba848d9e65c7d0898f10c49020818f9d3f19dccfc3afaa8e46" exitCode=143 Mar 20 07:14:08 crc kubenswrapper[5136]: I0320 07:14:08.795171 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0898ed98-4947-4790-9e86-f022b20bc330","Type":"ContainerDied","Data":"27c023c35669b2ba848d9e65c7d0898f10c49020818f9d3f19dccfc3afaa8e46"} Mar 20 07:14:11 crc kubenswrapper[5136]: I0320 07:14:11.840587 5136 generic.go:334] "Generic (PLEG): container finished" podID="bd689ec0-53e3-498c-9bd7-e6c4be0a94ab" 
containerID="d40caae4293d07d1be57a3b0fe6a0c2358da7d3ca34831236dc80ae177c4c105" exitCode=0 Mar 20 07:14:11 crc kubenswrapper[5136]: I0320 07:14:11.841194 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c5brf" event={"ID":"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab","Type":"ContainerDied","Data":"d40caae4293d07d1be57a3b0fe6a0c2358da7d3ca34831236dc80ae177c4c105"} Mar 20 07:14:12 crc kubenswrapper[5136]: I0320 07:14:12.851009 5136 generic.go:334] "Generic (PLEG): container finished" podID="3e4cd633-e391-4daa-8d31-f9e05afb5fe9" containerID="0b0d8b9ccdc0ce4c2fbd89f7f74e2b08044fdede201e9fe4c0352ac82e9375a6" exitCode=0 Mar 20 07:14:12 crc kubenswrapper[5136]: I0320 07:14:12.851123 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lhwjx" event={"ID":"3e4cd633-e391-4daa-8d31-f9e05afb5fe9","Type":"ContainerDied","Data":"0b0d8b9ccdc0ce4c2fbd89f7f74e2b08044fdede201e9fe4c0352ac82e9375a6"} Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.015669 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.060326 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.200370 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.200441 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.210899 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c5brf" Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.259608 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-scripts\") pod \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.259733 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-combined-ca-bundle\") pod \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.259872 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-config-data\") pod \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.259920 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2c4z8\" (UniqueName: \"kubernetes.io/projected/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-kube-api-access-2c4z8\") pod \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.267245 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-scripts" (OuterVolumeSpecName: "scripts") pod "bd689ec0-53e3-498c-9bd7-e6c4be0a94ab" (UID: "bd689ec0-53e3-498c-9bd7-e6c4be0a94ab"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.271511 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-kube-api-access-2c4z8" (OuterVolumeSpecName: "kube-api-access-2c4z8") pod "bd689ec0-53e3-498c-9bd7-e6c4be0a94ab" (UID: "bd689ec0-53e3-498c-9bd7-e6c4be0a94ab"). InnerVolumeSpecName "kube-api-access-2c4z8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.292086 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-config-data" (OuterVolumeSpecName: "config-data") pod "bd689ec0-53e3-498c-9bd7-e6c4be0a94ab" (UID: "bd689ec0-53e3-498c-9bd7-e6c4be0a94ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.292340 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd689ec0-53e3-498c-9bd7-e6c4be0a94ab" (UID: "bd689ec0-53e3-498c-9bd7-e6c4be0a94ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.361973 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.362015 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.362030 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.362042 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2c4z8\" (UniqueName: \"kubernetes.io/projected/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-kube-api-access-2c4z8\") on node \"crc\" DevicePath \"\""
Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.370961 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2"
Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.440083 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8995fbb57-rp89c"]
Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.440337 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8995fbb57-rp89c" podUID="200895ec-fcf9-436d-82d3-c26c198e1485" containerName="dnsmasq-dns" containerID="cri-o://01aa356b57e965220f79e7a24da86937ea014054be6bb673baf18c8bb2471582" gracePeriod=10
Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.859578 5136 generic.go:334] "Generic (PLEG): container finished" podID="200895ec-fcf9-436d-82d3-c26c198e1485" containerID="01aa356b57e965220f79e7a24da86937ea014054be6bb673baf18c8bb2471582" exitCode=0
Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.859627 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8995fbb57-rp89c" event={"ID":"200895ec-fcf9-436d-82d3-c26c198e1485","Type":"ContainerDied","Data":"01aa356b57e965220f79e7a24da86937ea014054be6bb673baf18c8bb2471582"}
Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.859650 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8995fbb57-rp89c" event={"ID":"200895ec-fcf9-436d-82d3-c26c198e1485","Type":"ContainerDied","Data":"1a1eab5f1e60735841a0d3aa3364f7f1355c8183de8ac566e94a25a08426cc8d"}
Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.859661 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a1eab5f1e60735841a0d3aa3364f7f1355c8183de8ac566e94a25a08426cc8d"
Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.861787 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c5brf"
Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.861932 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c5brf" event={"ID":"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab","Type":"ContainerDied","Data":"189025ea97cd8693552bd7a1200886f635ac7ec738c1a8d12f444054eeb7b139"}
Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.861959 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="189025ea97cd8693552bd7a1200886f635ac7ec738c1a8d12f444054eeb7b139"
Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.871007 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8995fbb57-rp89c"
Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.938299 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.972192 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-dns-swift-storage-0\") pod \"200895ec-fcf9-436d-82d3-c26c198e1485\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") "
Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.972238 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-ovsdbserver-nb\") pod \"200895ec-fcf9-436d-82d3-c26c198e1485\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") "
Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.972353 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-ovsdbserver-sb\") pod \"200895ec-fcf9-436d-82d3-c26c198e1485\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") "
Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.972380 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-dns-svc\") pod \"200895ec-fcf9-436d-82d3-c26c198e1485\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") "
Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.972465 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-config\") pod \"200895ec-fcf9-436d-82d3-c26c198e1485\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") "
Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.972512 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq6zh\" (UniqueName: \"kubernetes.io/projected/200895ec-fcf9-436d-82d3-c26c198e1485-kube-api-access-cq6zh\") pod \"200895ec-fcf9-436d-82d3-c26c198e1485\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") "
Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.981983 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/200895ec-fcf9-436d-82d3-c26c198e1485-kube-api-access-cq6zh" (OuterVolumeSpecName: "kube-api-access-cq6zh") pod "200895ec-fcf9-436d-82d3-c26c198e1485" (UID: "200895ec-fcf9-436d-82d3-c26c198e1485"). InnerVolumeSpecName "kube-api-access-cq6zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.033132 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "200895ec-fcf9-436d-82d3-c26c198e1485" (UID: "200895ec-fcf9-436d-82d3-c26c198e1485"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.053844 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "200895ec-fcf9-436d-82d3-c26c198e1485" (UID: "200895ec-fcf9-436d-82d3-c26c198e1485"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.073903 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "200895ec-fcf9-436d-82d3-c26c198e1485" (UID: "200895ec-fcf9-436d-82d3-c26c198e1485"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.074402 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq6zh\" (UniqueName: \"kubernetes.io/projected/200895ec-fcf9-436d-82d3-c26c198e1485-kube-api-access-cq6zh\") on node \"crc\" DevicePath \"\""
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.074428 5136 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.074437 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.074448 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.095416 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.095889 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4e08e031-e23d-44c4-bbb2-039769dc1e24" containerName="nova-api-log" containerID="cri-o://57f37d9f2fcaf7ecb1593abeb0dac4f77898bf18e2e7d2992aaecaca2cb60ac9" gracePeriod=30
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.097217 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4e08e031-e23d-44c4-bbb2-039769dc1e24" containerName="nova-api-api" containerID="cri-o://5b986819d9a90e1b85c9700743fca946f3bc072f13b32ee80b0ccb0986a0c382" gracePeriod=30
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.099592 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "200895ec-fcf9-436d-82d3-c26c198e1485" (UID: "200895ec-fcf9-436d-82d3-c26c198e1485"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.115033 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4e08e031-e23d-44c4-bbb2-039769dc1e24" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": EOF"
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.115183 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4e08e031-e23d-44c4-bbb2-039769dc1e24" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": EOF"
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.136210 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-config" (OuterVolumeSpecName: "config") pod "200895ec-fcf9-436d-82d3-c26c198e1485" (UID: "200895ec-fcf9-436d-82d3-c26c198e1485"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.177841 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.177873 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-config\") on node \"crc\" DevicePath \"\""
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.321083 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lhwjx"
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.380233 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-config-data\") pod \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") "
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.380347 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-scripts\") pod \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") "
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.380370 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nz6q8\" (UniqueName: \"kubernetes.io/projected/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-kube-api-access-nz6q8\") pod \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") "
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.380432 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-combined-ca-bundle\") pod \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") "
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.383802 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.385195 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-kube-api-access-nz6q8" (OuterVolumeSpecName: "kube-api-access-nz6q8") pod "3e4cd633-e391-4daa-8d31-f9e05afb5fe9" (UID: "3e4cd633-e391-4daa-8d31-f9e05afb5fe9"). InnerVolumeSpecName "kube-api-access-nz6q8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.413958 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-scripts" (OuterVolumeSpecName: "scripts") pod "3e4cd633-e391-4daa-8d31-f9e05afb5fe9" (UID: "3e4cd633-e391-4daa-8d31-f9e05afb5fe9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.464292 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-config-data" (OuterVolumeSpecName: "config-data") pod "3e4cd633-e391-4daa-8d31-f9e05afb5fe9" (UID: "3e4cd633-e391-4daa-8d31-f9e05afb5fe9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.470071 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e4cd633-e391-4daa-8d31-f9e05afb5fe9" (UID: "3e4cd633-e391-4daa-8d31-f9e05afb5fe9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.482160 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.482187 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.482198 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nz6q8\" (UniqueName: \"kubernetes.io/projected/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-kube-api-access-nz6q8\") on node \"crc\" DevicePath \"\""
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.482207 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.870262 5136 generic.go:334] "Generic (PLEG): container finished" podID="4e08e031-e23d-44c4-bbb2-039769dc1e24" containerID="57f37d9f2fcaf7ecb1593abeb0dac4f77898bf18e2e7d2992aaecaca2cb60ac9" exitCode=143
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.870335 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4e08e031-e23d-44c4-bbb2-039769dc1e24","Type":"ContainerDied","Data":"57f37d9f2fcaf7ecb1593abeb0dac4f77898bf18e2e7d2992aaecaca2cb60ac9"}
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.872140 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8995fbb57-rp89c"
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.872130 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lhwjx" event={"ID":"3e4cd633-e391-4daa-8d31-f9e05afb5fe9","Type":"ContainerDied","Data":"be299a09fe4b2eaa315d545ec6ad6716116fac41b9ad64f156a3aad3be4b468f"}
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.872194 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be299a09fe4b2eaa315d545ec6ad6716116fac41b9ad64f156a3aad3be4b468f"
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.872311 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lhwjx"
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.912810 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8995fbb57-rp89c"]
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.920958 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8995fbb57-rp89c"]
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.967895 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 20 07:14:14 crc kubenswrapper[5136]: E0320 07:14:14.968259 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd689ec0-53e3-498c-9bd7-e6c4be0a94ab" containerName="nova-manage"
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.968275 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd689ec0-53e3-498c-9bd7-e6c4be0a94ab" containerName="nova-manage"
Mar 20 07:14:14 crc kubenswrapper[5136]: E0320 07:14:14.968288 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e4cd633-e391-4daa-8d31-f9e05afb5fe9" containerName="nova-cell1-conductor-db-sync"
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.968294 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e4cd633-e391-4daa-8d31-f9e05afb5fe9" containerName="nova-cell1-conductor-db-sync"
Mar 20 07:14:14 crc kubenswrapper[5136]: E0320 07:14:14.968314 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="200895ec-fcf9-436d-82d3-c26c198e1485" containerName="init"
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.968321 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="200895ec-fcf9-436d-82d3-c26c198e1485" containerName="init"
Mar 20 07:14:14 crc kubenswrapper[5136]: E0320 07:14:14.968337 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="200895ec-fcf9-436d-82d3-c26c198e1485" containerName="dnsmasq-dns"
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.968342 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="200895ec-fcf9-436d-82d3-c26c198e1485" containerName="dnsmasq-dns"
Mar 20 07:14:14 crc kubenswrapper[5136]: E0320 07:14:14.968351 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f034b011-ac81-4ef1-aa8b-39164a6c98ee" containerName="oc"
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.968357 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f034b011-ac81-4ef1-aa8b-39164a6c98ee" containerName="oc"
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.968516 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f034b011-ac81-4ef1-aa8b-39164a6c98ee" containerName="oc"
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.968533 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="200895ec-fcf9-436d-82d3-c26c198e1485" containerName="dnsmasq-dns"
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.968543 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd689ec0-53e3-498c-9bd7-e6c4be0a94ab" containerName="nova-manage"
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.968552 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e4cd633-e391-4daa-8d31-f9e05afb5fe9" containerName="nova-cell1-conductor-db-sync"
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.969129 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.977079 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.977320 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 20 07:14:15 crc kubenswrapper[5136]: I0320 07:14:15.096765 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52463352-7504-47a4-92e5-d672bab85574-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"52463352-7504-47a4-92e5-d672bab85574\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 07:14:15 crc kubenswrapper[5136]: I0320 07:14:15.096841 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52463352-7504-47a4-92e5-d672bab85574-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"52463352-7504-47a4-92e5-d672bab85574\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 07:14:15 crc kubenswrapper[5136]: I0320 07:14:15.096936 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzjvz\" (UniqueName: \"kubernetes.io/projected/52463352-7504-47a4-92e5-d672bab85574-kube-api-access-dzjvz\") pod \"nova-cell1-conductor-0\" (UID: \"52463352-7504-47a4-92e5-d672bab85574\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 07:14:15 crc kubenswrapper[5136]: I0320 07:14:15.197947 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52463352-7504-47a4-92e5-d672bab85574-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"52463352-7504-47a4-92e5-d672bab85574\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 07:14:15 crc kubenswrapper[5136]: I0320 07:14:15.198011 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52463352-7504-47a4-92e5-d672bab85574-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"52463352-7504-47a4-92e5-d672bab85574\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 07:14:15 crc kubenswrapper[5136]: I0320 07:14:15.198107 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzjvz\" (UniqueName: \"kubernetes.io/projected/52463352-7504-47a4-92e5-d672bab85574-kube-api-access-dzjvz\") pod \"nova-cell1-conductor-0\" (UID: \"52463352-7504-47a4-92e5-d672bab85574\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 07:14:15 crc kubenswrapper[5136]: I0320 07:14:15.202330 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52463352-7504-47a4-92e5-d672bab85574-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"52463352-7504-47a4-92e5-d672bab85574\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 07:14:15 crc kubenswrapper[5136]: I0320 07:14:15.202534 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52463352-7504-47a4-92e5-d672bab85574-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"52463352-7504-47a4-92e5-d672bab85574\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 07:14:15 crc kubenswrapper[5136]: I0320 07:14:15.221916 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzjvz\" (UniqueName: \"kubernetes.io/projected/52463352-7504-47a4-92e5-d672bab85574-kube-api-access-dzjvz\") pod \"nova-cell1-conductor-0\" (UID: \"52463352-7504-47a4-92e5-d672bab85574\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 07:14:15 crc kubenswrapper[5136]: I0320 07:14:15.283426 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Mar 20 07:14:15 crc kubenswrapper[5136]: I0320 07:14:15.750115 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 20 07:14:15 crc kubenswrapper[5136]: I0320 07:14:15.884753 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"52463352-7504-47a4-92e5-d672bab85574","Type":"ContainerStarted","Data":"d25cabad936d4a8da77263639f37547fcf3ffbbafde65e2d7285a8e382e5513c"}
Mar 20 07:14:15 crc kubenswrapper[5136]: I0320 07:14:15.884921 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="64c2e510-4c83-41bf-ae4a-e8cc1dc058f8" containerName="nova-scheduler-scheduler" containerID="cri-o://5cee24c274fa82fbc9a248515f82aa7aaf0c85708b3024d4d2662c475b5bc62b" gracePeriod=30
Mar 20 07:14:16 crc kubenswrapper[5136]: I0320 07:14:16.407854 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="200895ec-fcf9-436d-82d3-c26c198e1485" path="/var/lib/kubelet/pods/200895ec-fcf9-436d-82d3-c26c198e1485/volumes"
Mar 20 07:14:16 crc kubenswrapper[5136]: I0320 07:14:16.894969 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"52463352-7504-47a4-92e5-d672bab85574","Type":"ContainerStarted","Data":"f56d00c5a5534f7bc12a714ef821f350a8f4afb4f1d3b31f9016e24b955644f9"}
Mar 20 07:14:16 crc kubenswrapper[5136]: I0320 07:14:16.895152 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Mar 20 07:14:16 crc kubenswrapper[5136]: I0320 07:14:16.917919 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.91789646 podStartE2EDuration="2.91789646s" podCreationTimestamp="2026-03-20 07:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:14:16.916238958 +0000 UTC m=+1489.175550119" watchObservedRunningTime="2026-03-20 07:14:16.91789646 +0000 UTC m=+1489.177207631"
Mar 20 07:14:17 crc kubenswrapper[5136]: I0320 07:14:17.802220 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Mar 20 07:14:18 crc kubenswrapper[5136]: E0320 07:14:18.016507 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5cee24c274fa82fbc9a248515f82aa7aaf0c85708b3024d4d2662c475b5bc62b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 20 07:14:18 crc kubenswrapper[5136]: E0320 07:14:18.017779 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5cee24c274fa82fbc9a248515f82aa7aaf0c85708b3024d4d2662c475b5bc62b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 20 07:14:18 crc kubenswrapper[5136]: E0320 07:14:18.018910 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5cee24c274fa82fbc9a248515f82aa7aaf0c85708b3024d4d2662c475b5bc62b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 20 07:14:18 crc kubenswrapper[5136]: E0320 07:14:18.018968 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="64c2e510-4c83-41bf-ae4a-e8cc1dc058f8" containerName="nova-scheduler-scheduler"
Mar 20 07:14:18 crc kubenswrapper[5136]: E0320 07:14:18.875626 5136 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64c2e510_4c83_41bf_ae4a_e8cc1dc058f8.slice/crio-5cee24c274fa82fbc9a248515f82aa7aaf0c85708b3024d4d2662c475b5bc62b.scope\": RecentStats: unable to find data in memory cache]"
Mar 20 07:14:18 crc kubenswrapper[5136]: I0320 07:14:18.915383 5136 generic.go:334] "Generic (PLEG): container finished" podID="64c2e510-4c83-41bf-ae4a-e8cc1dc058f8" containerID="5cee24c274fa82fbc9a248515f82aa7aaf0c85708b3024d4d2662c475b5bc62b" exitCode=0
Mar 20 07:14:18 crc kubenswrapper[5136]: I0320 07:14:18.915411 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8","Type":"ContainerDied","Data":"5cee24c274fa82fbc9a248515f82aa7aaf0c85708b3024d4d2662c475b5bc62b"}
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.192738 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.371246 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-combined-ca-bundle\") pod \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\" (UID: \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\") "
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.371360 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-config-data\") pod \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\" (UID: \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\") "
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.371401 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpwth\" (UniqueName: \"kubernetes.io/projected/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-kube-api-access-xpwth\") pod \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\" (UID: \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\") "
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.389142 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-kube-api-access-xpwth" (OuterVolumeSpecName: "kube-api-access-xpwth") pod "64c2e510-4c83-41bf-ae4a-e8cc1dc058f8" (UID: "64c2e510-4c83-41bf-ae4a-e8cc1dc058f8"). InnerVolumeSpecName "kube-api-access-xpwth". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.402961 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64c2e510-4c83-41bf-ae4a-e8cc1dc058f8" (UID: "64c2e510-4c83-41bf-ae4a-e8cc1dc058f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.409018 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-config-data" (OuterVolumeSpecName: "config-data") pod "64c2e510-4c83-41bf-ae4a-e8cc1dc058f8" (UID: "64c2e510-4c83-41bf-ae4a-e8cc1dc058f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.474152 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.474204 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.474226 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpwth\" (UniqueName: \"kubernetes.io/projected/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-kube-api-access-xpwth\") on node \"crc\" DevicePath \"\""
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.926107 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8","Type":"ContainerDied","Data":"0df1814e7262e95838c178b1e5b10663c640c5fde6adccd96881a36676604504"}
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.926361 5136 scope.go:117] "RemoveContainer" containerID="5cee24c274fa82fbc9a248515f82aa7aaf0c85708b3024d4d2662c475b5bc62b"
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.926159 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.932034 5136 generic.go:334] "Generic (PLEG): container finished" podID="4e08e031-e23d-44c4-bbb2-039769dc1e24" containerID="5b986819d9a90e1b85c9700743fca946f3bc072f13b32ee80b0ccb0986a0c382" exitCode=0
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.932069 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4e08e031-e23d-44c4-bbb2-039769dc1e24","Type":"ContainerDied","Data":"5b986819d9a90e1b85c9700743fca946f3bc072f13b32ee80b0ccb0986a0c382"}
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.932089 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4e08e031-e23d-44c4-bbb2-039769dc1e24","Type":"ContainerDied","Data":"a5ca31eef173d869233bc7c356cbb5c4a26b6f89cb34f063ea18470c4f07ae64"}
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.932099 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5ca31eef173d869233bc7c356cbb5c4a26b6f89cb34f063ea18470c4f07ae64"
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.933291 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.982359 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.002112 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.019349 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Mar 20 07:14:20 crc kubenswrapper[5136]: E0320 07:14:20.019883 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e08e031-e23d-44c4-bbb2-039769dc1e24" containerName="nova-api-api"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.019905 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e08e031-e23d-44c4-bbb2-039769dc1e24" containerName="nova-api-api"
Mar 20 07:14:20 crc kubenswrapper[5136]: E0320 07:14:20.019921 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c2e510-4c83-41bf-ae4a-e8cc1dc058f8" containerName="nova-scheduler-scheduler"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.019929 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c2e510-4c83-41bf-ae4a-e8cc1dc058f8" containerName="nova-scheduler-scheduler"
Mar 20 07:14:20 crc kubenswrapper[5136]: E0320 07:14:20.019967 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e08e031-e23d-44c4-bbb2-039769dc1e24" containerName="nova-api-log"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.019978 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e08e031-e23d-44c4-bbb2-039769dc1e24" containerName="nova-api-log"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.020186 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e08e031-e23d-44c4-bbb2-039769dc1e24" containerName="nova-api-log"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.020214 5136 memory_manager.go:354]
"RemoveStaleState removing state" podUID="4e08e031-e23d-44c4-bbb2-039769dc1e24" containerName="nova-api-api" Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.020243 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="64c2e510-4c83-41bf-ae4a-e8cc1dc058f8" containerName="nova-scheduler-scheduler" Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.021368 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.023672 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.039460 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.085387 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e08e031-e23d-44c4-bbb2-039769dc1e24-logs\") pod \"4e08e031-e23d-44c4-bbb2-039769dc1e24\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.085507 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e08e031-e23d-44c4-bbb2-039769dc1e24-combined-ca-bundle\") pod \"4e08e031-e23d-44c4-bbb2-039769dc1e24\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.085539 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6p42\" (UniqueName: \"kubernetes.io/projected/4e08e031-e23d-44c4-bbb2-039769dc1e24-kube-api-access-k6p42\") pod \"4e08e031-e23d-44c4-bbb2-039769dc1e24\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.085595 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e08e031-e23d-44c4-bbb2-039769dc1e24-config-data\") pod \"4e08e031-e23d-44c4-bbb2-039769dc1e24\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.086678 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e08e031-e23d-44c4-bbb2-039769dc1e24-logs" (OuterVolumeSpecName: "logs") pod "4e08e031-e23d-44c4-bbb2-039769dc1e24" (UID: "4e08e031-e23d-44c4-bbb2-039769dc1e24"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.102335 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e08e031-e23d-44c4-bbb2-039769dc1e24-kube-api-access-k6p42" (OuterVolumeSpecName: "kube-api-access-k6p42") pod "4e08e031-e23d-44c4-bbb2-039769dc1e24" (UID: "4e08e031-e23d-44c4-bbb2-039769dc1e24"). InnerVolumeSpecName "kube-api-access-k6p42". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.123213 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e08e031-e23d-44c4-bbb2-039769dc1e24-config-data" (OuterVolumeSpecName: "config-data") pod "4e08e031-e23d-44c4-bbb2-039769dc1e24" (UID: "4e08e031-e23d-44c4-bbb2-039769dc1e24"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.139971 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e08e031-e23d-44c4-bbb2-039769dc1e24-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e08e031-e23d-44c4-bbb2-039769dc1e24" (UID: "4e08e031-e23d-44c4-bbb2-039769dc1e24"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.188611 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a52f0c9-0dde-48d7-83a3-bb05b1217295-config-data\") pod \"nova-scheduler-0\" (UID: \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\") " pod="openstack/nova-scheduler-0" Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.188725 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shtw7\" (UniqueName: \"kubernetes.io/projected/6a52f0c9-0dde-48d7-83a3-bb05b1217295-kube-api-access-shtw7\") pod \"nova-scheduler-0\" (UID: \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\") " pod="openstack/nova-scheduler-0" Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.188808 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a52f0c9-0dde-48d7-83a3-bb05b1217295-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\") " pod="openstack/nova-scheduler-0" Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.188900 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e08e031-e23d-44c4-bbb2-039769dc1e24-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.188912 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6p42\" (UniqueName: \"kubernetes.io/projected/4e08e031-e23d-44c4-bbb2-039769dc1e24-kube-api-access-k6p42\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.188921 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e08e031-e23d-44c4-bbb2-039769dc1e24-config-data\") on 
node \"crc\" DevicePath \"\"" Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.188929 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e08e031-e23d-44c4-bbb2-039769dc1e24-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.290064 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a52f0c9-0dde-48d7-83a3-bb05b1217295-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\") " pod="openstack/nova-scheduler-0" Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.290758 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a52f0c9-0dde-48d7-83a3-bb05b1217295-config-data\") pod \"nova-scheduler-0\" (UID: \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\") " pod="openstack/nova-scheduler-0" Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.290789 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shtw7\" (UniqueName: \"kubernetes.io/projected/6a52f0c9-0dde-48d7-83a3-bb05b1217295-kube-api-access-shtw7\") pod \"nova-scheduler-0\" (UID: \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\") " pod="openstack/nova-scheduler-0" Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.294123 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a52f0c9-0dde-48d7-83a3-bb05b1217295-config-data\") pod \"nova-scheduler-0\" (UID: \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\") " pod="openstack/nova-scheduler-0" Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.294575 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a52f0c9-0dde-48d7-83a3-bb05b1217295-combined-ca-bundle\") pod 
\"nova-scheduler-0\" (UID: \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\") " pod="openstack/nova-scheduler-0" Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.307041 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shtw7\" (UniqueName: \"kubernetes.io/projected/6a52f0c9-0dde-48d7-83a3-bb05b1217295-kube-api-access-shtw7\") pod \"nova-scheduler-0\" (UID: \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\") " pod="openstack/nova-scheduler-0" Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.340435 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.406938 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64c2e510-4c83-41bf-ae4a-e8cc1dc058f8" path="/var/lib/kubelet/pods/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8/volumes" Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.819355 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.948172 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6a52f0c9-0dde-48d7-83a3-bb05b1217295","Type":"ContainerStarted","Data":"746593fd75baa00dce61da5dddc1f1d308565b21d07137d5d833134ea9410d34"} Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.948205 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.968639 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.975650 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.987146 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.988697 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.998160 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.020423 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.102715 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0496130-a6c4-42b7-8234-4df60e60ed59-logs\") pod \"nova-api-0\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " pod="openstack/nova-api-0" Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.102833 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh8c2\" (UniqueName: \"kubernetes.io/projected/d0496130-a6c4-42b7-8234-4df60e60ed59-kube-api-access-gh8c2\") pod \"nova-api-0\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " pod="openstack/nova-api-0" Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.102904 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0496130-a6c4-42b7-8234-4df60e60ed59-config-data\") pod \"nova-api-0\" (UID: 
\"d0496130-a6c4-42b7-8234-4df60e60ed59\") " pod="openstack/nova-api-0" Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.103012 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0496130-a6c4-42b7-8234-4df60e60ed59-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " pod="openstack/nova-api-0" Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.175774 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.175844 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.204498 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0496130-a6c4-42b7-8234-4df60e60ed59-logs\") pod \"nova-api-0\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " pod="openstack/nova-api-0" Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.204942 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0496130-a6c4-42b7-8234-4df60e60ed59-logs\") pod \"nova-api-0\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " pod="openstack/nova-api-0" Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.205080 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh8c2\" (UniqueName: \"kubernetes.io/projected/d0496130-a6c4-42b7-8234-4df60e60ed59-kube-api-access-gh8c2\") pod \"nova-api-0\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " pod="openstack/nova-api-0" Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.205530 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d0496130-a6c4-42b7-8234-4df60e60ed59-config-data\") pod \"nova-api-0\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " pod="openstack/nova-api-0" Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.206247 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0496130-a6c4-42b7-8234-4df60e60ed59-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " pod="openstack/nova-api-0" Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.210041 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0496130-a6c4-42b7-8234-4df60e60ed59-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " pod="openstack/nova-api-0" Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.210131 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0496130-a6c4-42b7-8234-4df60e60ed59-config-data\") pod \"nova-api-0\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " pod="openstack/nova-api-0" Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.226733 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh8c2\" (UniqueName: \"kubernetes.io/projected/d0496130-a6c4-42b7-8234-4df60e60ed59-kube-api-access-gh8c2\") pod \"nova-api-0\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " pod="openstack/nova-api-0" Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.318064 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.560466 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.561087 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="cf624d46-ce35-4e7f-b463-4b0eba006ded" containerName="kube-state-metrics" containerID="cri-o://0a42176f6839fd2b1fa46f8a90c2d73b4c4eaa11385cb9c81bf9e24e01ecf323" gracePeriod=30 Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.815165 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.972421 5136 generic.go:334] "Generic (PLEG): container finished" podID="cf624d46-ce35-4e7f-b463-4b0eba006ded" containerID="0a42176f6839fd2b1fa46f8a90c2d73b4c4eaa11385cb9c81bf9e24e01ecf323" exitCode=2 Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.972479 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cf624d46-ce35-4e7f-b463-4b0eba006ded","Type":"ContainerDied","Data":"0a42176f6839fd2b1fa46f8a90c2d73b4c4eaa11385cb9c81bf9e24e01ecf323"} Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.972847 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cf624d46-ce35-4e7f-b463-4b0eba006ded","Type":"ContainerDied","Data":"344f02ef25b9534aca8bfe5e6329564a1246ae9de2437ea4000edf51f797f27a"} Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.972869 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="344f02ef25b9534aca8bfe5e6329564a1246ae9de2437ea4000edf51f797f27a" Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.976645 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"6a52f0c9-0dde-48d7-83a3-bb05b1217295","Type":"ContainerStarted","Data":"2b776c776ff30149306adf0b0b9812edd3bba700f5823572b3aaed091e1cf528"} Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.991536 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0496130-a6c4-42b7-8234-4df60e60ed59","Type":"ContainerStarted","Data":"3ee4cb0a8e431ba536bfd981a33fd323677b0e4a660774422b45fc0cf650bc2a"} Mar 20 07:14:22 crc kubenswrapper[5136]: I0320 07:14:22.012248 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.012228916 podStartE2EDuration="3.012228916s" podCreationTimestamp="2026-03-20 07:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:14:22.000406047 +0000 UTC m=+1494.259717218" watchObservedRunningTime="2026-03-20 07:14:22.012228916 +0000 UTC m=+1494.271540067" Mar 20 07:14:22 crc kubenswrapper[5136]: I0320 07:14:22.020809 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 07:14:22 crc kubenswrapper[5136]: I0320 07:14:22.120873 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-478m9\" (UniqueName: \"kubernetes.io/projected/cf624d46-ce35-4e7f-b463-4b0eba006ded-kube-api-access-478m9\") pod \"cf624d46-ce35-4e7f-b463-4b0eba006ded\" (UID: \"cf624d46-ce35-4e7f-b463-4b0eba006ded\") " Mar 20 07:14:22 crc kubenswrapper[5136]: I0320 07:14:22.134939 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf624d46-ce35-4e7f-b463-4b0eba006ded-kube-api-access-478m9" (OuterVolumeSpecName: "kube-api-access-478m9") pod "cf624d46-ce35-4e7f-b463-4b0eba006ded" (UID: "cf624d46-ce35-4e7f-b463-4b0eba006ded"). InnerVolumeSpecName "kube-api-access-478m9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:14:22 crc kubenswrapper[5136]: I0320 07:14:22.225740 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-478m9\" (UniqueName: \"kubernetes.io/projected/cf624d46-ce35-4e7f-b463-4b0eba006ded-kube-api-access-478m9\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:22 crc kubenswrapper[5136]: I0320 07:14:22.407104 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e08e031-e23d-44c4-bbb2-039769dc1e24" path="/var/lib/kubelet/pods/4e08e031-e23d-44c4-bbb2-039769dc1e24/volumes" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.002128 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0496130-a6c4-42b7-8234-4df60e60ed59","Type":"ContainerStarted","Data":"bd3c881214627c8c290b619f9198c192246477eeb7aa353a4af430f80586ea8b"} Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.002187 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0496130-a6c4-42b7-8234-4df60e60ed59","Type":"ContainerStarted","Data":"e73a3b5f003e5315b12f0018aa1a4f8bd486cbfc3163644d2cc863e537d66077"} Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.002287 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.027046 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.027030786 podStartE2EDuration="3.027030786s" podCreationTimestamp="2026-03-20 07:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:14:23.017637034 +0000 UTC m=+1495.276948205" watchObservedRunningTime="2026-03-20 07:14:23.027030786 +0000 UTC m=+1495.286341937" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.040908 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.056875 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.065540 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 07:14:23 crc kubenswrapper[5136]: E0320 07:14:23.066004 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf624d46-ce35-4e7f-b463-4b0eba006ded" containerName="kube-state-metrics" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.066022 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf624d46-ce35-4e7f-b463-4b0eba006ded" containerName="kube-state-metrics" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.066236 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf624d46-ce35-4e7f-b463-4b0eba006ded" containerName="kube-state-metrics" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.067002 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.069439 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.070924 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.075731 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.240895 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf4lm\" (UniqueName: \"kubernetes.io/projected/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-api-access-sf4lm\") pod \"kube-state-metrics-0\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " pod="openstack/kube-state-metrics-0" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.240970 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " pod="openstack/kube-state-metrics-0" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.241008 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " pod="openstack/kube-state-metrics-0" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.241066 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " pod="openstack/kube-state-metrics-0" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.246601 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.246915 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="ceilometer-central-agent" containerID="cri-o://5d01a5abb152de62e0d8931c67065913266b5a9c778d5615edb02e3a5ab4b96b" gracePeriod=30 Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.247046 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="ceilometer-notification-agent" containerID="cri-o://164a94fdc0690960438259e63c68f3f86049df33d0547a7f8b5e7c0c4d905fc0" gracePeriod=30 Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.247120 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="proxy-httpd" containerID="cri-o://b062329702f586d3a9a766c56f5cd3aa2ccaa12494e354f1edbba28dcc296614" gracePeriod=30 Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.247152 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="sg-core" containerID="cri-o://066edc85102c6a313bf602662dbadf893aa0185134d0512458d22c082f612cff" gracePeriod=30 Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.342938 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " pod="openstack/kube-state-metrics-0" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.343034 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " pod="openstack/kube-state-metrics-0" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.343124 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " pod="openstack/kube-state-metrics-0" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.343208 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf4lm\" (UniqueName: \"kubernetes.io/projected/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-api-access-sf4lm\") pod \"kube-state-metrics-0\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " pod="openstack/kube-state-metrics-0" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.347678 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " pod="openstack/kube-state-metrics-0" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.348313 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " pod="openstack/kube-state-metrics-0" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.360533 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " pod="openstack/kube-state-metrics-0" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.361068 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf4lm\" (UniqueName: \"kubernetes.io/projected/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-api-access-sf4lm\") pod \"kube-state-metrics-0\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " pod="openstack/kube-state-metrics-0" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.385956 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.862834 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 07:14:24 crc kubenswrapper[5136]: I0320 07:14:24.015452 5136 generic.go:334] "Generic (PLEG): container finished" podID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerID="b062329702f586d3a9a766c56f5cd3aa2ccaa12494e354f1edbba28dcc296614" exitCode=0 Mar 20 07:14:24 crc kubenswrapper[5136]: I0320 07:14:24.015486 5136 generic.go:334] "Generic (PLEG): container finished" podID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerID="066edc85102c6a313bf602662dbadf893aa0185134d0512458d22c082f612cff" exitCode=2 Mar 20 07:14:24 crc kubenswrapper[5136]: I0320 07:14:24.015499 5136 generic.go:334] "Generic (PLEG): container finished" podID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerID="5d01a5abb152de62e0d8931c67065913266b5a9c778d5615edb02e3a5ab4b96b" exitCode=0 Mar 20 07:14:24 crc kubenswrapper[5136]: I0320 07:14:24.015522 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b351b6a-5365-40cf-9d42-c6d4df7cc48b","Type":"ContainerDied","Data":"b062329702f586d3a9a766c56f5cd3aa2ccaa12494e354f1edbba28dcc296614"} Mar 20 07:14:24 crc kubenswrapper[5136]: I0320 07:14:24.015566 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b351b6a-5365-40cf-9d42-c6d4df7cc48b","Type":"ContainerDied","Data":"066edc85102c6a313bf602662dbadf893aa0185134d0512458d22c082f612cff"} Mar 20 07:14:24 crc kubenswrapper[5136]: I0320 07:14:24.015580 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b351b6a-5365-40cf-9d42-c6d4df7cc48b","Type":"ContainerDied","Data":"5d01a5abb152de62e0d8931c67065913266b5a9c778d5615edb02e3a5ab4b96b"} Mar 20 07:14:24 crc kubenswrapper[5136]: I0320 07:14:24.017421 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"c17493c5-d958-46ab-8e02-d190b2fa6944","Type":"ContainerStarted","Data":"634be8a4401daf1087438b6bbc45263ddd43a3a043a4d0fdc9026fb30fdc45cf"} Mar 20 07:14:24 crc kubenswrapper[5136]: I0320 07:14:24.410470 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf624d46-ce35-4e7f-b463-4b0eba006ded" path="/var/lib/kubelet/pods/cf624d46-ce35-4e7f-b463-4b0eba006ded/volumes" Mar 20 07:14:25 crc kubenswrapper[5136]: I0320 07:14:25.333058 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Mar 20 07:14:25 crc kubenswrapper[5136]: I0320 07:14:25.341978 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 20 07:14:26 crc kubenswrapper[5136]: I0320 07:14:26.039370 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c17493c5-d958-46ab-8e02-d190b2fa6944","Type":"ContainerStarted","Data":"4af2afde3b60e503cf744acf4fb08477b7ec46cb1b30cfb589608690a2df8849"} Mar 20 07:14:26 crc kubenswrapper[5136]: I0320 07:14:26.039918 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 20 07:14:26 crc kubenswrapper[5136]: I0320 07:14:26.058099 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.719506257 podStartE2EDuration="3.058073043s" podCreationTimestamp="2026-03-20 07:14:23 +0000 UTC" firstStartedPulling="2026-03-20 07:14:23.867280945 +0000 UTC m=+1496.126592096" lastFinishedPulling="2026-03-20 07:14:25.205847731 +0000 UTC m=+1497.465158882" observedRunningTime="2026-03-20 07:14:26.05639981 +0000 UTC m=+1498.315710971" watchObservedRunningTime="2026-03-20 07:14:26.058073043 +0000 UTC m=+1498.317384224" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.629108 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.754315 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-config-data\") pod \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.754477 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-scripts\") pod \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.754495 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-run-httpd\") pod \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.754568 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59dlm\" (UniqueName: \"kubernetes.io/projected/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-kube-api-access-59dlm\") pod \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.754587 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-sg-core-conf-yaml\") pod \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.754605 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-log-httpd\") pod \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.754657 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-combined-ca-bundle\") pod \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.754973 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0b351b6a-5365-40cf-9d42-c6d4df7cc48b" (UID: "0b351b6a-5365-40cf-9d42-c6d4df7cc48b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.755142 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0b351b6a-5365-40cf-9d42-c6d4df7cc48b" (UID: "0b351b6a-5365-40cf-9d42-c6d4df7cc48b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.761031 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-kube-api-access-59dlm" (OuterVolumeSpecName: "kube-api-access-59dlm") pod "0b351b6a-5365-40cf-9d42-c6d4df7cc48b" (UID: "0b351b6a-5365-40cf-9d42-c6d4df7cc48b"). InnerVolumeSpecName "kube-api-access-59dlm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.775305 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-scripts" (OuterVolumeSpecName: "scripts") pod "0b351b6a-5365-40cf-9d42-c6d4df7cc48b" (UID: "0b351b6a-5365-40cf-9d42-c6d4df7cc48b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.789045 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0b351b6a-5365-40cf-9d42-c6d4df7cc48b" (UID: "0b351b6a-5365-40cf-9d42-c6d4df7cc48b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.851020 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b351b6a-5365-40cf-9d42-c6d4df7cc48b" (UID: "0b351b6a-5365-40cf-9d42-c6d4df7cc48b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.856357 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.856471 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.856533 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59dlm\" (UniqueName: \"kubernetes.io/projected/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-kube-api-access-59dlm\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.856598 5136 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.856660 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.856722 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.857672 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-config-data" (OuterVolumeSpecName: "config-data") pod "0b351b6a-5365-40cf-9d42-c6d4df7cc48b" (UID: "0b351b6a-5365-40cf-9d42-c6d4df7cc48b"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.958201 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.071473 5136 generic.go:334] "Generic (PLEG): container finished" podID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerID="164a94fdc0690960438259e63c68f3f86049df33d0547a7f8b5e7c0c4d905fc0" exitCode=0 Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.071544 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b351b6a-5365-40cf-9d42-c6d4df7cc48b","Type":"ContainerDied","Data":"164a94fdc0690960438259e63c68f3f86049df33d0547a7f8b5e7c0c4d905fc0"} Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.072124 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b351b6a-5365-40cf-9d42-c6d4df7cc48b","Type":"ContainerDied","Data":"27ead29dd635d8c64178622abcb2bb11b72e1cbf565e64de2bb239a0882424b4"} Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.071618 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.072173 5136 scope.go:117] "RemoveContainer" containerID="b062329702f586d3a9a766c56f5cd3aa2ccaa12494e354f1edbba28dcc296614" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.102775 5136 scope.go:117] "RemoveContainer" containerID="066edc85102c6a313bf602662dbadf893aa0185134d0512458d22c082f612cff" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.113558 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.131952 5136 scope.go:117] "RemoveContainer" containerID="164a94fdc0690960438259e63c68f3f86049df33d0547a7f8b5e7c0c4d905fc0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.146310 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.155074 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:14:29 crc kubenswrapper[5136]: E0320 07:14:29.155721 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="proxy-httpd" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.155837 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="proxy-httpd" Mar 20 07:14:29 crc kubenswrapper[5136]: E0320 07:14:29.155966 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="ceilometer-central-agent" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.156147 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="ceilometer-central-agent" Mar 20 07:14:29 crc kubenswrapper[5136]: E0320 07:14:29.156250 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="ceilometer-notification-agent" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.156325 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="ceilometer-notification-agent" Mar 20 07:14:29 crc kubenswrapper[5136]: E0320 07:14:29.156415 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="sg-core" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.156510 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="sg-core" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.156902 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="sg-core" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.157046 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="ceilometer-notification-agent" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.157130 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="proxy-httpd" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.157214 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="ceilometer-central-agent" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.156283 5136 scope.go:117] "RemoveContainer" containerID="5d01a5abb152de62e0d8931c67065913266b5a9c778d5615edb02e3a5ab4b96b" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.159535 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.162210 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.162607 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.162728 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.164250 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.187881 5136 scope.go:117] "RemoveContainer" containerID="b062329702f586d3a9a766c56f5cd3aa2ccaa12494e354f1edbba28dcc296614" Mar 20 07:14:29 crc kubenswrapper[5136]: E0320 07:14:29.188376 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b062329702f586d3a9a766c56f5cd3aa2ccaa12494e354f1edbba28dcc296614\": container with ID starting with b062329702f586d3a9a766c56f5cd3aa2ccaa12494e354f1edbba28dcc296614 not found: ID does not exist" containerID="b062329702f586d3a9a766c56f5cd3aa2ccaa12494e354f1edbba28dcc296614" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.188418 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b062329702f586d3a9a766c56f5cd3aa2ccaa12494e354f1edbba28dcc296614"} err="failed to get container status \"b062329702f586d3a9a766c56f5cd3aa2ccaa12494e354f1edbba28dcc296614\": rpc error: code = NotFound desc = could not find container \"b062329702f586d3a9a766c56f5cd3aa2ccaa12494e354f1edbba28dcc296614\": container with ID starting with b062329702f586d3a9a766c56f5cd3aa2ccaa12494e354f1edbba28dcc296614 not found: ID does not exist" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 
07:14:29.188446 5136 scope.go:117] "RemoveContainer" containerID="066edc85102c6a313bf602662dbadf893aa0185134d0512458d22c082f612cff" Mar 20 07:14:29 crc kubenswrapper[5136]: E0320 07:14:29.188876 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"066edc85102c6a313bf602662dbadf893aa0185134d0512458d22c082f612cff\": container with ID starting with 066edc85102c6a313bf602662dbadf893aa0185134d0512458d22c082f612cff not found: ID does not exist" containerID="066edc85102c6a313bf602662dbadf893aa0185134d0512458d22c082f612cff" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.188949 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"066edc85102c6a313bf602662dbadf893aa0185134d0512458d22c082f612cff"} err="failed to get container status \"066edc85102c6a313bf602662dbadf893aa0185134d0512458d22c082f612cff\": rpc error: code = NotFound desc = could not find container \"066edc85102c6a313bf602662dbadf893aa0185134d0512458d22c082f612cff\": container with ID starting with 066edc85102c6a313bf602662dbadf893aa0185134d0512458d22c082f612cff not found: ID does not exist" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.188977 5136 scope.go:117] "RemoveContainer" containerID="164a94fdc0690960438259e63c68f3f86049df33d0547a7f8b5e7c0c4d905fc0" Mar 20 07:14:29 crc kubenswrapper[5136]: E0320 07:14:29.189302 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"164a94fdc0690960438259e63c68f3f86049df33d0547a7f8b5e7c0c4d905fc0\": container with ID starting with 164a94fdc0690960438259e63c68f3f86049df33d0547a7f8b5e7c0c4d905fc0 not found: ID does not exist" containerID="164a94fdc0690960438259e63c68f3f86049df33d0547a7f8b5e7c0c4d905fc0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.189432 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"164a94fdc0690960438259e63c68f3f86049df33d0547a7f8b5e7c0c4d905fc0"} err="failed to get container status \"164a94fdc0690960438259e63c68f3f86049df33d0547a7f8b5e7c0c4d905fc0\": rpc error: code = NotFound desc = could not find container \"164a94fdc0690960438259e63c68f3f86049df33d0547a7f8b5e7c0c4d905fc0\": container with ID starting with 164a94fdc0690960438259e63c68f3f86049df33d0547a7f8b5e7c0c4d905fc0 not found: ID does not exist" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.189637 5136 scope.go:117] "RemoveContainer" containerID="5d01a5abb152de62e0d8931c67065913266b5a9c778d5615edb02e3a5ab4b96b" Mar 20 07:14:29 crc kubenswrapper[5136]: E0320 07:14:29.190039 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d01a5abb152de62e0d8931c67065913266b5a9c778d5615edb02e3a5ab4b96b\": container with ID starting with 5d01a5abb152de62e0d8931c67065913266b5a9c778d5615edb02e3a5ab4b96b not found: ID does not exist" containerID="5d01a5abb152de62e0d8931c67065913266b5a9c778d5615edb02e3a5ab4b96b" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.190071 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d01a5abb152de62e0d8931c67065913266b5a9c778d5615edb02e3a5ab4b96b"} err="failed to get container status \"5d01a5abb152de62e0d8931c67065913266b5a9c778d5615edb02e3a5ab4b96b\": rpc error: code = NotFound desc = could not find container \"5d01a5abb152de62e0d8931c67065913266b5a9c778d5615edb02e3a5ab4b96b\": container with ID starting with 5d01a5abb152de62e0d8931c67065913266b5a9c778d5615edb02e3a5ab4b96b not found: ID does not exist" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.263912 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-run-httpd\") pod \"ceilometer-0\" (UID: 
\"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.264229 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sz9j\" (UniqueName: \"kubernetes.io/projected/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-kube-api-access-4sz9j\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.264375 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.264504 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-config-data\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.264606 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-scripts\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.264775 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-log-httpd\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.264883 
5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.264983 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.367704 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sz9j\" (UniqueName: \"kubernetes.io/projected/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-kube-api-access-4sz9j\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.367771 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.367839 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-config-data\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.367890 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-scripts\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.368916 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-log-httpd\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.368976 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.369027 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.369111 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-run-httpd\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.369685 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-run-httpd\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.370401 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-log-httpd\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.372020 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-config-data\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.374336 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.374527 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.375034 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.383463 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-scripts\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " 
pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.387301 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sz9j\" (UniqueName: \"kubernetes.io/projected/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-kube-api-access-4sz9j\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.480539 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.916857 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:14:30 crc kubenswrapper[5136]: I0320 07:14:30.082671 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96b99c4f-60cb-49ec-a2ba-85c6be21bc19","Type":"ContainerStarted","Data":"c7959dfd3e0a4c0a66b004e981792b62f5688704717664c451678069db344ce1"} Mar 20 07:14:30 crc kubenswrapper[5136]: I0320 07:14:30.130165 5136 scope.go:117] "RemoveContainer" containerID="dd0acbcfa54abd26f2307cdf5e361341926b1d9e084af4898d114e06b8c54d72" Mar 20 07:14:30 crc kubenswrapper[5136]: I0320 07:14:30.341517 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 20 07:14:30 crc kubenswrapper[5136]: I0320 07:14:30.367151 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 20 07:14:30 crc kubenswrapper[5136]: I0320 07:14:30.417649 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" path="/var/lib/kubelet/pods/0b351b6a-5365-40cf-9d42-c6d4df7cc48b/volumes" Mar 20 07:14:31 crc kubenswrapper[5136]: I0320 07:14:31.095403 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"96b99c4f-60cb-49ec-a2ba-85c6be21bc19","Type":"ContainerStarted","Data":"6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104"} Mar 20 07:14:31 crc kubenswrapper[5136]: I0320 07:14:31.125967 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 20 07:14:31 crc kubenswrapper[5136]: I0320 07:14:31.319472 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 07:14:31 crc kubenswrapper[5136]: I0320 07:14:31.319515 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 07:14:32 crc kubenswrapper[5136]: I0320 07:14:32.105996 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96b99c4f-60cb-49ec-a2ba-85c6be21bc19","Type":"ContainerStarted","Data":"21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b"} Mar 20 07:14:32 crc kubenswrapper[5136]: I0320 07:14:32.402004 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d0496130-a6c4-42b7-8234-4df60e60ed59" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 07:14:32 crc kubenswrapper[5136]: I0320 07:14:32.402054 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d0496130-a6c4-42b7-8234-4df60e60ed59" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 07:14:33 crc kubenswrapper[5136]: I0320 07:14:33.116768 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96b99c4f-60cb-49ec-a2ba-85c6be21bc19","Type":"ContainerStarted","Data":"870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7"} Mar 20 07:14:33 crc 
kubenswrapper[5136]: I0320 07:14:33.401773 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 20 07:14:35 crc kubenswrapper[5136]: I0320 07:14:35.136398 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96b99c4f-60cb-49ec-a2ba-85c6be21bc19","Type":"ContainerStarted","Data":"fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959"} Mar 20 07:14:35 crc kubenswrapper[5136]: I0320 07:14:35.137199 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 20 07:14:38 crc kubenswrapper[5136]: E0320 07:14:38.022561 5136 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0898ed98_4947_4790_9e86_f022b20bc330.slice/crio-f7fabe81352183708a864623d8602cc2d8ea0c1282e771482eaebc4b633a5e02.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0898ed98_4947_4790_9e86_f022b20bc330.slice/crio-conmon-f7fabe81352183708a864623d8602cc2d8ea0c1282e771482eaebc4b633a5e02.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8e70074_47b9_45a2_8dce_52b29305cdf4.slice/crio-conmon-043b1ee99aab1c35a728494e20fbe3b332fbc6142919d4cfb70bd4dd6499e926.scope\": RecentStats: unable to find data in memory cache]" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.183618 5136 generic.go:334] "Generic (PLEG): container finished" podID="0898ed98-4947-4790-9e86-f022b20bc330" containerID="f7fabe81352183708a864623d8602cc2d8ea0c1282e771482eaebc4b633a5e02" exitCode=137 Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.183965 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"0898ed98-4947-4790-9e86-f022b20bc330","Type":"ContainerDied","Data":"f7fabe81352183708a864623d8602cc2d8ea0c1282e771482eaebc4b633a5e02"} Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.183990 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0898ed98-4947-4790-9e86-f022b20bc330","Type":"ContainerDied","Data":"20bf0d47cfdd7ced2b7aa5c95e4f2ffab3ec012eb224a73fbcf3f8530975a2a3"} Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.184000 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20bf0d47cfdd7ced2b7aa5c95e4f2ffab3ec012eb224a73fbcf3f8530975a2a3" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.185357 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.376979876 podStartE2EDuration="9.185345854s" podCreationTimestamp="2026-03-20 07:14:29 +0000 UTC" firstStartedPulling="2026-03-20 07:14:29.926213099 +0000 UTC m=+1502.185524240" lastFinishedPulling="2026-03-20 07:14:34.734579057 +0000 UTC m=+1506.993890218" observedRunningTime="2026-03-20 07:14:35.157754595 +0000 UTC m=+1507.417065756" watchObservedRunningTime="2026-03-20 07:14:38.185345854 +0000 UTC m=+1510.444657005" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.185901 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pfj4j"] Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.196871 5136 generic.go:334] "Generic (PLEG): container finished" podID="f8e70074-47b9-45a2-8dce-52b29305cdf4" containerID="043b1ee99aab1c35a728494e20fbe3b332fbc6142919d4cfb70bd4dd6499e926" exitCode=137 Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.196907 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"f8e70074-47b9-45a2-8dce-52b29305cdf4","Type":"ContainerDied","Data":"043b1ee99aab1c35a728494e20fbe3b332fbc6142919d4cfb70bd4dd6499e926"} Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.198046 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pfj4j"] Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.187713 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.253241 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.338699 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0898ed98-4947-4790-9e86-f022b20bc330-logs\") pod \"0898ed98-4947-4790-9e86-f022b20bc330\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.338895 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0898ed98-4947-4790-9e86-f022b20bc330-combined-ca-bundle\") pod \"0898ed98-4947-4790-9e86-f022b20bc330\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.338967 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqkzt\" (UniqueName: \"kubernetes.io/projected/0898ed98-4947-4790-9e86-f022b20bc330-kube-api-access-gqkzt\") pod \"0898ed98-4947-4790-9e86-f022b20bc330\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.338996 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0898ed98-4947-4790-9e86-f022b20bc330-config-data\") 
pod \"0898ed98-4947-4790-9e86-f022b20bc330\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.339289 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-utilities\") pod \"redhat-operators-pfj4j\" (UID: \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\") " pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.339314 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0898ed98-4947-4790-9e86-f022b20bc330-logs" (OuterVolumeSpecName: "logs") pod "0898ed98-4947-4790-9e86-f022b20bc330" (UID: "0898ed98-4947-4790-9e86-f022b20bc330"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.339537 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-catalog-content\") pod \"redhat-operators-pfj4j\" (UID: \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\") " pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.339845 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5pm5\" (UniqueName: \"kubernetes.io/projected/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-kube-api-access-t5pm5\") pod \"redhat-operators-pfj4j\" (UID: \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\") " pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.339920 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0898ed98-4947-4790-9e86-f022b20bc330-logs\") on node \"crc\" DevicePath \"\"" Mar 20 
07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.344773 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0898ed98-4947-4790-9e86-f022b20bc330-kube-api-access-gqkzt" (OuterVolumeSpecName: "kube-api-access-gqkzt") pod "0898ed98-4947-4790-9e86-f022b20bc330" (UID: "0898ed98-4947-4790-9e86-f022b20bc330"). InnerVolumeSpecName "kube-api-access-gqkzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.349380 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.379784 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0898ed98-4947-4790-9e86-f022b20bc330-config-data" (OuterVolumeSpecName: "config-data") pod "0898ed98-4947-4790-9e86-f022b20bc330" (UID: "0898ed98-4947-4790-9e86-f022b20bc330"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.399109 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0898ed98-4947-4790-9e86-f022b20bc330-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0898ed98-4947-4790-9e86-f022b20bc330" (UID: "0898ed98-4947-4790-9e86-f022b20bc330"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.441098 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e70074-47b9-45a2-8dce-52b29305cdf4-config-data\") pod \"f8e70074-47b9-45a2-8dce-52b29305cdf4\" (UID: \"f8e70074-47b9-45a2-8dce-52b29305cdf4\") " Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.441251 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c68nt\" (UniqueName: \"kubernetes.io/projected/f8e70074-47b9-45a2-8dce-52b29305cdf4-kube-api-access-c68nt\") pod \"f8e70074-47b9-45a2-8dce-52b29305cdf4\" (UID: \"f8e70074-47b9-45a2-8dce-52b29305cdf4\") " Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.441415 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e70074-47b9-45a2-8dce-52b29305cdf4-combined-ca-bundle\") pod \"f8e70074-47b9-45a2-8dce-52b29305cdf4\" (UID: \"f8e70074-47b9-45a2-8dce-52b29305cdf4\") " Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.441661 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-catalog-content\") pod \"redhat-operators-pfj4j\" (UID: \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\") " pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.441787 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5pm5\" (UniqueName: \"kubernetes.io/projected/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-kube-api-access-t5pm5\") pod \"redhat-operators-pfj4j\" (UID: \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\") " pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.441863 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-utilities\") pod \"redhat-operators-pfj4j\" (UID: \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\") " pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.441977 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0898ed98-4947-4790-9e86-f022b20bc330-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.441998 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqkzt\" (UniqueName: \"kubernetes.io/projected/0898ed98-4947-4790-9e86-f022b20bc330-kube-api-access-gqkzt\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.442012 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0898ed98-4947-4790-9e86-f022b20bc330-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.442485 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-utilities\") pod \"redhat-operators-pfj4j\" (UID: \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\") " pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.442993 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-catalog-content\") pod \"redhat-operators-pfj4j\" (UID: \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\") " pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.457627 5136 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/f8e70074-47b9-45a2-8dce-52b29305cdf4-kube-api-access-c68nt" (OuterVolumeSpecName: "kube-api-access-c68nt") pod "f8e70074-47b9-45a2-8dce-52b29305cdf4" (UID: "f8e70074-47b9-45a2-8dce-52b29305cdf4"). InnerVolumeSpecName "kube-api-access-c68nt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.460742 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5pm5\" (UniqueName: \"kubernetes.io/projected/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-kube-api-access-t5pm5\") pod \"redhat-operators-pfj4j\" (UID: \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\") " pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.466993 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8e70074-47b9-45a2-8dce-52b29305cdf4-config-data" (OuterVolumeSpecName: "config-data") pod "f8e70074-47b9-45a2-8dce-52b29305cdf4" (UID: "f8e70074-47b9-45a2-8dce-52b29305cdf4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.470913 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8e70074-47b9-45a2-8dce-52b29305cdf4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8e70074-47b9-45a2-8dce-52b29305cdf4" (UID: "f8e70074-47b9-45a2-8dce-52b29305cdf4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.544194 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c68nt\" (UniqueName: \"kubernetes.io/projected/f8e70074-47b9-45a2-8dce-52b29305cdf4-kube-api-access-c68nt\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.544270 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e70074-47b9-45a2-8dce-52b29305cdf4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.544283 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e70074-47b9-45a2-8dce-52b29305cdf4-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.560549 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.045340 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pfj4j"] Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.208667 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfj4j" event={"ID":"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7","Type":"ContainerStarted","Data":"14fe667f3db588129ef772a0e1c0daeddde9622aa1ffb9941f75f06fb9c1984f"} Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.212702 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.212785 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f8e70074-47b9-45a2-8dce-52b29305cdf4","Type":"ContainerDied","Data":"01468c63a42bfe893531410c532c091f281c2446cd45d74c41e17c5a31b66895"} Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.212853 5136 scope.go:117] "RemoveContainer" containerID="043b1ee99aab1c35a728494e20fbe3b332fbc6142919d4cfb70bd4dd6499e926" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.213358 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.245384 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.268545 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.291598 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:14:39 crc kubenswrapper[5136]: E0320 07:14:39.292054 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0898ed98-4947-4790-9e86-f022b20bc330" containerName="nova-metadata-metadata" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.292071 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0898ed98-4947-4790-9e86-f022b20bc330" containerName="nova-metadata-metadata" Mar 20 07:14:39 crc kubenswrapper[5136]: E0320 07:14:39.292090 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0898ed98-4947-4790-9e86-f022b20bc330" containerName="nova-metadata-log" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.292096 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0898ed98-4947-4790-9e86-f022b20bc330" containerName="nova-metadata-log" Mar 20 
07:14:39 crc kubenswrapper[5136]: E0320 07:14:39.292128 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8e70074-47b9-45a2-8dce-52b29305cdf4" containerName="nova-cell1-novncproxy-novncproxy" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.292134 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8e70074-47b9-45a2-8dce-52b29305cdf4" containerName="nova-cell1-novncproxy-novncproxy" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.292290 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0898ed98-4947-4790-9e86-f022b20bc330" containerName="nova-metadata-log" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.292313 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8e70074-47b9-45a2-8dce-52b29305cdf4" containerName="nova-cell1-novncproxy-novncproxy" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.292324 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0898ed98-4947-4790-9e86-f022b20bc330" containerName="nova-metadata-metadata" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.293214 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.300969 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.301153 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.308136 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.319106 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.319377 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.319405 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.328073 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.344000 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.345044 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.351801 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.352047 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.352250 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.364106 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.364145 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.364177 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-config-data\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.364272 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwgkt\" (UniqueName: 
\"kubernetes.io/projected/4622969f-2f2e-42d7-81a6-bc6baa386aec-kube-api-access-lwgkt\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.364508 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4622969f-2f2e-42d7-81a6-bc6baa386aec-logs\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.365393 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.467084 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld2qt\" (UniqueName: \"kubernetes.io/projected/63ab8493-eb78-41d9-b368-bba74dc78166-kube-api-access-ld2qt\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.467167 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.467193 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.467234 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-config-data\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.467275 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.467354 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwgkt\" (UniqueName: \"kubernetes.io/projected/4622969f-2f2e-42d7-81a6-bc6baa386aec-kube-api-access-lwgkt\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.467437 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.467492 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.467585 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.467627 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4622969f-2f2e-42d7-81a6-bc6baa386aec-logs\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.468273 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4622969f-2f2e-42d7-81a6-bc6baa386aec-logs\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.473755 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-config-data\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.474161 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.483052 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwgkt\" (UniqueName: \"kubernetes.io/projected/4622969f-2f2e-42d7-81a6-bc6baa386aec-kube-api-access-lwgkt\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0" Mar 20 
07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.488840 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.569209 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld2qt\" (UniqueName: \"kubernetes.io/projected/63ab8493-eb78-41d9-b368-bba74dc78166-kube-api-access-ld2qt\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.569296 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.569372 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.569446 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.569498 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.573756 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.576270 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.576405 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.578237 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.593731 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld2qt\" (UniqueName: 
\"kubernetes.io/projected/63ab8493-eb78-41d9-b368-bba74dc78166-kube-api-access-ld2qt\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.643742 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.670139 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:40 crc kubenswrapper[5136]: I0320 07:14:40.167434 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:14:40 crc kubenswrapper[5136]: W0320 07:14:40.169312 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4622969f_2f2e_42d7_81a6_bc6baa386aec.slice/crio-c57256ebbbe3274d2806f8f41571ca74c969c50a2ce02b08771f1ab16c5bdd1d WatchSource:0}: Error finding container c57256ebbbe3274d2806f8f41571ca74c969c50a2ce02b08771f1ab16c5bdd1d: Status 404 returned error can't find the container with id c57256ebbbe3274d2806f8f41571ca74c969c50a2ce02b08771f1ab16c5bdd1d Mar 20 07:14:40 crc kubenswrapper[5136]: I0320 07:14:40.242341 5136 generic.go:334] "Generic (PLEG): container finished" podID="0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" containerID="05599c5c42c8e71a009284e5d500cfedf849e0b8ec367103e0a55059d9a7be17" exitCode=0 Mar 20 07:14:40 crc kubenswrapper[5136]: I0320 07:14:40.242966 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfj4j" event={"ID":"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7","Type":"ContainerDied","Data":"05599c5c42c8e71a009284e5d500cfedf849e0b8ec367103e0a55059d9a7be17"} Mar 20 07:14:40 crc kubenswrapper[5136]: I0320 07:14:40.248277 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"4622969f-2f2e-42d7-81a6-bc6baa386aec","Type":"ContainerStarted","Data":"c57256ebbbe3274d2806f8f41571ca74c969c50a2ce02b08771f1ab16c5bdd1d"} Mar 20 07:14:40 crc kubenswrapper[5136]: W0320 07:14:40.250353 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63ab8493_eb78_41d9_b368_bba74dc78166.slice/crio-2da23f701d5f2888a554a42a1e274ecc8ec591c846eec70993fd4a7736e9b816 WatchSource:0}: Error finding container 2da23f701d5f2888a554a42a1e274ecc8ec591c846eec70993fd4a7736e9b816: Status 404 returned error can't find the container with id 2da23f701d5f2888a554a42a1e274ecc8ec591c846eec70993fd4a7736e9b816 Mar 20 07:14:40 crc kubenswrapper[5136]: I0320 07:14:40.251628 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 07:14:40 crc kubenswrapper[5136]: I0320 07:14:40.408123 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0898ed98-4947-4790-9e86-f022b20bc330" path="/var/lib/kubelet/pods/0898ed98-4947-4790-9e86-f022b20bc330/volumes" Mar 20 07:14:40 crc kubenswrapper[5136]: I0320 07:14:40.409114 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8e70074-47b9-45a2-8dce-52b29305cdf4" path="/var/lib/kubelet/pods/f8e70074-47b9-45a2-8dce-52b29305cdf4/volumes" Mar 20 07:14:41 crc kubenswrapper[5136]: I0320 07:14:41.262232 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4622969f-2f2e-42d7-81a6-bc6baa386aec","Type":"ContainerStarted","Data":"5f8f148b9c0b744a015c9e2414e24715604e9a0f21fe5a12fd0d7f8b2489eeb8"} Mar 20 07:14:41 crc kubenswrapper[5136]: I0320 07:14:41.262559 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4622969f-2f2e-42d7-81a6-bc6baa386aec","Type":"ContainerStarted","Data":"9e93c2c0c889e2c58126cc5c6de4c2d17f754184e3d07db0df95a7f6d46049a8"} Mar 20 07:14:41 crc kubenswrapper[5136]: 
I0320 07:14:41.264269 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"63ab8493-eb78-41d9-b368-bba74dc78166","Type":"ContainerStarted","Data":"a4685f240b41a0ca80c93510a938eb41d236fed91b12082fd48b7f0a68f41629"} Mar 20 07:14:41 crc kubenswrapper[5136]: I0320 07:14:41.264306 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"63ab8493-eb78-41d9-b368-bba74dc78166","Type":"ContainerStarted","Data":"2da23f701d5f2888a554a42a1e274ecc8ec591c846eec70993fd4a7736e9b816"} Mar 20 07:14:41 crc kubenswrapper[5136]: I0320 07:14:41.290299 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.29027075 podStartE2EDuration="2.29027075s" podCreationTimestamp="2026-03-20 07:14:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:14:41.285160961 +0000 UTC m=+1513.544472162" watchObservedRunningTime="2026-03-20 07:14:41.29027075 +0000 UTC m=+1513.549581941" Mar 20 07:14:41 crc kubenswrapper[5136]: I0320 07:14:41.324058 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.32402538 podStartE2EDuration="2.32402538s" podCreationTimestamp="2026-03-20 07:14:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:14:41.312944646 +0000 UTC m=+1513.572255797" watchObservedRunningTime="2026-03-20 07:14:41.32402538 +0000 UTC m=+1513.583336561" Mar 20 07:14:41 crc kubenswrapper[5136]: I0320 07:14:41.327681 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 20 07:14:41 crc kubenswrapper[5136]: I0320 07:14:41.330741 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-api-0" Mar 20 07:14:41 crc kubenswrapper[5136]: I0320 07:14:41.334467 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.280110 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfj4j" event={"ID":"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7","Type":"ContainerStarted","Data":"3e9029c3cec1986cc74604ef2d696316b88fb9ed3d52bbb9a2a7d8c6e94535ab"} Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.290285 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.537431 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bd85b459c-k44rj"] Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.539122 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.549491 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bd85b459c-k44rj"] Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.668808 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-ovsdbserver-sb\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.668905 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-config\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" Mar 20 
07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.668945 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-dns-swift-storage-0\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.668983 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-dns-svc\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.669268 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-ovsdbserver-nb\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.669401 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bfpk\" (UniqueName: \"kubernetes.io/projected/ccc70cce-242d-4c99-8d3f-ddb541904e29-kube-api-access-5bfpk\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.771268 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bfpk\" (UniqueName: \"kubernetes.io/projected/ccc70cce-242d-4c99-8d3f-ddb541904e29-kube-api-access-5bfpk\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " 
pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.771347 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-ovsdbserver-sb\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.771385 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-config\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.771426 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-dns-swift-storage-0\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.771487 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-dns-svc\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.771540 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-ovsdbserver-nb\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" Mar 20 07:14:42 crc 
kubenswrapper[5136]: I0320 07:14:42.772347 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-ovsdbserver-sb\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.772424 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-config\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.772458 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-dns-svc\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.772537 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-ovsdbserver-nb\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.772587 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-dns-swift-storage-0\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.790611 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-5bfpk\" (UniqueName: \"kubernetes.io/projected/ccc70cce-242d-4c99-8d3f-ddb541904e29-kube-api-access-5bfpk\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.875060 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:43.289544 5136 generic.go:334] "Generic (PLEG): container finished" podID="0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" containerID="3e9029c3cec1986cc74604ef2d696316b88fb9ed3d52bbb9a2a7d8c6e94535ab" exitCode=0 Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:43.289636 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfj4j" event={"ID":"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7","Type":"ContainerDied","Data":"3e9029c3cec1986cc74604ef2d696316b88fb9ed3d52bbb9a2a7d8c6e94535ab"} Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:43.499577 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bd85b459c-k44rj"] Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:44.302076 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfj4j" event={"ID":"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7","Type":"ContainerStarted","Data":"ee39d4c269210fc3aa49da56c3480727ac137c9f6ab6ab45972dc59963ac8f14"} Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:44.304010 5136 generic.go:334] "Generic (PLEG): container finished" podID="ccc70cce-242d-4c99-8d3f-ddb541904e29" containerID="cc5a54a6935dd6e523205b586479d84179624ba24df417c663b90589e6d2673f" exitCode=0 Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:44.304058 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" 
event={"ID":"ccc70cce-242d-4c99-8d3f-ddb541904e29","Type":"ContainerDied","Data":"cc5a54a6935dd6e523205b586479d84179624ba24df417c663b90589e6d2673f"} Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:44.304103 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" event={"ID":"ccc70cce-242d-4c99-8d3f-ddb541904e29","Type":"ContainerStarted","Data":"b418e83480ddaf25a5b00d4752775ac00973d088cabf1d27a9b6eceb6bb0b062"} Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:44.341027 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pfj4j" podStartSLOduration=2.882553372 podStartE2EDuration="6.341005459s" podCreationTimestamp="2026-03-20 07:14:38 +0000 UTC" firstStartedPulling="2026-03-20 07:14:40.243516625 +0000 UTC m=+1512.502827776" lastFinishedPulling="2026-03-20 07:14:43.701968712 +0000 UTC m=+1515.961279863" observedRunningTime="2026-03-20 07:14:44.331951877 +0000 UTC m=+1516.591263028" watchObservedRunningTime="2026-03-20 07:14:44.341005459 +0000 UTC m=+1516.600316610" Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:44.613650 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:44.614226 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="ceilometer-central-agent" containerID="cri-o://6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104" gracePeriod=30 Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:44.614679 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="proxy-httpd" containerID="cri-o://fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959" gracePeriod=30 Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 
07:14:44.614748 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="sg-core" containerID="cri-o://870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7" gracePeriod=30 Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:44.614802 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="ceilometer-notification-agent" containerID="cri-o://21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b" gracePeriod=30 Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:44.670441 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:45 crc kubenswrapper[5136]: I0320 07:14:45.162560 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:14:45 crc kubenswrapper[5136]: I0320 07:14:45.313973 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" event={"ID":"ccc70cce-242d-4c99-8d3f-ddb541904e29","Type":"ContainerStarted","Data":"ccb4f9c0c6dc989c486c61d0a17af6a9e3438c25ae843380545c453141823051"} Mar 20 07:14:45 crc kubenswrapper[5136]: I0320 07:14:45.315135 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" Mar 20 07:14:45 crc kubenswrapper[5136]: I0320 07:14:45.317059 5136 generic.go:334] "Generic (PLEG): container finished" podID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerID="fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959" exitCode=0 Mar 20 07:14:45 crc kubenswrapper[5136]: I0320 07:14:45.317080 5136 generic.go:334] "Generic (PLEG): container finished" podID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerID="870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7" exitCode=2 Mar 20 07:14:45 crc 
kubenswrapper[5136]: I0320 07:14:45.317216 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d0496130-a6c4-42b7-8234-4df60e60ed59" containerName="nova-api-log" containerID="cri-o://e73a3b5f003e5315b12f0018aa1a4f8bd486cbfc3163644d2cc863e537d66077" gracePeriod=30 Mar 20 07:14:45 crc kubenswrapper[5136]: I0320 07:14:45.317297 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d0496130-a6c4-42b7-8234-4df60e60ed59" containerName="nova-api-api" containerID="cri-o://bd3c881214627c8c290b619f9198c192246477eeb7aa353a4af430f80586ea8b" gracePeriod=30 Mar 20 07:14:45 crc kubenswrapper[5136]: I0320 07:14:45.317371 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96b99c4f-60cb-49ec-a2ba-85c6be21bc19","Type":"ContainerDied","Data":"fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959"} Mar 20 07:14:45 crc kubenswrapper[5136]: I0320 07:14:45.317408 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96b99c4f-60cb-49ec-a2ba-85c6be21bc19","Type":"ContainerDied","Data":"870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7"} Mar 20 07:14:45 crc kubenswrapper[5136]: I0320 07:14:45.342712 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" podStartSLOduration=3.342692161 podStartE2EDuration="3.342692161s" podCreationTimestamp="2026-03-20 07:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:14:45.332604238 +0000 UTC m=+1517.591915389" watchObservedRunningTime="2026-03-20 07:14:45.342692161 +0000 UTC m=+1517.602003312" Mar 20 07:14:45 crc kubenswrapper[5136]: I0320 07:14:45.821970 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:14:45 crc kubenswrapper[5136]: I0320 07:14:45.822186 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.230964 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.327380 5136 generic.go:334] "Generic (PLEG): container finished" podID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerID="21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b" exitCode=0 Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.327408 5136 generic.go:334] "Generic (PLEG): container finished" podID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerID="6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104" exitCode=0 Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.327444 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96b99c4f-60cb-49ec-a2ba-85c6be21bc19","Type":"ContainerDied","Data":"21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b"} Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.327468 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96b99c4f-60cb-49ec-a2ba-85c6be21bc19","Type":"ContainerDied","Data":"6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104"} Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.327477 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"96b99c4f-60cb-49ec-a2ba-85c6be21bc19","Type":"ContainerDied","Data":"c7959dfd3e0a4c0a66b004e981792b62f5688704717664c451678069db344ce1"} Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.327491 5136 scope.go:117] "RemoveContainer" containerID="fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.327605 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.333314 5136 generic.go:334] "Generic (PLEG): container finished" podID="d0496130-a6c4-42b7-8234-4df60e60ed59" containerID="e73a3b5f003e5315b12f0018aa1a4f8bd486cbfc3163644d2cc863e537d66077" exitCode=143 Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.333396 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0496130-a6c4-42b7-8234-4df60e60ed59","Type":"ContainerDied","Data":"e73a3b5f003e5315b12f0018aa1a4f8bd486cbfc3163644d2cc863e537d66077"} Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.345180 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sz9j\" (UniqueName: \"kubernetes.io/projected/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-kube-api-access-4sz9j\") pod \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.345234 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-ceilometer-tls-certs\") pod \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.345314 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-run-httpd\") pod \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.345364 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-combined-ca-bundle\") pod \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.345428 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-log-httpd\") pod \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.345461 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-sg-core-conf-yaml\") pod \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.345479 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-scripts\") pod \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.345517 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-config-data\") pod \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.347281 5136 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "96b99c4f-60cb-49ec-a2ba-85c6be21bc19" (UID: "96b99c4f-60cb-49ec-a2ba-85c6be21bc19"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.347389 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "96b99c4f-60cb-49ec-a2ba-85c6be21bc19" (UID: "96b99c4f-60cb-49ec-a2ba-85c6be21bc19"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.347529 5136 scope.go:117] "RemoveContainer" containerID="870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.353501 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-kube-api-access-4sz9j" (OuterVolumeSpecName: "kube-api-access-4sz9j") pod "96b99c4f-60cb-49ec-a2ba-85c6be21bc19" (UID: "96b99c4f-60cb-49ec-a2ba-85c6be21bc19"). InnerVolumeSpecName "kube-api-access-4sz9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.354167 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-scripts" (OuterVolumeSpecName: "scripts") pod "96b99c4f-60cb-49ec-a2ba-85c6be21bc19" (UID: "96b99c4f-60cb-49ec-a2ba-85c6be21bc19"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.374616 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "96b99c4f-60cb-49ec-a2ba-85c6be21bc19" (UID: "96b99c4f-60cb-49ec-a2ba-85c6be21bc19"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.403570 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "96b99c4f-60cb-49ec-a2ba-85c6be21bc19" (UID: "96b99c4f-60cb-49ec-a2ba-85c6be21bc19"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.424123 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96b99c4f-60cb-49ec-a2ba-85c6be21bc19" (UID: "96b99c4f-60cb-49ec-a2ba-85c6be21bc19"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.447409 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sz9j\" (UniqueName: \"kubernetes.io/projected/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-kube-api-access-4sz9j\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.447438 5136 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.447448 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.447458 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.447467 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.447475 5136 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.447483 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.475257 5136 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-config-data" (OuterVolumeSpecName: "config-data") pod "96b99c4f-60cb-49ec-a2ba-85c6be21bc19" (UID: "96b99c4f-60cb-49ec-a2ba-85c6be21bc19"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.497216 5136 scope.go:117] "RemoveContainer" containerID="21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.518782 5136 scope.go:117] "RemoveContainer" containerID="6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.543339 5136 scope.go:117] "RemoveContainer" containerID="fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959" Mar 20 07:14:46 crc kubenswrapper[5136]: E0320 07:14:46.543779 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959\": container with ID starting with fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959 not found: ID does not exist" containerID="fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.543827 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959"} err="failed to get container status \"fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959\": rpc error: code = NotFound desc = could not find container \"fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959\": container with ID starting with fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959 not found: ID does not exist" Mar 20 07:14:46 crc kubenswrapper[5136]: 
I0320 07:14:46.543853 5136 scope.go:117] "RemoveContainer" containerID="870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7" Mar 20 07:14:46 crc kubenswrapper[5136]: E0320 07:14:46.544201 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7\": container with ID starting with 870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7 not found: ID does not exist" containerID="870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.544226 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7"} err="failed to get container status \"870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7\": rpc error: code = NotFound desc = could not find container \"870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7\": container with ID starting with 870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7 not found: ID does not exist" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.544242 5136 scope.go:117] "RemoveContainer" containerID="21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b" Mar 20 07:14:46 crc kubenswrapper[5136]: E0320 07:14:46.544500 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b\": container with ID starting with 21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b not found: ID does not exist" containerID="21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.544523 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b"} err="failed to get container status \"21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b\": rpc error: code = NotFound desc = could not find container \"21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b\": container with ID starting with 21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b not found: ID does not exist" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.544539 5136 scope.go:117] "RemoveContainer" containerID="6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104" Mar 20 07:14:46 crc kubenswrapper[5136]: E0320 07:14:46.544860 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104\": container with ID starting with 6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104 not found: ID does not exist" containerID="6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.544881 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104"} err="failed to get container status \"6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104\": rpc error: code = NotFound desc = could not find container \"6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104\": container with ID starting with 6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104 not found: ID does not exist" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.544905 5136 scope.go:117] "RemoveContainer" containerID="fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.545212 5136 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959"} err="failed to get container status \"fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959\": rpc error: code = NotFound desc = could not find container \"fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959\": container with ID starting with fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959 not found: ID does not exist" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.545253 5136 scope.go:117] "RemoveContainer" containerID="870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.545517 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7"} err="failed to get container status \"870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7\": rpc error: code = NotFound desc = could not find container \"870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7\": container with ID starting with 870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7 not found: ID does not exist" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.545545 5136 scope.go:117] "RemoveContainer" containerID="21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.545764 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b"} err="failed to get container status \"21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b\": rpc error: code = NotFound desc = could not find container \"21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b\": container with ID starting with 21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b not 
found: ID does not exist" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.545781 5136 scope.go:117] "RemoveContainer" containerID="6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.546020 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104"} err="failed to get container status \"6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104\": rpc error: code = NotFound desc = could not find container \"6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104\": container with ID starting with 6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104 not found: ID does not exist" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.549422 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.654756 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.666698 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.678743 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:14:46 crc kubenswrapper[5136]: E0320 07:14:46.679162 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="proxy-httpd" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.679181 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="proxy-httpd" Mar 20 07:14:46 crc kubenswrapper[5136]: E0320 07:14:46.679194 5136 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="ceilometer-central-agent" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.679203 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="ceilometer-central-agent" Mar 20 07:14:46 crc kubenswrapper[5136]: E0320 07:14:46.679224 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="sg-core" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.679232 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="sg-core" Mar 20 07:14:46 crc kubenswrapper[5136]: E0320 07:14:46.679253 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="ceilometer-notification-agent" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.679260 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="ceilometer-notification-agent" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.679487 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="ceilometer-central-agent" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.679515 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="sg-core" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.679528 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="ceilometer-notification-agent" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.679550 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="proxy-httpd" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.681179 
5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.691991 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.692405 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.692651 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.692773 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.853524 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-config-data\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.853604 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.853623 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.853650 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-scripts\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.853684 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.853708 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a464a7-cea7-4265-a264-85a991452e95-run-httpd\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.853751 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq8fl\" (UniqueName: \"kubernetes.io/projected/27a464a7-cea7-4265-a264-85a991452e95-kube-api-access-zq8fl\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.853771 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a464a7-cea7-4265-a264-85a991452e95-log-httpd\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.955554 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-config-data\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.955652 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.955670 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.955699 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-scripts\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.955730 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.955754 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a464a7-cea7-4265-a264-85a991452e95-run-httpd\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 
07:14:46.955797 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq8fl\" (UniqueName: \"kubernetes.io/projected/27a464a7-cea7-4265-a264-85a991452e95-kube-api-access-zq8fl\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.955833 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a464a7-cea7-4265-a264-85a991452e95-log-httpd\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.956220 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a464a7-cea7-4265-a264-85a991452e95-log-httpd\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.956791 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a464a7-cea7-4265-a264-85a991452e95-run-httpd\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.959765 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-config-data\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.960104 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-scripts\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " 
pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.960139 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.960249 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.962004 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.973181 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq8fl\" (UniqueName: \"kubernetes.io/projected/27a464a7-cea7-4265-a264-85a991452e95-kube-api-access-zq8fl\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:47 crc kubenswrapper[5136]: I0320 07:14:47.010055 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:14:47 crc kubenswrapper[5136]: I0320 07:14:47.475674 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:14:48 crc kubenswrapper[5136]: I0320 07:14:48.354695 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27a464a7-cea7-4265-a264-85a991452e95","Type":"ContainerStarted","Data":"0ecf229966ba8c79d4898c6f188447ddf715aa5d36d580596d93cecd4aca45f3"} Mar 20 07:14:48 crc kubenswrapper[5136]: I0320 07:14:48.355009 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27a464a7-cea7-4265-a264-85a991452e95","Type":"ContainerStarted","Data":"a213c0799494e4283f552e4529c929904c7d07c101510facaefb1e2a3e99ab9c"} Mar 20 07:14:48 crc kubenswrapper[5136]: I0320 07:14:48.407097 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" path="/var/lib/kubelet/pods/96b99c4f-60cb-49ec-a2ba-85c6be21bc19/volumes" Mar 20 07:14:48 crc kubenswrapper[5136]: I0320 07:14:48.561596 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:48 crc kubenswrapper[5136]: I0320 07:14:48.561656 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:48 crc kubenswrapper[5136]: I0320 07:14:48.911983 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 07:14:48 crc kubenswrapper[5136]: I0320 07:14:48.994053 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0496130-a6c4-42b7-8234-4df60e60ed59-combined-ca-bundle\") pod \"d0496130-a6c4-42b7-8234-4df60e60ed59\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " Mar 20 07:14:48 crc kubenswrapper[5136]: I0320 07:14:48.995267 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gh8c2\" (UniqueName: \"kubernetes.io/projected/d0496130-a6c4-42b7-8234-4df60e60ed59-kube-api-access-gh8c2\") pod \"d0496130-a6c4-42b7-8234-4df60e60ed59\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " Mar 20 07:14:48 crc kubenswrapper[5136]: I0320 07:14:48.995324 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0496130-a6c4-42b7-8234-4df60e60ed59-logs\") pod \"d0496130-a6c4-42b7-8234-4df60e60ed59\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " Mar 20 07:14:48 crc kubenswrapper[5136]: I0320 07:14:48.995432 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0496130-a6c4-42b7-8234-4df60e60ed59-config-data\") pod \"d0496130-a6c4-42b7-8234-4df60e60ed59\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.001275 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0496130-a6c4-42b7-8234-4df60e60ed59-logs" (OuterVolumeSpecName: "logs") pod "d0496130-a6c4-42b7-8234-4df60e60ed59" (UID: "d0496130-a6c4-42b7-8234-4df60e60ed59"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.033001 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0496130-a6c4-42b7-8234-4df60e60ed59-kube-api-access-gh8c2" (OuterVolumeSpecName: "kube-api-access-gh8c2") pod "d0496130-a6c4-42b7-8234-4df60e60ed59" (UID: "d0496130-a6c4-42b7-8234-4df60e60ed59"). InnerVolumeSpecName "kube-api-access-gh8c2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.041949 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0496130-a6c4-42b7-8234-4df60e60ed59-config-data" (OuterVolumeSpecName: "config-data") pod "d0496130-a6c4-42b7-8234-4df60e60ed59" (UID: "d0496130-a6c4-42b7-8234-4df60e60ed59"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.042963 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0496130-a6c4-42b7-8234-4df60e60ed59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0496130-a6c4-42b7-8234-4df60e60ed59" (UID: "d0496130-a6c4-42b7-8234-4df60e60ed59"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.098539 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0496130-a6c4-42b7-8234-4df60e60ed59-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.098571 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0496130-a6c4-42b7-8234-4df60e60ed59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.098583 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gh8c2\" (UniqueName: \"kubernetes.io/projected/d0496130-a6c4-42b7-8234-4df60e60ed59-kube-api-access-gh8c2\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.098591 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0496130-a6c4-42b7-8234-4df60e60ed59-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.366292 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27a464a7-cea7-4265-a264-85a991452e95","Type":"ContainerStarted","Data":"cf4672bac844a81b21416c7a8623ac1f87041db75209a7a401cb201726b76413"} Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.373828 5136 generic.go:334] "Generic (PLEG): container finished" podID="d0496130-a6c4-42b7-8234-4df60e60ed59" containerID="bd3c881214627c8c290b619f9198c192246477eeb7aa353a4af430f80586ea8b" exitCode=0 Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.373865 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0496130-a6c4-42b7-8234-4df60e60ed59","Type":"ContainerDied","Data":"bd3c881214627c8c290b619f9198c192246477eeb7aa353a4af430f80586ea8b"} Mar 20 07:14:49 crc 
kubenswrapper[5136]: I0320 07:14:49.373885 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0496130-a6c4-42b7-8234-4df60e60ed59","Type":"ContainerDied","Data":"3ee4cb0a8e431ba536bfd981a33fd323677b0e4a660774422b45fc0cf650bc2a"} Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.373880 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.373920 5136 scope.go:117] "RemoveContainer" containerID="bd3c881214627c8c290b619f9198c192246477eeb7aa353a4af430f80586ea8b" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.412667 5136 scope.go:117] "RemoveContainer" containerID="e73a3b5f003e5315b12f0018aa1a4f8bd486cbfc3163644d2cc863e537d66077" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.428874 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.443891 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.445789 5136 scope.go:117] "RemoveContainer" containerID="bd3c881214627c8c290b619f9198c192246477eeb7aa353a4af430f80586ea8b" Mar 20 07:14:49 crc kubenswrapper[5136]: E0320 07:14:49.449201 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd3c881214627c8c290b619f9198c192246477eeb7aa353a4af430f80586ea8b\": container with ID starting with bd3c881214627c8c290b619f9198c192246477eeb7aa353a4af430f80586ea8b not found: ID does not exist" containerID="bd3c881214627c8c290b619f9198c192246477eeb7aa353a4af430f80586ea8b" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.449251 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd3c881214627c8c290b619f9198c192246477eeb7aa353a4af430f80586ea8b"} err="failed to get container 
status \"bd3c881214627c8c290b619f9198c192246477eeb7aa353a4af430f80586ea8b\": rpc error: code = NotFound desc = could not find container \"bd3c881214627c8c290b619f9198c192246477eeb7aa353a4af430f80586ea8b\": container with ID starting with bd3c881214627c8c290b619f9198c192246477eeb7aa353a4af430f80586ea8b not found: ID does not exist" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.449279 5136 scope.go:117] "RemoveContainer" containerID="e73a3b5f003e5315b12f0018aa1a4f8bd486cbfc3163644d2cc863e537d66077" Mar 20 07:14:49 crc kubenswrapper[5136]: E0320 07:14:49.449728 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e73a3b5f003e5315b12f0018aa1a4f8bd486cbfc3163644d2cc863e537d66077\": container with ID starting with e73a3b5f003e5315b12f0018aa1a4f8bd486cbfc3163644d2cc863e537d66077 not found: ID does not exist" containerID="e73a3b5f003e5315b12f0018aa1a4f8bd486cbfc3163644d2cc863e537d66077" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.449766 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e73a3b5f003e5315b12f0018aa1a4f8bd486cbfc3163644d2cc863e537d66077"} err="failed to get container status \"e73a3b5f003e5315b12f0018aa1a4f8bd486cbfc3163644d2cc863e537d66077\": rpc error: code = NotFound desc = could not find container \"e73a3b5f003e5315b12f0018aa1a4f8bd486cbfc3163644d2cc863e537d66077\": container with ID starting with e73a3b5f003e5315b12f0018aa1a4f8bd486cbfc3163644d2cc863e537d66077 not found: ID does not exist" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.452851 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 20 07:14:49 crc kubenswrapper[5136]: E0320 07:14:49.453255 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0496130-a6c4-42b7-8234-4df60e60ed59" containerName="nova-api-log" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.453267 5136 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d0496130-a6c4-42b7-8234-4df60e60ed59" containerName="nova-api-log" Mar 20 07:14:49 crc kubenswrapper[5136]: E0320 07:14:49.453282 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0496130-a6c4-42b7-8234-4df60e60ed59" containerName="nova-api-api" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.453288 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0496130-a6c4-42b7-8234-4df60e60ed59" containerName="nova-api-api" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.453650 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0496130-a6c4-42b7-8234-4df60e60ed59" containerName="nova-api-log" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.453659 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0496130-a6c4-42b7-8234-4df60e60ed59" containerName="nova-api-api" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.454703 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.457170 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.457598 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.457752 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.487944 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.607896 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.607950 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-config-data\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.608094 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-public-tls-certs\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.608218 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-logs\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.608262 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-766z4\" (UniqueName: \"kubernetes.io/projected/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-kube-api-access-766z4\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.608293 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.618646 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pfj4j" podUID="0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" containerName="registry-server" probeResult="failure" output=< Mar 20 07:14:49 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 07:14:49 crc kubenswrapper[5136]: > Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.644927 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.644973 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.671382 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.709626 5136 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-logs\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.709702 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-766z4\" (UniqueName: \"kubernetes.io/projected/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-kube-api-access-766z4\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.709755 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.709843 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.709872 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-config-data\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.709920 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-public-tls-certs\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 
crc kubenswrapper[5136]: I0320 07:14:49.710020 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-logs\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.715205 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.715317 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.727856 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-public-tls-certs\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.732984 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-config-data\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.735197 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-766z4\" (UniqueName: \"kubernetes.io/projected/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-kube-api-access-766z4\") pod \"nova-api-0\" (UID: 
\"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.875508 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.893126 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.386097 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27a464a7-cea7-4265-a264-85a991452e95","Type":"ContainerStarted","Data":"37086a66c3062e12cadb5382a0b51ad5a523fc39db2e404a6feda0518d0eb230"} Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.407397 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0496130-a6c4-42b7-8234-4df60e60ed59" path="/var/lib/kubelet/pods/d0496130-a6c4-42b7-8234-4df60e60ed59/volumes" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.408298 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.432155 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.644222 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-9v9kr"] Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.646131 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.651441 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.651630 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.656138 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4622969f-2f2e-42d7-81a6-bc6baa386aec" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.657454 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4622969f-2f2e-42d7-81a6-bc6baa386aec" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.660612 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-9v9kr"] Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.700632 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsmfl\" (UniqueName: \"kubernetes.io/projected/fa8bbe04-14be-44c7-8264-0280abbe2023-kube-api-access-zsmfl\") pod \"nova-cell1-cell-mapping-9v9kr\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.700699 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-scripts\") pod \"nova-cell1-cell-mapping-9v9kr\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.700900 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-config-data\") pod \"nova-cell1-cell-mapping-9v9kr\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.701018 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-9v9kr\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.803675 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-config-data\") pod \"nova-cell1-cell-mapping-9v9kr\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.803753 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-9v9kr\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.803946 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsmfl\" (UniqueName: 
\"kubernetes.io/projected/fa8bbe04-14be-44c7-8264-0280abbe2023-kube-api-access-zsmfl\") pod \"nova-cell1-cell-mapping-9v9kr\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.803980 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-scripts\") pod \"nova-cell1-cell-mapping-9v9kr\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.807744 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-config-data\") pod \"nova-cell1-cell-mapping-9v9kr\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.808327 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-9v9kr\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.808679 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-scripts\") pod \"nova-cell1-cell-mapping-9v9kr\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.821586 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsmfl\" (UniqueName: \"kubernetes.io/projected/fa8bbe04-14be-44c7-8264-0280abbe2023-kube-api-access-zsmfl\") pod 
\"nova-cell1-cell-mapping-9v9kr\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.967645 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:51 crc kubenswrapper[5136]: I0320 07:14:51.412685 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c230e4e-a220-4596-8c60-ffd4a7b86cb9","Type":"ContainerStarted","Data":"321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0"} Mar 20 07:14:51 crc kubenswrapper[5136]: I0320 07:14:51.412964 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c230e4e-a220-4596-8c60-ffd4a7b86cb9","Type":"ContainerStarted","Data":"303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e"} Mar 20 07:14:51 crc kubenswrapper[5136]: I0320 07:14:51.412992 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c230e4e-a220-4596-8c60-ffd4a7b86cb9","Type":"ContainerStarted","Data":"5093d7bfe73c985d6844a1757ce3dd059b90dc8cab997d996596bdc9609c38fa"} Mar 20 07:14:51 crc kubenswrapper[5136]: I0320 07:14:51.503751 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.5037259240000003 podStartE2EDuration="2.503725924s" podCreationTimestamp="2026-03-20 07:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:14:51.430847846 +0000 UTC m=+1523.690159007" watchObservedRunningTime="2026-03-20 07:14:51.503725924 +0000 UTC m=+1523.763037075" Mar 20 07:14:51 crc kubenswrapper[5136]: W0320 07:14:51.510610 5136 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa8bbe04_14be_44c7_8264_0280abbe2023.slice/crio-be30541db8e671dad79fb4a2f0e4818892fcd3543562fdd78e492e7b8e7041e1 WatchSource:0}: Error finding container be30541db8e671dad79fb4a2f0e4818892fcd3543562fdd78e492e7b8e7041e1: Status 404 returned error can't find the container with id be30541db8e671dad79fb4a2f0e4818892fcd3543562fdd78e492e7b8e7041e1 Mar 20 07:14:51 crc kubenswrapper[5136]: I0320 07:14:51.512237 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-9v9kr"] Mar 20 07:14:52 crc kubenswrapper[5136]: I0320 07:14:52.422408 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27a464a7-cea7-4265-a264-85a991452e95","Type":"ContainerStarted","Data":"6f8eb1aeebd08bbda86b110a89e3d6395071812ae31b73c86592f671595b894d"} Mar 20 07:14:52 crc kubenswrapper[5136]: I0320 07:14:52.422741 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 20 07:14:52 crc kubenswrapper[5136]: I0320 07:14:52.423924 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9v9kr" event={"ID":"fa8bbe04-14be-44c7-8264-0280abbe2023","Type":"ContainerStarted","Data":"a35e3106c44fc687668b1f5ba46d5a5060fdc5acc5e49f69f4dc88d5ef142f17"} Mar 20 07:14:52 crc kubenswrapper[5136]: I0320 07:14:52.423973 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9v9kr" event={"ID":"fa8bbe04-14be-44c7-8264-0280abbe2023","Type":"ContainerStarted","Data":"be30541db8e671dad79fb4a2f0e4818892fcd3543562fdd78e492e7b8e7041e1"} Mar 20 07:14:52 crc kubenswrapper[5136]: I0320 07:14:52.446103 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.679692618 podStartE2EDuration="6.446083869s" podCreationTimestamp="2026-03-20 07:14:46 +0000 UTC" firstStartedPulling="2026-03-20 
07:14:47.476736143 +0000 UTC m=+1519.736047294" lastFinishedPulling="2026-03-20 07:14:51.243127394 +0000 UTC m=+1523.502438545" observedRunningTime="2026-03-20 07:14:52.443254491 +0000 UTC m=+1524.702565642" watchObservedRunningTime="2026-03-20 07:14:52.446083869 +0000 UTC m=+1524.705395020" Mar 20 07:14:52 crc kubenswrapper[5136]: I0320 07:14:52.458137 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-9v9kr" podStartSLOduration=2.458118904 podStartE2EDuration="2.458118904s" podCreationTimestamp="2026-03-20 07:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:14:52.457620059 +0000 UTC m=+1524.716931210" watchObservedRunningTime="2026-03-20 07:14:52.458118904 +0000 UTC m=+1524.717430055" Mar 20 07:14:52 crc kubenswrapper[5136]: I0320 07:14:52.877467 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.018466 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b495b9cc7-zflc2"] Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.018681 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" podUID="470e7cfd-fbbb-467e-8115-05cb5654655c" containerName="dnsmasq-dns" containerID="cri-o://a7d15dbcf3e44927ae943561f1932c75895f70aa4d9c499b6616ad45bc104764" gracePeriod=10 Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.439723 5136 generic.go:334] "Generic (PLEG): container finished" podID="470e7cfd-fbbb-467e-8115-05cb5654655c" containerID="a7d15dbcf3e44927ae943561f1932c75895f70aa4d9c499b6616ad45bc104764" exitCode=0 Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.440453 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" 
event={"ID":"470e7cfd-fbbb-467e-8115-05cb5654655c","Type":"ContainerDied","Data":"a7d15dbcf3e44927ae943561f1932c75895f70aa4d9c499b6616ad45bc104764"} Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.526489 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.690900 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-dns-svc\") pod \"470e7cfd-fbbb-467e-8115-05cb5654655c\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.690975 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-config\") pod \"470e7cfd-fbbb-467e-8115-05cb5654655c\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.691015 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-ovsdbserver-sb\") pod \"470e7cfd-fbbb-467e-8115-05cb5654655c\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.691054 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zhvw\" (UniqueName: \"kubernetes.io/projected/470e7cfd-fbbb-467e-8115-05cb5654655c-kube-api-access-5zhvw\") pod \"470e7cfd-fbbb-467e-8115-05cb5654655c\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.691163 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-ovsdbserver-nb\") 
pod \"470e7cfd-fbbb-467e-8115-05cb5654655c\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.691205 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-dns-swift-storage-0\") pod \"470e7cfd-fbbb-467e-8115-05cb5654655c\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.698611 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/470e7cfd-fbbb-467e-8115-05cb5654655c-kube-api-access-5zhvw" (OuterVolumeSpecName: "kube-api-access-5zhvw") pod "470e7cfd-fbbb-467e-8115-05cb5654655c" (UID: "470e7cfd-fbbb-467e-8115-05cb5654655c"). InnerVolumeSpecName "kube-api-access-5zhvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.737002 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "470e7cfd-fbbb-467e-8115-05cb5654655c" (UID: "470e7cfd-fbbb-467e-8115-05cb5654655c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.743229 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-config" (OuterVolumeSpecName: "config") pod "470e7cfd-fbbb-467e-8115-05cb5654655c" (UID: "470e7cfd-fbbb-467e-8115-05cb5654655c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.744491 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "470e7cfd-fbbb-467e-8115-05cb5654655c" (UID: "470e7cfd-fbbb-467e-8115-05cb5654655c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.747299 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "470e7cfd-fbbb-467e-8115-05cb5654655c" (UID: "470e7cfd-fbbb-467e-8115-05cb5654655c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.758900 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "470e7cfd-fbbb-467e-8115-05cb5654655c" (UID: "470e7cfd-fbbb-467e-8115-05cb5654655c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.793609 5136 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.793647 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.793660 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.793672 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.793685 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zhvw\" (UniqueName: \"kubernetes.io/projected/470e7cfd-fbbb-467e-8115-05cb5654655c-kube-api-access-5zhvw\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.793697 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:54 crc kubenswrapper[5136]: I0320 07:14:54.456465 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" event={"ID":"470e7cfd-fbbb-467e-8115-05cb5654655c","Type":"ContainerDied","Data":"0815fbb3bd31db22d56e0ee37bf687b5d4a3901815c3e15d641f9bfe006afa83"} Mar 20 07:14:54 crc 
kubenswrapper[5136]: I0320 07:14:54.456521 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:54 crc kubenswrapper[5136]: I0320 07:14:54.456833 5136 scope.go:117] "RemoveContainer" containerID="a7d15dbcf3e44927ae943561f1932c75895f70aa4d9c499b6616ad45bc104764" Mar 20 07:14:54 crc kubenswrapper[5136]: I0320 07:14:54.490543 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b495b9cc7-zflc2"] Mar 20 07:14:54 crc kubenswrapper[5136]: I0320 07:14:54.492949 5136 scope.go:117] "RemoveContainer" containerID="296caa72bdf067401801dcafde6b349a8fa9a120a15acef2e0b624bdeebcf37a" Mar 20 07:14:54 crc kubenswrapper[5136]: I0320 07:14:54.515647 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b495b9cc7-zflc2"] Mar 20 07:14:56 crc kubenswrapper[5136]: I0320 07:14:56.415456 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="470e7cfd-fbbb-467e-8115-05cb5654655c" path="/var/lib/kubelet/pods/470e7cfd-fbbb-467e-8115-05cb5654655c/volumes" Mar 20 07:14:57 crc kubenswrapper[5136]: I0320 07:14:57.506281 5136 generic.go:334] "Generic (PLEG): container finished" podID="fa8bbe04-14be-44c7-8264-0280abbe2023" containerID="a35e3106c44fc687668b1f5ba46d5a5060fdc5acc5e49f69f4dc88d5ef142f17" exitCode=0 Mar 20 07:14:57 crc kubenswrapper[5136]: I0320 07:14:57.506320 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9v9kr" event={"ID":"fa8bbe04-14be-44c7-8264-0280abbe2023","Type":"ContainerDied","Data":"a35e3106c44fc687668b1f5ba46d5a5060fdc5acc5e49f69f4dc88d5ef142f17"} Mar 20 07:14:57 crc kubenswrapper[5136]: I0320 07:14:57.644163 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 20 07:14:57 crc kubenswrapper[5136]: I0320 07:14:57.644208 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Mar 20 07:14:58 crc kubenswrapper[5136]: I0320 07:14:58.370040 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" podUID="470e7cfd-fbbb-467e-8115-05cb5654655c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.193:5353: i/o timeout" Mar 20 07:14:58 crc kubenswrapper[5136]: I0320 07:14:58.623876 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:58 crc kubenswrapper[5136]: I0320 07:14:58.690775 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:58 crc kubenswrapper[5136]: I0320 07:14:58.865957 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pfj4j"] Mar 20 07:14:58 crc kubenswrapper[5136]: I0320 07:14:58.936343 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.091884 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsmfl\" (UniqueName: \"kubernetes.io/projected/fa8bbe04-14be-44c7-8264-0280abbe2023-kube-api-access-zsmfl\") pod \"fa8bbe04-14be-44c7-8264-0280abbe2023\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.092051 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-combined-ca-bundle\") pod \"fa8bbe04-14be-44c7-8264-0280abbe2023\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.092100 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-config-data\") pod \"fa8bbe04-14be-44c7-8264-0280abbe2023\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.092174 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-scripts\") pod \"fa8bbe04-14be-44c7-8264-0280abbe2023\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.097190 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-scripts" (OuterVolumeSpecName: "scripts") pod "fa8bbe04-14be-44c7-8264-0280abbe2023" (UID: "fa8bbe04-14be-44c7-8264-0280abbe2023"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.097436 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa8bbe04-14be-44c7-8264-0280abbe2023-kube-api-access-zsmfl" (OuterVolumeSpecName: "kube-api-access-zsmfl") pod "fa8bbe04-14be-44c7-8264-0280abbe2023" (UID: "fa8bbe04-14be-44c7-8264-0280abbe2023"). InnerVolumeSpecName "kube-api-access-zsmfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.124496 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa8bbe04-14be-44c7-8264-0280abbe2023" (UID: "fa8bbe04-14be-44c7-8264-0280abbe2023"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.124947 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-config-data" (OuterVolumeSpecName: "config-data") pod "fa8bbe04-14be-44c7-8264-0280abbe2023" (UID: "fa8bbe04-14be-44c7-8264-0280abbe2023"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.193982 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.194017 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.194029 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.194040 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsmfl\" (UniqueName: \"kubernetes.io/projected/fa8bbe04-14be-44c7-8264-0280abbe2023-kube-api-access-zsmfl\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.521588 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9v9kr" event={"ID":"fa8bbe04-14be-44c7-8264-0280abbe2023","Type":"ContainerDied","Data":"be30541db8e671dad79fb4a2f0e4818892fcd3543562fdd78e492e7b8e7041e1"} Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.521627 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be30541db8e671dad79fb4a2f0e4818892fcd3543562fdd78e492e7b8e7041e1" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.521762 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.650086 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.651116 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.656535 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.748018 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.748465 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4c230e4e-a220-4596-8c60-ffd4a7b86cb9" containerName="nova-api-log" containerID="cri-o://303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e" gracePeriod=30 Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.749002 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4c230e4e-a220-4596-8c60-ffd4a7b86cb9" containerName="nova-api-api" containerID="cri-o://321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0" gracePeriod=30 Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.763728 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.764063 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6a52f0c9-0dde-48d7-83a3-bb05b1217295" containerName="nova-scheduler-scheduler" containerID="cri-o://2b776c776ff30149306adf0b0b9812edd3bba700f5823572b3aaed091e1cf528" gracePeriod=30 Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.773919 5136 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.163316 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q"] Mar 20 07:15:00 crc kubenswrapper[5136]: E0320 07:15:00.164792 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="470e7cfd-fbbb-467e-8115-05cb5654655c" containerName="dnsmasq-dns" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.164819 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="470e7cfd-fbbb-467e-8115-05cb5654655c" containerName="dnsmasq-dns" Mar 20 07:15:00 crc kubenswrapper[5136]: E0320 07:15:00.164883 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="470e7cfd-fbbb-467e-8115-05cb5654655c" containerName="init" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.164894 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="470e7cfd-fbbb-467e-8115-05cb5654655c" containerName="init" Mar 20 07:15:00 crc kubenswrapper[5136]: E0320 07:15:00.164912 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa8bbe04-14be-44c7-8264-0280abbe2023" containerName="nova-manage" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.164921 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa8bbe04-14be-44c7-8264-0280abbe2023" containerName="nova-manage" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.165484 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="470e7cfd-fbbb-467e-8115-05cb5654655c" containerName="dnsmasq-dns" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.165537 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa8bbe04-14be-44c7-8264-0280abbe2023" containerName="nova-manage" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.167084 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.175426 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q"] Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.175653 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.182201 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.313351 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-config-volume\") pod \"collect-profiles-29566515-7hn9q\" (UID: \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.313731 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlqcb\" (UniqueName: \"kubernetes.io/projected/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-kube-api-access-rlqcb\") pod \"collect-profiles-29566515-7hn9q\" (UID: \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.313919 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-secret-volume\") pod \"collect-profiles-29566515-7hn9q\" (UID: \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.320023 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: E0320 07:15:00.342687 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2b776c776ff30149306adf0b0b9812edd3bba700f5823572b3aaed091e1cf528" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 07:15:00 crc kubenswrapper[5136]: E0320 07:15:00.344053 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2b776c776ff30149306adf0b0b9812edd3bba700f5823572b3aaed091e1cf528" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 07:15:00 crc kubenswrapper[5136]: E0320 07:15:00.349476 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2b776c776ff30149306adf0b0b9812edd3bba700f5823572b3aaed091e1cf528" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 07:15:00 crc kubenswrapper[5136]: E0320 07:15:00.349562 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="6a52f0c9-0dde-48d7-83a3-bb05b1217295" containerName="nova-scheduler-scheduler" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.415949 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-766z4\" (UniqueName: 
\"kubernetes.io/projected/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-kube-api-access-766z4\") pod \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.416090 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-config-data\") pod \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.416216 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-combined-ca-bundle\") pod \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.416260 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-internal-tls-certs\") pod \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.416352 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-public-tls-certs\") pod \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.416405 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-logs\") pod \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.416894 
5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-config-volume\") pod \"collect-profiles-29566515-7hn9q\" (UID: \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.416925 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlqcb\" (UniqueName: \"kubernetes.io/projected/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-kube-api-access-rlqcb\") pod \"collect-profiles-29566515-7hn9q\" (UID: \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.417013 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-secret-volume\") pod \"collect-profiles-29566515-7hn9q\" (UID: \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.417228 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-logs" (OuterVolumeSpecName: "logs") pod "4c230e4e-a220-4596-8c60-ffd4a7b86cb9" (UID: "4c230e4e-a220-4596-8c60-ffd4a7b86cb9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.417831 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-config-volume\") pod \"collect-profiles-29566515-7hn9q\" (UID: \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.421577 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-secret-volume\") pod \"collect-profiles-29566515-7hn9q\" (UID: \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.424108 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-kube-api-access-766z4" (OuterVolumeSpecName: "kube-api-access-766z4") pod "4c230e4e-a220-4596-8c60-ffd4a7b86cb9" (UID: "4c230e4e-a220-4596-8c60-ffd4a7b86cb9"). InnerVolumeSpecName "kube-api-access-766z4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.435759 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlqcb\" (UniqueName: \"kubernetes.io/projected/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-kube-api-access-rlqcb\") pod \"collect-profiles-29566515-7hn9q\" (UID: \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.447717 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c230e4e-a220-4596-8c60-ffd4a7b86cb9" (UID: "4c230e4e-a220-4596-8c60-ffd4a7b86cb9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.448391 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-config-data" (OuterVolumeSpecName: "config-data") pod "4c230e4e-a220-4596-8c60-ffd4a7b86cb9" (UID: "4c230e4e-a220-4596-8c60-ffd4a7b86cb9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.477047 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4c230e4e-a220-4596-8c60-ffd4a7b86cb9" (UID: "4c230e4e-a220-4596-8c60-ffd4a7b86cb9"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.483752 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4c230e4e-a220-4596-8c60-ffd4a7b86cb9" (UID: "4c230e4e-a220-4596-8c60-ffd4a7b86cb9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.500165 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.518875 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.518912 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.518923 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-766z4\" (UniqueName: \"kubernetes.io/projected/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-kube-api-access-766z4\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.518934 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.518942 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.518950 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.533232 5136 generic.go:334] "Generic (PLEG): container finished" podID="4c230e4e-a220-4596-8c60-ffd4a7b86cb9" containerID="321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0" exitCode=0 Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.533262 5136 generic.go:334] "Generic (PLEG): container finished" podID="4c230e4e-a220-4596-8c60-ffd4a7b86cb9" containerID="303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e" exitCode=143 Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.534416 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.536683 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c230e4e-a220-4596-8c60-ffd4a7b86cb9","Type":"ContainerDied","Data":"321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0"} Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.536731 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c230e4e-a220-4596-8c60-ffd4a7b86cb9","Type":"ContainerDied","Data":"303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e"} Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.536745 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c230e4e-a220-4596-8c60-ffd4a7b86cb9","Type":"ContainerDied","Data":"5093d7bfe73c985d6844a1757ce3dd059b90dc8cab997d996596bdc9609c38fa"} Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.536764 5136 scope.go:117] "RemoveContainer" containerID="321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.537056 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pfj4j" podUID="0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" containerName="registry-server" containerID="cri-o://ee39d4c269210fc3aa49da56c3480727ac137c9f6ab6ab45972dc59963ac8f14" gracePeriod=2 Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.542727 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.578519 5136 scope.go:117] "RemoveContainer" containerID="303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.591275 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:15:00 crc 
kubenswrapper[5136]: I0320 07:15:00.608291 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.612202 5136 scope.go:117] "RemoveContainer" containerID="321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0" Mar 20 07:15:00 crc kubenswrapper[5136]: E0320 07:15:00.620004 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0\": container with ID starting with 321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0 not found: ID does not exist" containerID="321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.620049 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0"} err="failed to get container status \"321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0\": rpc error: code = NotFound desc = could not find container \"321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0\": container with ID starting with 321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0 not found: ID does not exist" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.620075 5136 scope.go:117] "RemoveContainer" containerID="303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e" Mar 20 07:15:00 crc kubenswrapper[5136]: E0320 07:15:00.620490 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e\": container with ID starting with 303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e not found: ID does not exist" 
containerID="303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.620544 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e"} err="failed to get container status \"303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e\": rpc error: code = NotFound desc = could not find container \"303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e\": container with ID starting with 303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e not found: ID does not exist" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.620575 5136 scope.go:117] "RemoveContainer" containerID="321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.622843 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 20 07:15:00 crc kubenswrapper[5136]: E0320 07:15:00.623247 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c230e4e-a220-4596-8c60-ffd4a7b86cb9" containerName="nova-api-log" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.623259 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c230e4e-a220-4596-8c60-ffd4a7b86cb9" containerName="nova-api-log" Mar 20 07:15:00 crc kubenswrapper[5136]: E0320 07:15:00.623279 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c230e4e-a220-4596-8c60-ffd4a7b86cb9" containerName="nova-api-api" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.623284 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c230e4e-a220-4596-8c60-ffd4a7b86cb9" containerName="nova-api-api" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.623450 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c230e4e-a220-4596-8c60-ffd4a7b86cb9" containerName="nova-api-log" Mar 20 07:15:00 crc 
kubenswrapper[5136]: I0320 07:15:00.623475 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c230e4e-a220-4596-8c60-ffd4a7b86cb9" containerName="nova-api-api" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.624418 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.627059 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0"} err="failed to get container status \"321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0\": rpc error: code = NotFound desc = could not find container \"321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0\": container with ID starting with 321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0 not found: ID does not exist" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.627113 5136 scope.go:117] "RemoveContainer" containerID="303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.629270 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e"} err="failed to get container status \"303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e\": rpc error: code = NotFound desc = could not find container \"303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e\": container with ID starting with 303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e not found: ID does not exist" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.629426 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.629458 5136 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"nova-api-config-data" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.629786 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.640613 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.722758 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krchg\" (UniqueName: \"kubernetes.io/projected/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-kube-api-access-krchg\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.722926 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.722986 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.723028 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-public-tls-certs\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.723050 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-logs\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.723101 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-config-data\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.824843 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-public-tls-certs\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.824878 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-logs\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.824930 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-config-data\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.824954 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krchg\" (UniqueName: \"kubernetes.io/projected/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-kube-api-access-krchg\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " 
pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.824999 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.825049 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.829253 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-logs\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.832722 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-public-tls-certs\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.833001 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.833071 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.833535 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-config-data\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.842452 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krchg\" (UniqueName: \"kubernetes.io/projected/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-kube-api-access-krchg\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.985398 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.010429 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q"] Mar 20 07:15:01 crc kubenswrapper[5136]: W0320 07:15:01.047083 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f40568b_2bbc_4d1e_b089_6e08e1eede4b.slice/crio-ceceab2754596e5b938d5beb3a313d598756f96a0ff399999d5b5f6d7fee3bb0 WatchSource:0}: Error finding container ceceab2754596e5b938d5beb3a313d598756f96a0ff399999d5b5f6d7fee3bb0: Status 404 returned error can't find the container with id ceceab2754596e5b938d5beb3a313d598756f96a0ff399999d5b5f6d7fee3bb0 Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.232206 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.333417 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-catalog-content\") pod \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\" (UID: \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\") " Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.333809 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-utilities\") pod \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\" (UID: \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\") " Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.333922 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5pm5\" (UniqueName: \"kubernetes.io/projected/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-kube-api-access-t5pm5\") pod \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\" (UID: \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\") " Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.335209 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-utilities" (OuterVolumeSpecName: "utilities") pod "0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" (UID: "0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.339209 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-kube-api-access-t5pm5" (OuterVolumeSpecName: "kube-api-access-t5pm5") pod "0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" (UID: "0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7"). InnerVolumeSpecName "kube-api-access-t5pm5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.438155 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5pm5\" (UniqueName: \"kubernetes.io/projected/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-kube-api-access-t5pm5\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.438184 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:01 crc kubenswrapper[5136]: W0320 07:15:01.444853 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9dc2d320_2468_4a45_ba6b_69ea478b5e8c.slice/crio-3e1ef3947decd903ee68750e425aae8b49ed5ff25716e3720a5bb3892e21abbc WatchSource:0}: Error finding container 3e1ef3947decd903ee68750e425aae8b49ed5ff25716e3720a5bb3892e21abbc: Status 404 returned error can't find the container with id 3e1ef3947decd903ee68750e425aae8b49ed5ff25716e3720a5bb3892e21abbc Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.450622 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.481783 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" (UID: "0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.540378 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.542098 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9dc2d320-2468-4a45-ba6b-69ea478b5e8c","Type":"ContainerStarted","Data":"3e1ef3947decd903ee68750e425aae8b49ed5ff25716e3720a5bb3892e21abbc"} Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.543603 5136 generic.go:334] "Generic (PLEG): container finished" podID="6f40568b-2bbc-4d1e-b089-6e08e1eede4b" containerID="9f24a13849a44546b978a1e086eb14881e8d529298f6ffe2023d8ef7f1bdc4c6" exitCode=0 Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.543667 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" event={"ID":"6f40568b-2bbc-4d1e-b089-6e08e1eede4b","Type":"ContainerDied","Data":"9f24a13849a44546b978a1e086eb14881e8d529298f6ffe2023d8ef7f1bdc4c6"} Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.543696 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" event={"ID":"6f40568b-2bbc-4d1e-b089-6e08e1eede4b","Type":"ContainerStarted","Data":"ceceab2754596e5b938d5beb3a313d598756f96a0ff399999d5b5f6d7fee3bb0"} Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.547130 5136 generic.go:334] "Generic (PLEG): container finished" podID="0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" containerID="ee39d4c269210fc3aa49da56c3480727ac137c9f6ab6ab45972dc59963ac8f14" exitCode=0 Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.547172 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfj4j" 
event={"ID":"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7","Type":"ContainerDied","Data":"ee39d4c269210fc3aa49da56c3480727ac137c9f6ab6ab45972dc59963ac8f14"} Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.547189 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.547235 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfj4j" event={"ID":"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7","Type":"ContainerDied","Data":"14fe667f3db588129ef772a0e1c0daeddde9622aa1ffb9941f75f06fb9c1984f"} Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.547258 5136 scope.go:117] "RemoveContainer" containerID="ee39d4c269210fc3aa49da56c3480727ac137c9f6ab6ab45972dc59963ac8f14" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.547337 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4622969f-2f2e-42d7-81a6-bc6baa386aec" containerName="nova-metadata-log" containerID="cri-o://9e93c2c0c889e2c58126cc5c6de4c2d17f754184e3d07db0df95a7f6d46049a8" gracePeriod=30 Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.547371 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4622969f-2f2e-42d7-81a6-bc6baa386aec" containerName="nova-metadata-metadata" containerID="cri-o://5f8f148b9c0b744a015c9e2414e24715604e9a0f21fe5a12fd0d7f8b2489eeb8" gracePeriod=30 Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.576038 5136 scope.go:117] "RemoveContainer" containerID="3e9029c3cec1986cc74604ef2d696316b88fb9ed3d52bbb9a2a7d8c6e94535ab" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.589816 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pfj4j"] Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.597602 5136 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pfj4j"] Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.626513 5136 scope.go:117] "RemoveContainer" containerID="05599c5c42c8e71a009284e5d500cfedf849e0b8ec367103e0a55059d9a7be17" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.645736 5136 scope.go:117] "RemoveContainer" containerID="ee39d4c269210fc3aa49da56c3480727ac137c9f6ab6ab45972dc59963ac8f14" Mar 20 07:15:01 crc kubenswrapper[5136]: E0320 07:15:01.646147 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee39d4c269210fc3aa49da56c3480727ac137c9f6ab6ab45972dc59963ac8f14\": container with ID starting with ee39d4c269210fc3aa49da56c3480727ac137c9f6ab6ab45972dc59963ac8f14 not found: ID does not exist" containerID="ee39d4c269210fc3aa49da56c3480727ac137c9f6ab6ab45972dc59963ac8f14" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.646201 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee39d4c269210fc3aa49da56c3480727ac137c9f6ab6ab45972dc59963ac8f14"} err="failed to get container status \"ee39d4c269210fc3aa49da56c3480727ac137c9f6ab6ab45972dc59963ac8f14\": rpc error: code = NotFound desc = could not find container \"ee39d4c269210fc3aa49da56c3480727ac137c9f6ab6ab45972dc59963ac8f14\": container with ID starting with ee39d4c269210fc3aa49da56c3480727ac137c9f6ab6ab45972dc59963ac8f14 not found: ID does not exist" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.646228 5136 scope.go:117] "RemoveContainer" containerID="3e9029c3cec1986cc74604ef2d696316b88fb9ed3d52bbb9a2a7d8c6e94535ab" Mar 20 07:15:01 crc kubenswrapper[5136]: E0320 07:15:01.646558 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e9029c3cec1986cc74604ef2d696316b88fb9ed3d52bbb9a2a7d8c6e94535ab\": container with ID starting with 
3e9029c3cec1986cc74604ef2d696316b88fb9ed3d52bbb9a2a7d8c6e94535ab not found: ID does not exist" containerID="3e9029c3cec1986cc74604ef2d696316b88fb9ed3d52bbb9a2a7d8c6e94535ab" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.646605 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e9029c3cec1986cc74604ef2d696316b88fb9ed3d52bbb9a2a7d8c6e94535ab"} err="failed to get container status \"3e9029c3cec1986cc74604ef2d696316b88fb9ed3d52bbb9a2a7d8c6e94535ab\": rpc error: code = NotFound desc = could not find container \"3e9029c3cec1986cc74604ef2d696316b88fb9ed3d52bbb9a2a7d8c6e94535ab\": container with ID starting with 3e9029c3cec1986cc74604ef2d696316b88fb9ed3d52bbb9a2a7d8c6e94535ab not found: ID does not exist" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.646649 5136 scope.go:117] "RemoveContainer" containerID="05599c5c42c8e71a009284e5d500cfedf849e0b8ec367103e0a55059d9a7be17" Mar 20 07:15:01 crc kubenswrapper[5136]: E0320 07:15:01.646955 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05599c5c42c8e71a009284e5d500cfedf849e0b8ec367103e0a55059d9a7be17\": container with ID starting with 05599c5c42c8e71a009284e5d500cfedf849e0b8ec367103e0a55059d9a7be17 not found: ID does not exist" containerID="05599c5c42c8e71a009284e5d500cfedf849e0b8ec367103e0a55059d9a7be17" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.647009 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05599c5c42c8e71a009284e5d500cfedf849e0b8ec367103e0a55059d9a7be17"} err="failed to get container status \"05599c5c42c8e71a009284e5d500cfedf849e0b8ec367103e0a55059d9a7be17\": rpc error: code = NotFound desc = could not find container \"05599c5c42c8e71a009284e5d500cfedf849e0b8ec367103e0a55059d9a7be17\": container with ID starting with 05599c5c42c8e71a009284e5d500cfedf849e0b8ec367103e0a55059d9a7be17 not found: ID does not 
exist" Mar 20 07:15:02 crc kubenswrapper[5136]: I0320 07:15:02.409151 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" path="/var/lib/kubelet/pods/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7/volumes" Mar 20 07:15:02 crc kubenswrapper[5136]: I0320 07:15:02.410730 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c230e4e-a220-4596-8c60-ffd4a7b86cb9" path="/var/lib/kubelet/pods/4c230e4e-a220-4596-8c60-ffd4a7b86cb9/volumes" Mar 20 07:15:02 crc kubenswrapper[5136]: I0320 07:15:02.565620 5136 generic.go:334] "Generic (PLEG): container finished" podID="4622969f-2f2e-42d7-81a6-bc6baa386aec" containerID="9e93c2c0c889e2c58126cc5c6de4c2d17f754184e3d07db0df95a7f6d46049a8" exitCode=143 Mar 20 07:15:02 crc kubenswrapper[5136]: I0320 07:15:02.565689 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4622969f-2f2e-42d7-81a6-bc6baa386aec","Type":"ContainerDied","Data":"9e93c2c0c889e2c58126cc5c6de4c2d17f754184e3d07db0df95a7f6d46049a8"} Mar 20 07:15:02 crc kubenswrapper[5136]: I0320 07:15:02.567885 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9dc2d320-2468-4a45-ba6b-69ea478b5e8c","Type":"ContainerStarted","Data":"d7a0a1abaf7649e23b98061506f3cd0ac2d6d6bb3c69694e84fdfcd0ac7ca124"} Mar 20 07:15:02 crc kubenswrapper[5136]: I0320 07:15:02.567934 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9dc2d320-2468-4a45-ba6b-69ea478b5e8c","Type":"ContainerStarted","Data":"a74fc94f08f1ff50393fca876adcf8dbd23397ae767f8be9bb23ab400c14c48a"} Mar 20 07:15:02 crc kubenswrapper[5136]: I0320 07:15:02.604133 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.6041116989999997 podStartE2EDuration="2.604111699s" podCreationTimestamp="2026-03-20 07:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:15:02.592391914 +0000 UTC m=+1534.851703105" watchObservedRunningTime="2026-03-20 07:15:02.604111699 +0000 UTC m=+1534.863422860" Mar 20 07:15:02 crc kubenswrapper[5136]: I0320 07:15:02.959082 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" Mar 20 07:15:03 crc kubenswrapper[5136]: I0320 07:15:03.068783 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-config-volume\") pod \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\" (UID: \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\") " Mar 20 07:15:03 crc kubenswrapper[5136]: I0320 07:15:03.069006 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlqcb\" (UniqueName: \"kubernetes.io/projected/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-kube-api-access-rlqcb\") pod \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\" (UID: \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\") " Mar 20 07:15:03 crc kubenswrapper[5136]: I0320 07:15:03.069188 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-secret-volume\") pod \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\" (UID: \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\") " Mar 20 07:15:03 crc kubenswrapper[5136]: I0320 07:15:03.069535 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-config-volume" (OuterVolumeSpecName: "config-volume") pod "6f40568b-2bbc-4d1e-b089-6e08e1eede4b" (UID: "6f40568b-2bbc-4d1e-b089-6e08e1eede4b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:03 crc kubenswrapper[5136]: I0320 07:15:03.070029 5136 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:03 crc kubenswrapper[5136]: I0320 07:15:03.074955 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-kube-api-access-rlqcb" (OuterVolumeSpecName: "kube-api-access-rlqcb") pod "6f40568b-2bbc-4d1e-b089-6e08e1eede4b" (UID: "6f40568b-2bbc-4d1e-b089-6e08e1eede4b"). InnerVolumeSpecName "kube-api-access-rlqcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:03 crc kubenswrapper[5136]: I0320 07:15:03.075402 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6f40568b-2bbc-4d1e-b089-6e08e1eede4b" (UID: "6f40568b-2bbc-4d1e-b089-6e08e1eede4b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:03 crc kubenswrapper[5136]: I0320 07:15:03.171735 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlqcb\" (UniqueName: \"kubernetes.io/projected/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-kube-api-access-rlqcb\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:03 crc kubenswrapper[5136]: I0320 07:15:03.171774 5136 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:03 crc kubenswrapper[5136]: I0320 07:15:03.578121 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" Mar 20 07:15:03 crc kubenswrapper[5136]: I0320 07:15:03.578162 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" event={"ID":"6f40568b-2bbc-4d1e-b089-6e08e1eede4b","Type":"ContainerDied","Data":"ceceab2754596e5b938d5beb3a313d598756f96a0ff399999d5b5f6d7fee3bb0"} Mar 20 07:15:03 crc kubenswrapper[5136]: I0320 07:15:03.578183 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ceceab2754596e5b938d5beb3a313d598756f96a0ff399999d5b5f6d7fee3bb0" Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.588191 5136 generic.go:334] "Generic (PLEG): container finished" podID="6a52f0c9-0dde-48d7-83a3-bb05b1217295" containerID="2b776c776ff30149306adf0b0b9812edd3bba700f5823572b3aaed091e1cf528" exitCode=0 Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.588237 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6a52f0c9-0dde-48d7-83a3-bb05b1217295","Type":"ContainerDied","Data":"2b776c776ff30149306adf0b0b9812edd3bba700f5823572b3aaed091e1cf528"} Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.588527 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6a52f0c9-0dde-48d7-83a3-bb05b1217295","Type":"ContainerDied","Data":"746593fd75baa00dce61da5dddc1f1d308565b21d07137d5d833134ea9410d34"} Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.588542 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="746593fd75baa00dce61da5dddc1f1d308565b21d07137d5d833134ea9410d34" Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.656161 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.804730 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a52f0c9-0dde-48d7-83a3-bb05b1217295-config-data\") pod \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\" (UID: \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\") " Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.805115 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a52f0c9-0dde-48d7-83a3-bb05b1217295-combined-ca-bundle\") pod \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\" (UID: \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\") " Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.805223 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shtw7\" (UniqueName: \"kubernetes.io/projected/6a52f0c9-0dde-48d7-83a3-bb05b1217295-kube-api-access-shtw7\") pod \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\" (UID: \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\") " Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.819707 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a52f0c9-0dde-48d7-83a3-bb05b1217295-kube-api-access-shtw7" (OuterVolumeSpecName: "kube-api-access-shtw7") pod "6a52f0c9-0dde-48d7-83a3-bb05b1217295" (UID: "6a52f0c9-0dde-48d7-83a3-bb05b1217295"). InnerVolumeSpecName "kube-api-access-shtw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.836379 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a52f0c9-0dde-48d7-83a3-bb05b1217295-config-data" (OuterVolumeSpecName: "config-data") pod "6a52f0c9-0dde-48d7-83a3-bb05b1217295" (UID: "6a52f0c9-0dde-48d7-83a3-bb05b1217295"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.851057 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a52f0c9-0dde-48d7-83a3-bb05b1217295-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a52f0c9-0dde-48d7-83a3-bb05b1217295" (UID: "6a52f0c9-0dde-48d7-83a3-bb05b1217295"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.907548 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a52f0c9-0dde-48d7-83a3-bb05b1217295-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.907585 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a52f0c9-0dde-48d7-83a3-bb05b1217295-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.907598 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shtw7\" (UniqueName: \"kubernetes.io/projected/6a52f0c9-0dde-48d7-83a3-bb05b1217295-kube-api-access-shtw7\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.994127 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.110573 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-nova-metadata-tls-certs\") pod \"4622969f-2f2e-42d7-81a6-bc6baa386aec\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.110756 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwgkt\" (UniqueName: \"kubernetes.io/projected/4622969f-2f2e-42d7-81a6-bc6baa386aec-kube-api-access-lwgkt\") pod \"4622969f-2f2e-42d7-81a6-bc6baa386aec\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.110875 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-config-data\") pod \"4622969f-2f2e-42d7-81a6-bc6baa386aec\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.111178 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-combined-ca-bundle\") pod \"4622969f-2f2e-42d7-81a6-bc6baa386aec\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.111230 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4622969f-2f2e-42d7-81a6-bc6baa386aec-logs\") pod \"4622969f-2f2e-42d7-81a6-bc6baa386aec\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.112300 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/4622969f-2f2e-42d7-81a6-bc6baa386aec-logs" (OuterVolumeSpecName: "logs") pod "4622969f-2f2e-42d7-81a6-bc6baa386aec" (UID: "4622969f-2f2e-42d7-81a6-bc6baa386aec"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.113859 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4622969f-2f2e-42d7-81a6-bc6baa386aec-kube-api-access-lwgkt" (OuterVolumeSpecName: "kube-api-access-lwgkt") pod "4622969f-2f2e-42d7-81a6-bc6baa386aec" (UID: "4622969f-2f2e-42d7-81a6-bc6baa386aec"). InnerVolumeSpecName "kube-api-access-lwgkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.138259 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-config-data" (OuterVolumeSpecName: "config-data") pod "4622969f-2f2e-42d7-81a6-bc6baa386aec" (UID: "4622969f-2f2e-42d7-81a6-bc6baa386aec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.146956 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4622969f-2f2e-42d7-81a6-bc6baa386aec" (UID: "4622969f-2f2e-42d7-81a6-bc6baa386aec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.156812 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4622969f-2f2e-42d7-81a6-bc6baa386aec" (UID: "4622969f-2f2e-42d7-81a6-bc6baa386aec"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.213775 5136 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.213811 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwgkt\" (UniqueName: \"kubernetes.io/projected/4622969f-2f2e-42d7-81a6-bc6baa386aec-kube-api-access-lwgkt\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.213843 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.213861 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.213871 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4622969f-2f2e-42d7-81a6-bc6baa386aec-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.598701 5136 generic.go:334] "Generic (PLEG): container finished" podID="4622969f-2f2e-42d7-81a6-bc6baa386aec" containerID="5f8f148b9c0b744a015c9e2414e24715604e9a0f21fe5a12fd0d7f8b2489eeb8" exitCode=0 Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.598764 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4622969f-2f2e-42d7-81a6-bc6baa386aec","Type":"ContainerDied","Data":"5f8f148b9c0b744a015c9e2414e24715604e9a0f21fe5a12fd0d7f8b2489eeb8"} 
Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.598834 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4622969f-2f2e-42d7-81a6-bc6baa386aec","Type":"ContainerDied","Data":"c57256ebbbe3274d2806f8f41571ca74c969c50a2ce02b08771f1ab16c5bdd1d"} Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.598791 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.598783 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.598882 5136 scope.go:117] "RemoveContainer" containerID="5f8f148b9c0b744a015c9e2414e24715604e9a0f21fe5a12fd0d7f8b2489eeb8" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.621330 5136 scope.go:117] "RemoveContainer" containerID="9e93c2c0c889e2c58126cc5c6de4c2d17f754184e3d07db0df95a7f6d46049a8" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.640097 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.649361 5136 scope.go:117] "RemoveContainer" containerID="5f8f148b9c0b744a015c9e2414e24715604e9a0f21fe5a12fd0d7f8b2489eeb8" Mar 20 07:15:05 crc kubenswrapper[5136]: E0320 07:15:05.649764 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f8f148b9c0b744a015c9e2414e24715604e9a0f21fe5a12fd0d7f8b2489eeb8\": container with ID starting with 5f8f148b9c0b744a015c9e2414e24715604e9a0f21fe5a12fd0d7f8b2489eeb8 not found: ID does not exist" containerID="5f8f148b9c0b744a015c9e2414e24715604e9a0f21fe5a12fd0d7f8b2489eeb8" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.649803 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5f8f148b9c0b744a015c9e2414e24715604e9a0f21fe5a12fd0d7f8b2489eeb8"} err="failed to get container status \"5f8f148b9c0b744a015c9e2414e24715604e9a0f21fe5a12fd0d7f8b2489eeb8\": rpc error: code = NotFound desc = could not find container \"5f8f148b9c0b744a015c9e2414e24715604e9a0f21fe5a12fd0d7f8b2489eeb8\": container with ID starting with 5f8f148b9c0b744a015c9e2414e24715604e9a0f21fe5a12fd0d7f8b2489eeb8 not found: ID does not exist" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.649836 5136 scope.go:117] "RemoveContainer" containerID="9e93c2c0c889e2c58126cc5c6de4c2d17f754184e3d07db0df95a7f6d46049a8" Mar 20 07:15:05 crc kubenswrapper[5136]: E0320 07:15:05.650031 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e93c2c0c889e2c58126cc5c6de4c2d17f754184e3d07db0df95a7f6d46049a8\": container with ID starting with 9e93c2c0c889e2c58126cc5c6de4c2d17f754184e3d07db0df95a7f6d46049a8 not found: ID does not exist" containerID="9e93c2c0c889e2c58126cc5c6de4c2d17f754184e3d07db0df95a7f6d46049a8" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.650052 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e93c2c0c889e2c58126cc5c6de4c2d17f754184e3d07db0df95a7f6d46049a8"} err="failed to get container status \"9e93c2c0c889e2c58126cc5c6de4c2d17f754184e3d07db0df95a7f6d46049a8\": rpc error: code = NotFound desc = could not find container \"9e93c2c0c889e2c58126cc5c6de4c2d17f754184e3d07db0df95a7f6d46049a8\": container with ID starting with 9e93c2c0c889e2c58126cc5c6de4c2d17f754184e3d07db0df95a7f6d46049a8 not found: ID does not exist" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.656215 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.670326 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-metadata-0"] Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.686957 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.694513 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 07:15:05 crc kubenswrapper[5136]: E0320 07:15:05.694933 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a52f0c9-0dde-48d7-83a3-bb05b1217295" containerName="nova-scheduler-scheduler" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.694946 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a52f0c9-0dde-48d7-83a3-bb05b1217295" containerName="nova-scheduler-scheduler" Mar 20 07:15:05 crc kubenswrapper[5136]: E0320 07:15:05.694957 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4622969f-2f2e-42d7-81a6-bc6baa386aec" containerName="nova-metadata-metadata" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.694963 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4622969f-2f2e-42d7-81a6-bc6baa386aec" containerName="nova-metadata-metadata" Mar 20 07:15:05 crc kubenswrapper[5136]: E0320 07:15:05.694973 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f40568b-2bbc-4d1e-b089-6e08e1eede4b" containerName="collect-profiles" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.694979 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f40568b-2bbc-4d1e-b089-6e08e1eede4b" containerName="collect-profiles" Mar 20 07:15:05 crc kubenswrapper[5136]: E0320 07:15:05.694990 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" containerName="extract-utilities" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.694996 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" containerName="extract-utilities" Mar 20 07:15:05 crc 
kubenswrapper[5136]: E0320 07:15:05.695006 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4622969f-2f2e-42d7-81a6-bc6baa386aec" containerName="nova-metadata-log" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.695012 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4622969f-2f2e-42d7-81a6-bc6baa386aec" containerName="nova-metadata-log" Mar 20 07:15:05 crc kubenswrapper[5136]: E0320 07:15:05.695030 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" containerName="extract-content" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.695035 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" containerName="extract-content" Mar 20 07:15:05 crc kubenswrapper[5136]: E0320 07:15:05.695053 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" containerName="registry-server" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.695059 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" containerName="registry-server" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.695281 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" containerName="registry-server" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.695292 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4622969f-2f2e-42d7-81a6-bc6baa386aec" containerName="nova-metadata-log" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.695303 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4622969f-2f2e-42d7-81a6-bc6baa386aec" containerName="nova-metadata-metadata" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.695314 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f40568b-2bbc-4d1e-b089-6e08e1eede4b" containerName="collect-profiles" 
Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.695327 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a52f0c9-0dde-48d7-83a3-bb05b1217295" containerName="nova-scheduler-scheduler" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.695947 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.707477 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.708252 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.710728 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.712419 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.715589 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.715707 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.724129 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.825996 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-config-data\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.826066 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88mxs\" (UniqueName: \"kubernetes.io/projected/af66742a-1452-436f-a22e-7dc277cf690a-kube-api-access-88mxs\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.826096 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af66742a-1452-436f-a22e-7dc277cf690a-logs\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.826244 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.826320 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\") " pod="openstack/nova-scheduler-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.826473 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.826553 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-vbvkl\" (UniqueName: \"kubernetes.io/projected/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-kube-api-access-vbvkl\") pod \"nova-scheduler-0\" (UID: \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\") " pod="openstack/nova-scheduler-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.826670 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-config-data\") pod \"nova-scheduler-0\" (UID: \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\") " pod="openstack/nova-scheduler-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.928080 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.928174 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbvkl\" (UniqueName: \"kubernetes.io/projected/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-kube-api-access-vbvkl\") pod \"nova-scheduler-0\" (UID: \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\") " pod="openstack/nova-scheduler-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.928213 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-config-data\") pod \"nova-scheduler-0\" (UID: \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\") " pod="openstack/nova-scheduler-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.928247 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-config-data\") pod \"nova-metadata-0\" (UID: 
\"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.928288 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88mxs\" (UniqueName: \"kubernetes.io/projected/af66742a-1452-436f-a22e-7dc277cf690a-kube-api-access-88mxs\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.928319 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af66742a-1452-436f-a22e-7dc277cf690a-logs\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.928350 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.928381 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\") " pod="openstack/nova-scheduler-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.931442 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af66742a-1452-436f-a22e-7dc277cf690a-logs\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.934040 5136 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-config-data\") pod \"nova-scheduler-0\" (UID: \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\") " pod="openstack/nova-scheduler-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.934300 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.934534 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\") " pod="openstack/nova-scheduler-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.934876 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.935293 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-config-data\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.950570 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbvkl\" (UniqueName: \"kubernetes.io/projected/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-kube-api-access-vbvkl\") pod \"nova-scheduler-0\" (UID: 
\"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\") " pod="openstack/nova-scheduler-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.952587 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88mxs\" (UniqueName: \"kubernetes.io/projected/af66742a-1452-436f-a22e-7dc277cf690a-kube-api-access-88mxs\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:06 crc kubenswrapper[5136]: I0320 07:15:06.033679 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 07:15:06 crc kubenswrapper[5136]: I0320 07:15:06.051988 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 07:15:06 crc kubenswrapper[5136]: I0320 07:15:06.412591 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4622969f-2f2e-42d7-81a6-bc6baa386aec" path="/var/lib/kubelet/pods/4622969f-2f2e-42d7-81a6-bc6baa386aec/volumes" Mar 20 07:15:06 crc kubenswrapper[5136]: I0320 07:15:06.413725 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a52f0c9-0dde-48d7-83a3-bb05b1217295" path="/var/lib/kubelet/pods/6a52f0c9-0dde-48d7-83a3-bb05b1217295/volumes" Mar 20 07:15:06 crc kubenswrapper[5136]: I0320 07:15:06.466433 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 07:15:06 crc kubenswrapper[5136]: W0320 07:15:06.468580 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2e8f54f_5434_4cf0_94b9_38648bf7ba77.slice/crio-cc1f222689540ab41cb0293a65f9305d971ad1f909b8b87d7d0b7c47db1a4f3a WatchSource:0}: Error finding container cc1f222689540ab41cb0293a65f9305d971ad1f909b8b87d7d0b7c47db1a4f3a: Status 404 returned error can't find the container with id cc1f222689540ab41cb0293a65f9305d971ad1f909b8b87d7d0b7c47db1a4f3a Mar 20 07:15:06 
crc kubenswrapper[5136]: I0320 07:15:06.564921 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:15:06 crc kubenswrapper[5136]: W0320 07:15:06.566779 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf66742a_1452_436f_a22e_7dc277cf690a.slice/crio-5ace71694fb7cbf300c7ddec51eede62a282d34c72dfdbb59b1c4aee86e0f888 WatchSource:0}: Error finding container 5ace71694fb7cbf300c7ddec51eede62a282d34c72dfdbb59b1c4aee86e0f888: Status 404 returned error can't find the container with id 5ace71694fb7cbf300c7ddec51eede62a282d34c72dfdbb59b1c4aee86e0f888 Mar 20 07:15:06 crc kubenswrapper[5136]: I0320 07:15:06.612995 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f2e8f54f-5434-4cf0-94b9-38648bf7ba77","Type":"ContainerStarted","Data":"cc1f222689540ab41cb0293a65f9305d971ad1f909b8b87d7d0b7c47db1a4f3a"} Mar 20 07:15:06 crc kubenswrapper[5136]: I0320 07:15:06.614802 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af66742a-1452-436f-a22e-7dc277cf690a","Type":"ContainerStarted","Data":"5ace71694fb7cbf300c7ddec51eede62a282d34c72dfdbb59b1c4aee86e0f888"} Mar 20 07:15:07 crc kubenswrapper[5136]: I0320 07:15:07.624470 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af66742a-1452-436f-a22e-7dc277cf690a","Type":"ContainerStarted","Data":"f58e5688042713c0b783865903009ad2d11f4198be5353e16c7c078fdfea1674"} Mar 20 07:15:07 crc kubenswrapper[5136]: I0320 07:15:07.624782 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af66742a-1452-436f-a22e-7dc277cf690a","Type":"ContainerStarted","Data":"deb975c81a5be70590a5a6b6abaf2e6a45b6848b791c313d96f348b2eb335fa8"} Mar 20 07:15:07 crc kubenswrapper[5136]: I0320 07:15:07.628293 5136 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-scheduler-0" event={"ID":"f2e8f54f-5434-4cf0-94b9-38648bf7ba77","Type":"ContainerStarted","Data":"103dd4a533822b5106c261bfa1f3d9201de7d9327b3b2827802ee9cd5b825fc5"} Mar 20 07:15:07 crc kubenswrapper[5136]: I0320 07:15:07.650071 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.650054149 podStartE2EDuration="2.650054149s" podCreationTimestamp="2026-03-20 07:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:15:07.647965154 +0000 UTC m=+1539.907276385" watchObservedRunningTime="2026-03-20 07:15:07.650054149 +0000 UTC m=+1539.909365300" Mar 20 07:15:07 crc kubenswrapper[5136]: I0320 07:15:07.686610 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.686584326 podStartE2EDuration="2.686584326s" podCreationTimestamp="2026-03-20 07:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:15:07.668467803 +0000 UTC m=+1539.927779014" watchObservedRunningTime="2026-03-20 07:15:07.686584326 +0000 UTC m=+1539.945895517" Mar 20 07:15:10 crc kubenswrapper[5136]: I0320 07:15:10.986474 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 07:15:10 crc kubenswrapper[5136]: I0320 07:15:10.986932 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 07:15:11 crc kubenswrapper[5136]: I0320 07:15:11.034193 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 20 07:15:11 crc kubenswrapper[5136]: I0320 07:15:11.998057 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9dc2d320-2468-4a45-ba6b-69ea478b5e8c" 
containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 07:15:11 crc kubenswrapper[5136]: I0320 07:15:11.998100 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9dc2d320-2468-4a45-ba6b-69ea478b5e8c" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 07:15:15 crc kubenswrapper[5136]: I0320 07:15:15.822288 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:15:15 crc kubenswrapper[5136]: I0320 07:15:15.823153 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:15:16 crc kubenswrapper[5136]: I0320 07:15:16.034010 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 20 07:15:16 crc kubenswrapper[5136]: I0320 07:15:16.053118 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 20 07:15:16 crc kubenswrapper[5136]: I0320 07:15:16.053179 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 20 07:15:16 crc kubenswrapper[5136]: I0320 07:15:16.059920 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 20 07:15:16 crc 
kubenswrapper[5136]: I0320 07:15:16.764420 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 20 07:15:17 crc kubenswrapper[5136]: I0320 07:15:17.030023 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 20 07:15:17 crc kubenswrapper[5136]: I0320 07:15:17.073974 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="af66742a-1452-436f-a22e-7dc277cf690a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.210:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 07:15:17 crc kubenswrapper[5136]: I0320 07:15:17.074339 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="af66742a-1452-436f-a22e-7dc277cf690a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.210:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 07:15:18 crc kubenswrapper[5136]: I0320 07:15:18.985983 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 20 07:15:18 crc kubenswrapper[5136]: I0320 07:15:18.986318 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 20 07:15:20 crc kubenswrapper[5136]: I0320 07:15:20.996772 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 20 07:15:21 crc kubenswrapper[5136]: I0320 07:15:21.000545 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 20 07:15:21 crc kubenswrapper[5136]: I0320 07:15:21.004778 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 20 07:15:21 crc kubenswrapper[5136]: I0320 07:15:21.793745 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-api-0" Mar 20 07:15:24 crc kubenswrapper[5136]: I0320 07:15:24.052865 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 20 07:15:24 crc kubenswrapper[5136]: I0320 07:15:24.053313 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 20 07:15:26 crc kubenswrapper[5136]: I0320 07:15:26.057616 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 20 07:15:26 crc kubenswrapper[5136]: I0320 07:15:26.057985 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 20 07:15:26 crc kubenswrapper[5136]: I0320 07:15:26.062831 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 20 07:15:26 crc kubenswrapper[5136]: I0320 07:15:26.063672 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 20 07:15:45 crc kubenswrapper[5136]: I0320 07:15:45.822186 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:15:45 crc kubenswrapper[5136]: I0320 07:15:45.824184 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:15:45 crc kubenswrapper[5136]: I0320 07:15:45.824665 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 07:15:45 crc kubenswrapper[5136]: I0320 07:15:45.825525 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dd4323cb06cbe9a996dc58d915178240fb92871ebdc9b015588397e6f7268db6"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 07:15:45 crc kubenswrapper[5136]: I0320 07:15:45.825599 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://dd4323cb06cbe9a996dc58d915178240fb92871ebdc9b015588397e6f7268db6" gracePeriod=600 Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.111562 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="dd4323cb06cbe9a996dc58d915178240fb92871ebdc9b015588397e6f7268db6" exitCode=0 Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.111888 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"dd4323cb06cbe9a996dc58d915178240fb92871ebdc9b015588397e6f7268db6"} Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.111920 5136 scope.go:117] "RemoveContainer" containerID="e88f4329620c5c7ec6c41ba99712e43215e37853afedf89b0a54491b5a4bfe4f" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.175183 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5429-account-create-update-kc9f7"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.236898 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/barbican-5429-account-create-update-kc9f7"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.317750 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-5429-account-create-update-54j5b"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.320398 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5429-account-create-update-54j5b" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.325837 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.348175 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-b5fwk"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.362932 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5429-account-create-update-54j5b"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.377857 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-e3bd-account-create-update-zlrc6"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.388743 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79272887-6a7f-4336-858a-6844ed6e8a37-operator-scripts\") pod \"barbican-5429-account-create-update-54j5b\" (UID: \"79272887-6a7f-4336-858a-6844ed6e8a37\") " pod="openstack/barbican-5429-account-create-update-54j5b" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.388864 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgnfv\" (UniqueName: \"kubernetes.io/projected/79272887-6a7f-4336-858a-6844ed6e8a37-kube-api-access-hgnfv\") pod \"barbican-5429-account-create-update-54j5b\" (UID: \"79272887-6a7f-4336-858a-6844ed6e8a37\") " 
pod="openstack/barbican-5429-account-create-update-54j5b" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.392408 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-b5fwk"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.418058 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c44ee109-b721-41c2-bc45-8c6097d31402" path="/var/lib/kubelet/pods/c44ee109-b721-41c2-bc45-8c6097d31402/volumes" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.418634 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec1091b0-0c0e-40a9-9131-93d8e912d0af" path="/var/lib/kubelet/pods/ec1091b0-0c0e-40a9-9131-93d8e912d0af/volumes" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.419158 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-e3bd-account-create-update-zlrc6"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.434614 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-mzns4"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.440895 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mzns4" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.454062 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.472206 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mzns4"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.490302 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgnfv\" (UniqueName: \"kubernetes.io/projected/79272887-6a7f-4336-858a-6844ed6e8a37-kube-api-access-hgnfv\") pod \"barbican-5429-account-create-update-54j5b\" (UID: \"79272887-6a7f-4336-858a-6844ed6e8a37\") " pod="openstack/barbican-5429-account-create-update-54j5b" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.490433 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79272887-6a7f-4336-858a-6844ed6e8a37-operator-scripts\") pod \"barbican-5429-account-create-update-54j5b\" (UID: \"79272887-6a7f-4336-858a-6844ed6e8a37\") " pod="openstack/barbican-5429-account-create-update-54j5b" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.491700 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79272887-6a7f-4336-858a-6844ed6e8a37-operator-scripts\") pod \"barbican-5429-account-create-update-54j5b\" (UID: \"79272887-6a7f-4336-858a-6844ed6e8a37\") " pod="openstack/barbican-5429-account-create-update-54j5b" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.513721 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-e3bd-account-create-update-gzjnr"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.515033 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-e3bd-account-create-update-gzjnr" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.517938 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.533895 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.534202 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="17ad787b-18bc-4afd-840b-2458b494094a" containerName="openstackclient" containerID="cri-o://ef5361e0b73e9c41cc23b5ebe9348fce6a363e59e0bc84a305ad44756dd780af" gracePeriod=2 Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.557738 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgnfv\" (UniqueName: \"kubernetes.io/projected/79272887-6a7f-4336-858a-6844ed6e8a37-kube-api-access-hgnfv\") pod \"barbican-5429-account-create-update-54j5b\" (UID: \"79272887-6a7f-4336-858a-6844ed6e8a37\") " pod="openstack/barbican-5429-account-create-update-54j5b" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.593289 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knrlt\" (UniqueName: \"kubernetes.io/projected/5d2085e7-db7e-4655-965c-027d03e474e0-kube-api-access-knrlt\") pod \"root-account-create-update-mzns4\" (UID: \"5d2085e7-db7e-4655-965c-027d03e474e0\") " pod="openstack/root-account-create-update-mzns4" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.593523 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d2085e7-db7e-4655-965c-027d03e474e0-operator-scripts\") pod \"root-account-create-update-mzns4\" (UID: \"5d2085e7-db7e-4655-965c-027d03e474e0\") " 
pod="openstack/root-account-create-update-mzns4" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.601959 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.613302 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e3bd-account-create-update-gzjnr"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.631558 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.642551 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5429-account-create-update-54j5b" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.694622 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d2085e7-db7e-4655-965c-027d03e474e0-operator-scripts\") pod \"root-account-create-update-mzns4\" (UID: \"5d2085e7-db7e-4655-965c-027d03e474e0\") " pod="openstack/root-account-create-update-mzns4" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.694668 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h64x\" (UniqueName: \"kubernetes.io/projected/12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3-kube-api-access-5h64x\") pod \"nova-api-e3bd-account-create-update-gzjnr\" (UID: \"12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3\") " pod="openstack/nova-api-e3bd-account-create-update-gzjnr" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.694695 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3-operator-scripts\") pod \"nova-api-e3bd-account-create-update-gzjnr\" (UID: \"12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3\") " 
pod="openstack/nova-api-e3bd-account-create-update-gzjnr" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.694736 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knrlt\" (UniqueName: \"kubernetes.io/projected/5d2085e7-db7e-4655-965c-027d03e474e0-kube-api-access-knrlt\") pod \"root-account-create-update-mzns4\" (UID: \"5d2085e7-db7e-4655-965c-027d03e474e0\") " pod="openstack/root-account-create-update-mzns4" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.695235 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d2085e7-db7e-4655-965c-027d03e474e0-operator-scripts\") pod \"root-account-create-update-mzns4\" (UID: \"5d2085e7-db7e-4655-965c-027d03e474e0\") " pod="openstack/root-account-create-update-mzns4" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.738975 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.739270 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="7acbc76f-ff83-451e-826f-5fd1f977f74f" containerName="ovn-northd" containerID="cri-o://91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242" gracePeriod=30 Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.739691 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="7acbc76f-ff83-451e-826f-5fd1f977f74f" containerName="openstack-network-exporter" containerID="cri-o://e19dbd1e39e5efc5a6a0b99e7790d9fb9e1136146e9bec8dcf45b6006114f4bf" gracePeriod=30 Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.750435 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knrlt\" (UniqueName: \"kubernetes.io/projected/5d2085e7-db7e-4655-965c-027d03e474e0-kube-api-access-knrlt\") pod 
\"root-account-create-update-mzns4\" (UID: \"5d2085e7-db7e-4655-965c-027d03e474e0\") " pod="openstack/root-account-create-update-mzns4" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.762489 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mzns4" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.781658 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-fdc6-account-create-update-sfc2q"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.796604 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h64x\" (UniqueName: \"kubernetes.io/projected/12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3-kube-api-access-5h64x\") pod \"nova-api-e3bd-account-create-update-gzjnr\" (UID: \"12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3\") " pod="openstack/nova-api-e3bd-account-create-update-gzjnr" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.797194 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3-operator-scripts\") pod \"nova-api-e3bd-account-create-update-gzjnr\" (UID: \"12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3\") " pod="openstack/nova-api-e3bd-account-create-update-gzjnr" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.798648 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3-operator-scripts\") pod \"nova-api-e3bd-account-create-update-gzjnr\" (UID: \"12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3\") " pod="openstack/nova-api-e3bd-account-create-update-gzjnr" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.812882 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-fdc6-account-create-update-sfc2q"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.855590 
5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h64x\" (UniqueName: \"kubernetes.io/projected/12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3-kube-api-access-5h64x\") pod \"nova-api-e3bd-account-create-update-gzjnr\" (UID: \"12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3\") " pod="openstack/nova-api-e3bd-account-create-update-gzjnr" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.868959 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-0423-account-create-update-ntmkb"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.884543 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e3bd-account-create-update-gzjnr" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.948582 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-0423-account-create-update-ntmkb"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.020957 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-0423-account-create-update-9sp6w"] Mar 20 07:15:47 crc kubenswrapper[5136]: E0320 07:15:47.028747 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17ad787b-18bc-4afd-840b-2458b494094a" containerName="openstackclient" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.028779 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="17ad787b-18bc-4afd-840b-2458b494094a" containerName="openstackclient" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.029077 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="17ad787b-18bc-4afd-840b-2458b494094a" containerName="openstackclient" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.029709 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0423-account-create-update-9sp6w" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.034757 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.034885 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0f90-account-create-update-k7zvd"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.036197 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0f90-account-create-update-k7zvd" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.042575 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.096669 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0423-account-create-update-9sp6w"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.106897 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-a0f6-account-create-update-d5xps"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.108303 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-a0f6-account-create-update-d5xps" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.114838 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0f90-account-create-update-k7zvd"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.117276 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.127081 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a0f6-account-create-update-d5xps"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.156881 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.157560 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="8b1461d1-f963-40b0-8cad-a5b2735eedcc" containerName="openstack-network-exporter" containerID="cri-o://ad85344499cd2c34ea152d61f6efd8d5a2edf8814c85572a74d76108d11d3655" gracePeriod=300 Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.167231 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d"} Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.216924 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgbc2\" (UniqueName: \"kubernetes.io/projected/1490877f-a8fa-4bcd-8c33-be84b9b890aa-kube-api-access-bgbc2\") pod \"placement-a0f6-account-create-update-d5xps\" (UID: \"1490877f-a8fa-4bcd-8c33-be84b9b890aa\") " pod="openstack/placement-a0f6-account-create-update-d5xps" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.217002 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1490877f-a8fa-4bcd-8c33-be84b9b890aa-operator-scripts\") pod \"placement-a0f6-account-create-update-d5xps\" (UID: \"1490877f-a8fa-4bcd-8c33-be84b9b890aa\") " pod="openstack/placement-a0f6-account-create-update-d5xps" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.217021 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17669c27-ef49-4ced-a620-ef7394f02110-operator-scripts\") pod \"nova-cell1-0423-account-create-update-9sp6w\" (UID: \"17669c27-ef49-4ced-a620-ef7394f02110\") " pod="openstack/nova-cell1-0423-account-create-update-9sp6w" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.217067 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6638ac71-bcca-4dbb-9ec3-d9ef0da336db-operator-scripts\") pod \"nova-cell0-0f90-account-create-update-k7zvd\" (UID: \"6638ac71-bcca-4dbb-9ec3-d9ef0da336db\") " pod="openstack/nova-cell0-0f90-account-create-update-k7zvd" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.217093 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzplt\" (UniqueName: \"kubernetes.io/projected/17669c27-ef49-4ced-a620-ef7394f02110-kube-api-access-jzplt\") pod \"nova-cell1-0423-account-create-update-9sp6w\" (UID: \"17669c27-ef49-4ced-a620-ef7394f02110\") " pod="openstack/nova-cell1-0423-account-create-update-9sp6w" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.217113 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x89p2\" (UniqueName: \"kubernetes.io/projected/6638ac71-bcca-4dbb-9ec3-d9ef0da336db-kube-api-access-x89p2\") 
pod \"nova-cell0-0f90-account-create-update-k7zvd\" (UID: \"6638ac71-bcca-4dbb-9ec3-d9ef0da336db\") " pod="openstack/nova-cell0-0f90-account-create-update-k7zvd" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.222960 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0f90-account-create-update-lv952"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.245794 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0f90-account-create-update-lv952"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.258325 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.258676 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" containerName="openstack-network-exporter" containerID="cri-o://55e70c80be714d08791bbb875a2885eb056808546361bafa1ce59b4a2b4afd94" gracePeriod=300 Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.277932 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-llt2h"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.302936 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a0f6-account-create-update-c9hl7"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.319370 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgbc2\" (UniqueName: \"kubernetes.io/projected/1490877f-a8fa-4bcd-8c33-be84b9b890aa-kube-api-access-bgbc2\") pod \"placement-a0f6-account-create-update-d5xps\" (UID: \"1490877f-a8fa-4bcd-8c33-be84b9b890aa\") " pod="openstack/placement-a0f6-account-create-update-d5xps" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.319460 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1490877f-a8fa-4bcd-8c33-be84b9b890aa-operator-scripts\") pod \"placement-a0f6-account-create-update-d5xps\" (UID: \"1490877f-a8fa-4bcd-8c33-be84b9b890aa\") " pod="openstack/placement-a0f6-account-create-update-d5xps" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.319485 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17669c27-ef49-4ced-a620-ef7394f02110-operator-scripts\") pod \"nova-cell1-0423-account-create-update-9sp6w\" (UID: \"17669c27-ef49-4ced-a620-ef7394f02110\") " pod="openstack/nova-cell1-0423-account-create-update-9sp6w" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.319535 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6638ac71-bcca-4dbb-9ec3-d9ef0da336db-operator-scripts\") pod \"nova-cell0-0f90-account-create-update-k7zvd\" (UID: \"6638ac71-bcca-4dbb-9ec3-d9ef0da336db\") " pod="openstack/nova-cell0-0f90-account-create-update-k7zvd" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.319582 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzplt\" (UniqueName: \"kubernetes.io/projected/17669c27-ef49-4ced-a620-ef7394f02110-kube-api-access-jzplt\") pod \"nova-cell1-0423-account-create-update-9sp6w\" (UID: \"17669c27-ef49-4ced-a620-ef7394f02110\") " pod="openstack/nova-cell1-0423-account-create-update-9sp6w" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.319608 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x89p2\" (UniqueName: \"kubernetes.io/projected/6638ac71-bcca-4dbb-9ec3-d9ef0da336db-kube-api-access-x89p2\") pod \"nova-cell0-0f90-account-create-update-k7zvd\" (UID: \"6638ac71-bcca-4dbb-9ec3-d9ef0da336db\") " pod="openstack/nova-cell0-0f90-account-create-update-k7zvd" Mar 20 07:15:47 crc kubenswrapper[5136]: 
I0320 07:15:47.321069 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17669c27-ef49-4ced-a620-ef7394f02110-operator-scripts\") pod \"nova-cell1-0423-account-create-update-9sp6w\" (UID: \"17669c27-ef49-4ced-a620-ef7394f02110\") " pod="openstack/nova-cell1-0423-account-create-update-9sp6w" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.321534 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6638ac71-bcca-4dbb-9ec3-d9ef0da336db-operator-scripts\") pod \"nova-cell0-0f90-account-create-update-k7zvd\" (UID: \"6638ac71-bcca-4dbb-9ec3-d9ef0da336db\") " pod="openstack/nova-cell0-0f90-account-create-update-k7zvd" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.322082 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1490877f-a8fa-4bcd-8c33-be84b9b890aa-operator-scripts\") pod \"placement-a0f6-account-create-update-d5xps\" (UID: \"1490877f-a8fa-4bcd-8c33-be84b9b890aa\") " pod="openstack/placement-a0f6-account-create-update-d5xps" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.357722 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-llt2h"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.396566 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-a0f6-account-create-update-c9hl7"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.397880 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x89p2\" (UniqueName: \"kubernetes.io/projected/6638ac71-bcca-4dbb-9ec3-d9ef0da336db-kube-api-access-x89p2\") pod \"nova-cell0-0f90-account-create-update-k7zvd\" (UID: \"6638ac71-bcca-4dbb-9ec3-d9ef0da336db\") " pod="openstack/nova-cell0-0f90-account-create-update-k7zvd" Mar 20 07:15:47 crc 
kubenswrapper[5136]: I0320 07:15:47.397920 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgbc2\" (UniqueName: \"kubernetes.io/projected/1490877f-a8fa-4bcd-8c33-be84b9b890aa-kube-api-access-bgbc2\") pod \"placement-a0f6-account-create-update-d5xps\" (UID: \"1490877f-a8fa-4bcd-8c33-be84b9b890aa\") " pod="openstack/placement-a0f6-account-create-update-d5xps" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.429395 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzplt\" (UniqueName: \"kubernetes.io/projected/17669c27-ef49-4ced-a620-ef7394f02110-kube-api-access-jzplt\") pod \"nova-cell1-0423-account-create-update-9sp6w\" (UID: \"17669c27-ef49-4ced-a620-ef7394f02110\") " pod="openstack/nova-cell1-0423-account-create-update-9sp6w" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.434129 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a033-account-create-update-ww8m7"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.466870 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a033-account-create-update-ww8m7"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.486255 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bc06-account-create-update-lm56h"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.494388 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0423-account-create-update-9sp6w" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.557017 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="8b1461d1-f963-40b0-8cad-a5b2735eedcc" containerName="ovsdbserver-nb" containerID="cri-o://c02c0e7e0e6b0a33a002d424a3ac60fcdca9be308ea7764e0da41ae85bb3639a" gracePeriod=300 Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.557395 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" containerName="ovsdbserver-sb" containerID="cri-o://7f81f78f97fc5d48f48b6354b794c050f707e5b35fc6d46c7df2de9e4878960b" gracePeriod=300 Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.564687 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-bc06-account-create-update-lm56h"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.569571 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0f90-account-create-update-k7zvd" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.583869 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-jv7f9"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.598072 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-jv7f9"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.608802 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-a0f6-account-create-update-d5xps" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.656748 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-ldzkm"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.675466 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-ldzkm"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.701056 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.736111 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-n6cqg"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.757003 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-n6cqg"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.778556 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-c5brf"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.792264 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-c5brf"] Mar 20 07:15:47 crc kubenswrapper[5136]: E0320 07:15:47.844246 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 20 07:15:47 crc kubenswrapper[5136]: E0320 07:15:47.844728 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data podName:c355061d-c5fd-4655-aa7e-37b5a40a0400 nodeName:}" failed. No retries permitted until 2026-03-20 07:15:48.344709143 +0000 UTC m=+1580.604020294 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data") pod "rabbitmq-cell1-server-0" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400") : configmap "rabbitmq-cell1-config-data" not found Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.886621 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gnwt6"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.902509 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-ldp4w"] Mar 20 07:15:47 crc kubenswrapper[5136]: E0320 07:15:47.946493 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Mar 20 07:15:47 crc kubenswrapper[5136]: E0320 07:15:47.983962 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.987876 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-9v9kr"] Mar 20 07:15:48 crc kubenswrapper[5136]: E0320 07:15:48.021398 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 07:15:48 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95,Command:[/bin/sh -c #!/bin/bash Mar 20 07:15:48 crc kubenswrapper[5136]: Mar 20 07:15:48 crc 
kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 07:15:48 crc kubenswrapper[5136]: Mar 20 07:15:48 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 07:15:48 crc kubenswrapper[5136]: Mar 20 07:15:48 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 07:15:48 crc kubenswrapper[5136]: Mar 20 07:15:48 crc kubenswrapper[5136]: if [ -n "barbican" ]; then Mar 20 07:15:48 crc kubenswrapper[5136]: GRANT_DATABASE="barbican" Mar 20 07:15:48 crc kubenswrapper[5136]: else Mar 20 07:15:48 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 07:15:48 crc kubenswrapper[5136]: fi Mar 20 07:15:48 crc kubenswrapper[5136]: Mar 20 07:15:48 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 07:15:48 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 07:15:48 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 07:15:48 crc kubenswrapper[5136]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 20 07:15:48 crc kubenswrapper[5136]: # support updates Mar 20 07:15:48 crc kubenswrapper[5136]: Mar 20 07:15:48 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 07:15:48 crc kubenswrapper[5136]: E0320 07:15:48.029268 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-5429-account-create-update-54j5b" podUID="79272887-6a7f-4336-858a-6844ed6e8a37" Mar 20 07:15:48 crc kubenswrapper[5136]: E0320 07:15:48.059770 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Mar 20 07:15:48 crc kubenswrapper[5136]: E0320 07:15:48.060636 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="7acbc76f-ff83-451e-826f-5fd1f977f74f" containerName="ovn-northd" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.083446 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-9v9kr"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.185005 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-vr74x"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.185706 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-vr74x" podUID="0ede60bf-5bc5-4267-9849-9389df070048" containerName="openstack-network-exporter" 
containerID="cri-o://c83952221ac9ae15d237b01aa417d2a8651bd6786c0034250cebe0e17be31690" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.200327 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-kxk7p"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.210862 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-kxk7p"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.210968 5136 generic.go:334] "Generic (PLEG): container finished" podID="7acbc76f-ff83-451e-826f-5fd1f977f74f" containerID="e19dbd1e39e5efc5a6a0b99e7790d9fb9e1136146e9bec8dcf45b6006114f4bf" exitCode=2 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.211060 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7acbc76f-ff83-451e-826f-5fd1f977f74f","Type":"ContainerDied","Data":"e19dbd1e39e5efc5a6a0b99e7790d9fb9e1136146e9bec8dcf45b6006114f4bf"} Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.216328 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bd85b459c-k44rj"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.216596 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" podUID="ccc70cce-242d-4c99-8d3f-ddb541904e29" containerName="dnsmasq-dns" containerID="cri-o://ccb4f9c0c6dc989c486c61d0a17af6a9e3438c25ae843380545c453141823051" gracePeriod=10 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.218568 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f872c575-a357-4b29-b5e8-cf5dbe6f3d7a/ovsdbserver-sb/0.log" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.218605 5136 generic.go:334] "Generic (PLEG): container finished" podID="f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" containerID="55e70c80be714d08791bbb875a2885eb056808546361bafa1ce59b4a2b4afd94" exitCode=2 Mar 20 07:15:48 crc 
kubenswrapper[5136]: I0320 07:15:48.218621 5136 generic.go:334] "Generic (PLEG): container finished" podID="f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" containerID="7f81f78f97fc5d48f48b6354b794c050f707e5b35fc6d46c7df2de9e4878960b" exitCode=143 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.218663 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a","Type":"ContainerDied","Data":"55e70c80be714d08791bbb875a2885eb056808546361bafa1ce59b4a2b4afd94"} Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.218698 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a","Type":"ContainerDied","Data":"7f81f78f97fc5d48f48b6354b794c050f707e5b35fc6d46c7df2de9e4878960b"} Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.244468 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5429-account-create-update-54j5b" event={"ID":"79272887-6a7f-4336-858a-6844ed6e8a37","Type":"ContainerStarted","Data":"7d5996405d499205fc914d73a14603978ef7492a89b03a07f615cb81cd56c34d"} Mar 20 07:15:48 crc kubenswrapper[5136]: E0320 07:15:48.246904 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 07:15:48 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95,Command:[/bin/sh -c #!/bin/bash Mar 20 07:15:48 crc kubenswrapper[5136]: Mar 20 07:15:48 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 07:15:48 crc kubenswrapper[5136]: Mar 20 07:15:48 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 07:15:48 crc kubenswrapper[5136]: Mar 20 07:15:48 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 
07:15:48 crc kubenswrapper[5136]: Mar 20 07:15:48 crc kubenswrapper[5136]: if [ -n "barbican" ]; then Mar 20 07:15:48 crc kubenswrapper[5136]: GRANT_DATABASE="barbican" Mar 20 07:15:48 crc kubenswrapper[5136]: else Mar 20 07:15:48 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 07:15:48 crc kubenswrapper[5136]: fi Mar 20 07:15:48 crc kubenswrapper[5136]: Mar 20 07:15:48 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 07:15:48 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 07:15:48 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 07:15:48 crc kubenswrapper[5136]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 20 07:15:48 crc kubenswrapper[5136]: # support updates Mar 20 07:15:48 crc kubenswrapper[5136]: Mar 20 07:15:48 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.259979 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-v7xvp"] Mar 20 07:15:48 crc kubenswrapper[5136]: E0320 07:15:48.260496 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-5429-account-create-update-54j5b" podUID="79272887-6a7f-4336-858a-6844ed6e8a37" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.274930 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8b1461d1-f963-40b0-8cad-a5b2735eedcc/ovsdbserver-nb/0.log" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.275197 5136 generic.go:334] "Generic (PLEG): container finished" podID="8b1461d1-f963-40b0-8cad-a5b2735eedcc" containerID="ad85344499cd2c34ea152d61f6efd8d5a2edf8814c85572a74d76108d11d3655" exitCode=2 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 
07:15:48.279366 5136 generic.go:334] "Generic (PLEG): container finished" podID="8b1461d1-f963-40b0-8cad-a5b2735eedcc" containerID="c02c0e7e0e6b0a33a002d424a3ac60fcdca9be308ea7764e0da41ae85bb3639a" exitCode=143 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.277156 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8b1461d1-f963-40b0-8cad-a5b2735eedcc","Type":"ContainerDied","Data":"ad85344499cd2c34ea152d61f6efd8d5a2edf8814c85572a74d76108d11d3655"} Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.279568 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8b1461d1-f963-40b0-8cad-a5b2735eedcc","Type":"ContainerDied","Data":"c02c0e7e0e6b0a33a002d424a3ac60fcdca9be308ea7764e0da41ae85bb3639a"} Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.279602 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-v7xvp"] Mar 20 07:15:48 crc kubenswrapper[5136]: E0320 07:15:48.381048 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 20 07:15:48 crc kubenswrapper[5136]: E0320 07:15:48.381153 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data podName:c355061d-c5fd-4655-aa7e-37b5a40a0400 nodeName:}" failed. No retries permitted until 2026-03-20 07:15:49.381118015 +0000 UTC m=+1581.640429166 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data") pod "rabbitmq-cell1-server-0" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400") : configmap "rabbitmq-cell1-config-data" not found Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.391906 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.392398 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-server" containerID="cri-o://83949dae8602e569fb150aeab8e9d2eb9613a5621a13c04058efa4f9dd9f300d" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.393324 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-updater" containerID="cri-o://e8ad73e1a0dc1afcb0ef9f187b10091e6f56e3fa6f0fda3383c1a9da5f7aff67" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.393440 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="rsync" containerID="cri-o://32404a19502351fbc3ac4a39615a6d3615a32ac961dda04a10694ca4139bf949" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.393516 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-expirer" containerID="cri-o://34db5f0f26eb4ddebb1606a5b4d86660e0c87c646601cd77837816ebb46b09a0" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.393587 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" 
podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-updater" containerID="cri-o://81b02283172458a2bdc8e165bbbbeb7ea96a03cbbe39b13a588b28c223890c2c" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.393656 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-auditor" containerID="cri-o://1b16b20dbd8d4539f67032ae3d013ca240799b991d0700b6f5acd362ea9ea532" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.393711 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-replicator" containerID="cri-o://9683c79e385aab29220bb43ba7f0e910b9a5950fe8b31997f17e20626e70a786" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.393754 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-server" containerID="cri-o://09326f1cbbdb488bd8aa1fff37c4c2948117647badc40e244ff6f4905e759d43" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.393982 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-reaper" containerID="cri-o://c9f8c6f25f2067d893987b366765d078076a7f16dd40a56558cdc7ca6436f346" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.394068 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-auditor" containerID="cri-o://cb5954574bc442ce46a1c3ab46d37e5df7b891264fc62e27a9808855f5912da6" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.394147 5136 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-replicator" containerID="cri-o://8dbe0818cd9fc5f247918e143232a8d1b8d59f91b26bb44407c84cb8783ff2a3" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.394223 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-server" containerID="cri-o://2781e040da54a91c2e8f03b5afbcb3150c3df4ebd12d977152902fa44468e5f3" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.394291 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-replicator" containerID="cri-o://2a0075a45d95589caf98b737d491a438c7cb43e5b9c36c7d07ea1aebe37bd4aa" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.394343 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-auditor" containerID="cri-o://ece61fe1418a6ebf3f8719105d88f8a8d234c50f91addf3cf7652758e2238539" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.396909 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="swift-recon-cron" containerID="cri-o://f70c7805bed8025d369139af1fcef9a0696163fd95d836d75eb51737f5c49f2a" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.453744 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16f28a76-f7a5-4980-a693-7bd078f3c128" path="/var/lib/kubelet/pods/16f28a76-f7a5-4980-a693-7bd078f3c128/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 
07:15:48.454583 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a1492b7-73df-440c-9246-ae0e3c2e8802" path="/var/lib/kubelet/pods/2a1492b7-73df-440c-9246-ae0e3c2e8802/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.455091 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fc03366-82a1-4e30-a7e8-a06e16a8a14f" path="/var/lib/kubelet/pods/2fc03366-82a1-4e30-a7e8-a06e16a8a14f/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.456805 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f5241dc-9fdc-4e75-9924-fb00a2e6119d" path="/var/lib/kubelet/pods/4f5241dc-9fdc-4e75-9924-fb00a2e6119d/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.459670 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52702304-46c3-4028-af56-60e936dea0a9" path="/var/lib/kubelet/pods/52702304-46c3-4028-af56-60e936dea0a9/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.466441 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61300b5b-7c36-4857-a0bf-631bf3cbb001" path="/var/lib/kubelet/pods/61300b5b-7c36-4857-a0bf-631bf3cbb001/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.467104 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fd262d5-bfc7-49ae-908e-709fa9d0f55f" path="/var/lib/kubelet/pods/7fd262d5-bfc7-49ae-908e-709fa9d0f55f/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.467786 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81055905-a498-49a7-917a-2032a292710e" path="/var/lib/kubelet/pods/81055905-a498-49a7-917a-2032a292710e/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.469193 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8c6efdb-3b8c-4123-bfb6-a67cd416fb18" path="/var/lib/kubelet/pods/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 
07:15:48.470175 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abe527e6-fcfe-4955-a1c4-b2b63f1e3c60" path="/var/lib/kubelet/pods/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.470675 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd689ec0-53e3-498c-9bd7-e6c4be0a94ab" path="/var/lib/kubelet/pods/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.471469 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccfe42cb-9794-449c-8ad8-54d68bf21607" path="/var/lib/kubelet/pods/ccfe42cb-9794-449c-8ad8-54d68bf21607/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.474092 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2b269d7-6c83-46fd-b85c-5d9dba5ccbda" path="/var/lib/kubelet/pods/d2b269d7-6c83-46fd-b85c-5d9dba5ccbda/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.474561 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f91601d4-11a0-4327-8f7e-6856df2b4643" path="/var/lib/kubelet/pods/f91601d4-11a0-4327-8f7e-6856df2b4643/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.475039 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa8bbe04-14be-44c7-8264-0280abbe2023" path="/var/lib/kubelet/pods/fa8bbe04-14be-44c7-8264-0280abbe2023/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.475681 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5429-account-create-update-54j5b"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.478358 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.478572 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" 
podUID="31adef78-59fe-4327-9586-0c12177c7bb7" containerName="cinder-scheduler" containerID="cri-o://70798d2130ec02fe1e87b09e3e6f0c61b8831e3d853bce3d884365eb07f74f1c" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.478885 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="31adef78-59fe-4327-9586-0c12177c7bb7" containerName="probe" containerID="cri-o://46238ed837d0eef475008679380016865ea47dbe19adbb4fa48f11493fc145bb" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.486763 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-qrg9s"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.539626 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-qrg9s"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.540957 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovs-vswitchd" containerID="cri-o://f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.544623 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8b1461d1-f963-40b0-8cad-a5b2735eedcc/ovsdbserver-nb/0.log" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.544706 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.609236 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.609956 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="76d08c01-d488-4f36-9998-7f074633c7c5" containerName="cinder-api-log" containerID="cri-o://c15b277e6d0d090e0e5755609decc556ab1c1f03a878f14749a60fdfeeec941e" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.610542 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="76d08c01-d488-4f36-9998-7f074633c7c5" containerName="cinder-api" containerID="cri-o://f3795d92724d61612c4e998a075d7ebdd89fee122f5c02cbcebdad3f46cd4b7c" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.652072 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.652550 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fe20adf9-d6e2-4487-a176-32ddd55eb051" containerName="glance-log" containerID="cri-o://ddf75c942dcbf834dfd88d5bc8a1e8a0fa00deb6223f84700bb4c75bb0cce612" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.652938 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fe20adf9-d6e2-4487-a176-32ddd55eb051" containerName="glance-httpd" containerID="cri-o://08811e57d5ae08f29bf6ac8aa7f95e929dc4a7c310d13f900fb2c645979418d6" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.684686 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-dc8db4fdb-hpjdg"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 
07:15:48.684945 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-dc8db4fdb-hpjdg" podUID="bd71646c-cb64-4a01-8076-449c812955d5" containerName="placement-log" containerID="cri-o://14f94b6d1dd07b874e83aed25b1716c42ede7203afe8fc38064921b976f5c65d" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.687973 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-dc8db4fdb-hpjdg" podUID="bd71646c-cb64-4a01-8076-449c812955d5" containerName="placement-api" containerID="cri-o://605e2f1b6fdab04852864ae8ba9a1933cc6fbe478b172080fd10f5d23b52f0fe" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.699423 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.699483 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjbm7\" (UniqueName: \"kubernetes.io/projected/8b1461d1-f963-40b0-8cad-a5b2735eedcc-kube-api-access-pjbm7\") pod \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.699524 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1461d1-f963-40b0-8cad-a5b2735eedcc-config\") pod \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.699586 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-combined-ca-bundle\") pod 
\"8b1461d1-f963-40b0-8cad-a5b2735eedcc\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.699689 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b1461d1-f963-40b0-8cad-a5b2735eedcc-scripts\") pod \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.699705 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-ovsdbserver-nb-tls-certs\") pod \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.699761 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8b1461d1-f963-40b0-8cad-a5b2735eedcc-ovsdb-rundir\") pod \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.699798 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-metrics-certs-tls-certs\") pod \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.707377 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b1461d1-f963-40b0-8cad-a5b2735eedcc-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "8b1461d1-f963-40b0-8cad-a5b2735eedcc" (UID: "8b1461d1-f963-40b0-8cad-a5b2735eedcc"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.709394 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b1461d1-f963-40b0-8cad-a5b2735eedcc-scripts" (OuterVolumeSpecName: "scripts") pod "8b1461d1-f963-40b0-8cad-a5b2735eedcc" (UID: "8b1461d1-f963-40b0-8cad-a5b2735eedcc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.709753 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b1461d1-f963-40b0-8cad-a5b2735eedcc-config" (OuterVolumeSpecName: "config") pod "8b1461d1-f963-40b0-8cad-a5b2735eedcc" (UID: "8b1461d1-f963-40b0-8cad-a5b2735eedcc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.718179 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "8b1461d1-f963-40b0-8cad-a5b2735eedcc" (UID: "8b1461d1-f963-40b0-8cad-a5b2735eedcc"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.762096 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1461d1-f963-40b0-8cad-a5b2735eedcc-kube-api-access-pjbm7" (OuterVolumeSpecName: "kube-api-access-pjbm7") pod "8b1461d1-f963-40b0-8cad-a5b2735eedcc" (UID: "8b1461d1-f963-40b0-8cad-a5b2735eedcc"). InnerVolumeSpecName "kube-api-access-pjbm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.792051 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-c6tbf"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.810654 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b1461d1-f963-40b0-8cad-a5b2735eedcc-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.810789 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8b1461d1-f963-40b0-8cad-a5b2735eedcc-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.810895 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.811247 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjbm7\" (UniqueName: \"kubernetes.io/projected/8b1461d1-f963-40b0-8cad-a5b2735eedcc-kube-api-access-pjbm7\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.811326 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1461d1-f963-40b0-8cad-a5b2735eedcc-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.846867 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-c6tbf"] Mar 20 07:15:48 crc kubenswrapper[5136]: E0320 07:15:48.851290 5136 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Mar 20 07:15:48 crc kubenswrapper[5136]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname 
/usr/local/bin/container-scripts/stop-ovsdb-server.sh Mar 20 07:15:48 crc kubenswrapper[5136]: + source /usr/local/bin/container-scripts/functions Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNBridge=br-int Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNRemote=tcp:localhost:6642 Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNEncapType=geneve Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNAvailabilityZones= Mar 20 07:15:48 crc kubenswrapper[5136]: ++ EnableChassisAsGateway=true Mar 20 07:15:48 crc kubenswrapper[5136]: ++ PhysicalNetworks= Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNHostName= Mar 20 07:15:48 crc kubenswrapper[5136]: ++ DB_FILE=/etc/openvswitch/conf.db Mar 20 07:15:48 crc kubenswrapper[5136]: ++ ovs_dir=/var/lib/openvswitch Mar 20 07:15:48 crc kubenswrapper[5136]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Mar 20 07:15:48 crc kubenswrapper[5136]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Mar 20 07:15:48 crc kubenswrapper[5136]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Mar 20 07:15:48 crc kubenswrapper[5136]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 20 07:15:48 crc kubenswrapper[5136]: + sleep 0.5 Mar 20 07:15:48 crc kubenswrapper[5136]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 20 07:15:48 crc kubenswrapper[5136]: + cleanup_ovsdb_server_semaphore Mar 20 07:15:48 crc kubenswrapper[5136]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Mar 20 07:15:48 crc kubenswrapper[5136]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Mar 20 07:15:48 crc kubenswrapper[5136]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-ldp4w" message=< Mar 20 07:15:48 crc kubenswrapper[5136]: Exiting ovsdb-server (5) [ OK ] Mar 20 07:15:48 crc kubenswrapper[5136]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Mar 20 07:15:48 crc kubenswrapper[5136]: + source /usr/local/bin/container-scripts/functions Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNBridge=br-int Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNRemote=tcp:localhost:6642 Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNEncapType=geneve Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNAvailabilityZones= Mar 20 07:15:48 crc kubenswrapper[5136]: ++ EnableChassisAsGateway=true Mar 20 07:15:48 crc kubenswrapper[5136]: ++ PhysicalNetworks= Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNHostName= Mar 20 07:15:48 crc kubenswrapper[5136]: ++ DB_FILE=/etc/openvswitch/conf.db Mar 20 07:15:48 crc kubenswrapper[5136]: ++ ovs_dir=/var/lib/openvswitch Mar 20 07:15:48 crc kubenswrapper[5136]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Mar 20 07:15:48 crc kubenswrapper[5136]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Mar 20 07:15:48 crc kubenswrapper[5136]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Mar 20 07:15:48 crc kubenswrapper[5136]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 20 07:15:48 crc kubenswrapper[5136]: + sleep 0.5 Mar 20 07:15:48 crc kubenswrapper[5136]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 20 07:15:48 crc kubenswrapper[5136]: + cleanup_ovsdb_server_semaphore Mar 20 07:15:48 crc kubenswrapper[5136]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Mar 20 07:15:48 crc kubenswrapper[5136]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Mar 20 07:15:48 crc kubenswrapper[5136]: > Mar 20 07:15:48 crc kubenswrapper[5136]: E0320 07:15:48.851371 5136 kuberuntime_container.go:691] "PreStop hook failed" err=< Mar 20 07:15:48 crc kubenswrapper[5136]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Mar 20 07:15:48 crc kubenswrapper[5136]: + source /usr/local/bin/container-scripts/functions Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNBridge=br-int Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNRemote=tcp:localhost:6642 Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNEncapType=geneve Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNAvailabilityZones= Mar 20 07:15:48 crc kubenswrapper[5136]: ++ EnableChassisAsGateway=true Mar 20 07:15:48 crc kubenswrapper[5136]: ++ PhysicalNetworks= Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNHostName= Mar 20 07:15:48 crc kubenswrapper[5136]: ++ DB_FILE=/etc/openvswitch/conf.db Mar 20 07:15:48 crc kubenswrapper[5136]: ++ ovs_dir=/var/lib/openvswitch Mar 20 07:15:48 crc kubenswrapper[5136]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Mar 20 07:15:48 crc kubenswrapper[5136]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Mar 20 07:15:48 crc kubenswrapper[5136]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Mar 20 07:15:48 crc kubenswrapper[5136]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 20 07:15:48 crc kubenswrapper[5136]: + sleep 0.5 Mar 20 07:15:48 crc kubenswrapper[5136]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 20 07:15:48 crc kubenswrapper[5136]: + cleanup_ovsdb_server_semaphore Mar 20 07:15:48 crc kubenswrapper[5136]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Mar 20 07:15:48 crc kubenswrapper[5136]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Mar 20 07:15:48 crc kubenswrapper[5136]: > pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovsdb-server" containerID="cri-o://5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.851432 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovsdb-server" containerID="cri-o://5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.867107 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.867403 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="141e5942-2bf9-424c-a6a7-7c93afdad7dc" containerName="glance-log" containerID="cri-o://662932f3f7c10a1e5293dce36be1a7b6f6fe4dc40e2e2d324b7c68188c034162" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.867953 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="141e5942-2bf9-424c-a6a7-7c93afdad7dc" containerName="glance-httpd" containerID="cri-o://3cd1e6e7f78367e01aa376387fc42404408757a25929ca8c639774857d99ccfd" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.875186 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6ff4f58fb9-7gtff"] Mar 20 07:15:48 
crc kubenswrapper[5136]: I0320 07:15:48.875866 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6ff4f58fb9-7gtff" podUID="5c52887a-70a8-4d00-a1f9-a5677fa48d1f" containerName="neutron-api" containerID="cri-o://8fe888c22382b419f6b05cbe043d6eb25912198e1d48b45463c92862edd75153" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.876252 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6ff4f58fb9-7gtff" podUID="5c52887a-70a8-4d00-a1f9-a5677fa48d1f" containerName="neutron-httpd" containerID="cri-o://f9410d85b9f6582bf241e2b71e3f09bc7bd5e4aee399a12327173bcac6224685" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.887884 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b1461d1-f963-40b0-8cad-a5b2735eedcc" (UID: "8b1461d1-f963-40b0-8cad-a5b2735eedcc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.888492 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-7vvbn"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.898978 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-7vvbn"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.908640 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.908903 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="af66742a-1452-436f-a22e-7dc277cf690a" containerName="nova-metadata-log" containerID="cri-o://deb975c81a5be70590a5a6b6abaf2e6a45b6848b791c313d96f348b2eb335fa8" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.909677 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="af66742a-1452-436f-a22e-7dc277cf690a" containerName="nova-metadata-metadata" containerID="cri-o://f58e5688042713c0b783865903009ad2d11f4198be5353e16c7c078fdfea1674" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.913368 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.918578 5136 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.923292 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.941037 5136 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/root-account-create-update-mzns4"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.962887 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5429-account-create-update-54j5b"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.971199 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.971590 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9dc2d320-2468-4a45-ba6b-69ea478b5e8c" containerName="nova-api-log" containerID="cri-o://a74fc94f08f1ff50393fca876adcf8dbd23397ae767f8be9bb23ab400c14c48a" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.972007 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9dc2d320-2468-4a45-ba6b-69ea478b5e8c" containerName="nova-api-api" containerID="cri-o://d7a0a1abaf7649e23b98061506f3cd0ac2d6d6bb3c69694e84fdfcd0ac7ca124" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.986374 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-xpg98"] Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.018309 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-xpg98"] Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.058345 5136 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.066962 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-744d6f84fc-bqcsc"] Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.067243 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-744d6f84fc-bqcsc" 
podUID="4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" containerName="proxy-httpd" containerID="cri-o://0693cd2e5dff17721d25b39746a75f4a67d98d5e960d0f7f816b7b0f7c0a7fac" gracePeriod=30 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.067699 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-744d6f84fc-bqcsc" podUID="4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" containerName="proxy-server" containerID="cri-o://793f74839b7ab06cf4c132cbd574d3bb47712ec9caa473ebbd084643fcce6f31" gracePeriod=30 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.099419 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-bk75j"] Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.113012 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-bk75j"] Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.119390 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "8b1461d1-f963-40b0-8cad-a5b2735eedcc" (UID: "8b1461d1-f963-40b0-8cad-a5b2735eedcc"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.125449 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "8b1461d1-f963-40b0-8cad-a5b2735eedcc" (UID: "8b1461d1-f963-40b0-8cad-a5b2735eedcc"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.134654 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-0423-account-create-update-9sp6w"] Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.139311 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-4vtvh"] Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.150456 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a0f6-account-create-update-d5xps"] Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.159786 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.159831 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.159854 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-4jdnj"] Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.170247 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-4vtvh"] Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.262486 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-65ccfb89b4-s479g"] Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.262732 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" podUID="2a59ab3d-3094-4e10-bbde-44479696f752" containerName="barbican-keystone-listener-log" 
containerID="cri-o://afa35db5921ff57fdde3528ca1cd9c650dbf2f2ac6c46cf9723cca19a0edb997" gracePeriod=30 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.263141 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" podUID="2a59ab3d-3094-4e10-bbde-44479696f752" containerName="barbican-keystone-listener" containerID="cri-o://cd65718bfac09f4d934fe1bf3f629f5d852e12343f9a1b480d6984e7497c79aa" gracePeriod=30 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.274208 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-4jdnj"] Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.281917 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-78df67c79-bqz8t"] Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.282104 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-78df67c79-bqz8t" podUID="6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" containerName="barbican-worker-log" containerID="cri-o://afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec" gracePeriod=30 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.282447 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-78df67c79-bqz8t" podUID="6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" containerName="barbican-worker" containerID="cri-o://dd15344d154a5f0b2c84bd203f89400d82384bc6a127a0645edaeb2339b3bbd0" gracePeriod=30 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.293491 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0f90-account-create-update-k7zvd"] Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294431 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="32404a19502351fbc3ac4a39615a6d3615a32ac961dda04a10694ca4139bf949" exitCode=0 Mar 20 
07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294454 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="34db5f0f26eb4ddebb1606a5b4d86660e0c87c646601cd77837816ebb46b09a0" exitCode=0 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294461 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="81b02283172458a2bdc8e165bbbbeb7ea96a03cbbe39b13a588b28c223890c2c" exitCode=0 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294469 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="1b16b20dbd8d4539f67032ae3d013ca240799b991d0700b6f5acd362ea9ea532" exitCode=0 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294476 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="9683c79e385aab29220bb43ba7f0e910b9a5950fe8b31997f17e20626e70a786" exitCode=0 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294482 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="09326f1cbbdb488bd8aa1fff37c4c2948117647badc40e244ff6f4905e759d43" exitCode=0 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294488 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="e8ad73e1a0dc1afcb0ef9f187b10091e6f56e3fa6f0fda3383c1a9da5f7aff67" exitCode=0 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294494 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="cb5954574bc442ce46a1c3ab46d37e5df7b891264fc62e27a9808855f5912da6" exitCode=0 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294500 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" 
containerID="8dbe0818cd9fc5f247918e143232a8d1b8d59f91b26bb44407c84cb8783ff2a3" exitCode=0 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294506 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="2781e040da54a91c2e8f03b5afbcb3150c3df4ebd12d977152902fa44468e5f3" exitCode=0 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294513 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="c9f8c6f25f2067d893987b366765d078076a7f16dd40a56558cdc7ca6436f346" exitCode=0 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294518 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="ece61fe1418a6ebf3f8719105d88f8a8d234c50f91addf3cf7652758e2238539" exitCode=0 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294524 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="2a0075a45d95589caf98b737d491a438c7cb43e5b9c36c7d07ea1aebe37bd4aa" exitCode=0 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294530 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="83949dae8602e569fb150aeab8e9d2eb9613a5621a13c04058efa4f9dd9f300d" exitCode=0 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294557 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"32404a19502351fbc3ac4a39615a6d3615a32ac961dda04a10694ca4139bf949"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294575 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"34db5f0f26eb4ddebb1606a5b4d86660e0c87c646601cd77837816ebb46b09a0"} Mar 20 07:15:49 crc 
kubenswrapper[5136]: I0320 07:15:49.294584 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"81b02283172458a2bdc8e165bbbbeb7ea96a03cbbe39b13a588b28c223890c2c"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294593 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"1b16b20dbd8d4539f67032ae3d013ca240799b991d0700b6f5acd362ea9ea532"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294602 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"9683c79e385aab29220bb43ba7f0e910b9a5950fe8b31997f17e20626e70a786"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294611 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"09326f1cbbdb488bd8aa1fff37c4c2948117647badc40e244ff6f4905e759d43"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294620 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"e8ad73e1a0dc1afcb0ef9f187b10091e6f56e3fa6f0fda3383c1a9da5f7aff67"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294629 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"cb5954574bc442ce46a1c3ab46d37e5df7b891264fc62e27a9808855f5912da6"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294637 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"8dbe0818cd9fc5f247918e143232a8d1b8d59f91b26bb44407c84cb8783ff2a3"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294645 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"2781e040da54a91c2e8f03b5afbcb3150c3df4ebd12d977152902fa44468e5f3"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294654 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"c9f8c6f25f2067d893987b366765d078076a7f16dd40a56558cdc7ca6436f346"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294663 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"ece61fe1418a6ebf3f8719105d88f8a8d234c50f91addf3cf7652758e2238539"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294671 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"2a0075a45d95589caf98b737d491a438c7cb43e5b9c36c7d07ea1aebe37bd4aa"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294679 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"83949dae8602e569fb150aeab8e9d2eb9613a5621a13c04058efa4f9dd9f300d"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.296476 5136 generic.go:334] "Generic (PLEG): container finished" podID="141e5942-2bf9-424c-a6a7-7c93afdad7dc" containerID="662932f3f7c10a1e5293dce36be1a7b6f6fe4dc40e2e2d324b7c68188c034162" exitCode=143 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.296513 5136 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"141e5942-2bf9-424c-a6a7-7c93afdad7dc","Type":"ContainerDied","Data":"662932f3f7c10a1e5293dce36be1a7b6f6fe4dc40e2e2d324b7c68188c034162"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.298326 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8b1461d1-f963-40b0-8cad-a5b2735eedcc/ovsdbserver-nb/0.log" Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.298376 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8b1461d1-f963-40b0-8cad-a5b2735eedcc","Type":"ContainerDied","Data":"11e0a5791b54dfc64b5c868dfb4c7110fa55e59d3ea215d5dd89246b1feeb323"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.298397 5136 scope.go:117] "RemoveContainer" containerID="ad85344499cd2c34ea152d61f6efd8d5a2edf8814c85572a74d76108d11d3655" Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.298500 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.307940 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.307967 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-2sj8m"] Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.317531 5136 generic.go:334] "Generic (PLEG): container finished" podID="bd71646c-cb64-4a01-8076-449c812955d5" containerID="14f94b6d1dd07b874e83aed25b1716c42ede7203afe8fc38064921b976f5c65d" exitCode=143 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.317650 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc8db4fdb-hpjdg" event={"ID":"bd71646c-cb64-4a01-8076-449c812955d5","Type":"ContainerDied","Data":"14f94b6d1dd07b874e83aed25b1716c42ede7203afe8fc38064921b976f5c65d"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.318757 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-2sj8m"] Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.318950 5136 generic.go:334] "Generic (PLEG): container finished" podID="17ad787b-18bc-4afd-840b-2458b494094a" containerID="ef5361e0b73e9c41cc23b5ebe9348fce6a363e59e0bc84a305ad44756dd780af" exitCode=137 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.322657 5136 generic.go:334] "Generic (PLEG): container finished" podID="fe20adf9-d6e2-4487-a176-32ddd55eb051" containerID="ddf75c942dcbf834dfd88d5bc8a1e8a0fa00deb6223f84700bb4c75bb0cce612" exitCode=143 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.322711 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe20adf9-d6e2-4487-a176-32ddd55eb051","Type":"ContainerDied","Data":"ddf75c942dcbf834dfd88d5bc8a1e8a0fa00deb6223f84700bb4c75bb0cce612"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.327454 5136 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f872c575-a357-4b29-b5e8-cf5dbe6f3d7a/ovsdbserver-sb/0.log" Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.327508 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a","Type":"ContainerDied","Data":"58fe8e25256a499fb2de621906997ec654e0364d2ff5f6192f81208051ec80d6"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.327529 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58fe8e25256a499fb2de621906997ec654e0364d2ff5f6192f81208051ec80d6" Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.329438 5136 generic.go:334] "Generic (PLEG): container finished" podID="76d08c01-d488-4f36-9998-7f074633c7c5" containerID="c15b277e6d0d090e0e5755609decc556ab1c1f03a878f14749a60fdfeeec941e" exitCode=143 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.329477 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"76d08c01-d488-4f36-9998-7f074633c7c5","Type":"ContainerDied","Data":"c15b277e6d0d090e0e5755609decc556ab1c1f03a878f14749a60fdfeeec941e"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.330412 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mzns4" event={"ID":"5d2085e7-db7e-4655-965c-027d03e474e0","Type":"ContainerStarted","Data":"2b8d445e4425096daf41465721adf2ee58e490471ea6782e4e955f4d28582fd2"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.331298 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-64845646dd-wf28v"] Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.331480 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-64845646dd-wf28v" podUID="f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" containerName="barbican-api-log" 
containerID="cri-o://4829942c2c5a456ff9cc10622e102ac04c1aa13d5b8679ee118976d4497d7536" gracePeriod=30 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.331833 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-64845646dd-wf28v" podUID="f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" containerName="barbican-api" containerID="cri-o://b31f76eec4146fd8f445255e6551fe34e753f46896981935d5fb62c7bb4ad9cd" gracePeriod=30 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.333663 5136 generic.go:334] "Generic (PLEG): container finished" podID="9dc2d320-2468-4a45-ba6b-69ea478b5e8c" containerID="a74fc94f08f1ff50393fca876adcf8dbd23397ae767f8be9bb23ab400c14c48a" exitCode=143 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.333739 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9dc2d320-2468-4a45-ba6b-69ea478b5e8c","Type":"ContainerDied","Data":"a74fc94f08f1ff50393fca876adcf8dbd23397ae767f8be9bb23ab400c14c48a"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.339592 5136 generic.go:334] "Generic (PLEG): container finished" podID="ccc70cce-242d-4c99-8d3f-ddb541904e29" containerID="ccb4f9c0c6dc989c486c61d0a17af6a9e3438c25ae843380545c453141823051" exitCode=0 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.339652 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" event={"ID":"ccc70cce-242d-4c99-8d3f-ddb541904e29","Type":"ContainerDied","Data":"ccb4f9c0c6dc989c486c61d0a17af6a9e3438c25ae843380545c453141823051"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.339677 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" event={"ID":"ccc70cce-242d-4c99-8d3f-ddb541904e29","Type":"ContainerDied","Data":"b418e83480ddaf25a5b00d4752775ac00973d088cabf1d27a9b6eceb6bb0b062"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.339688 5136 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="b418e83480ddaf25a5b00d4752775ac00973d088cabf1d27a9b6eceb6bb0b062" Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.341404 5136 generic.go:334] "Generic (PLEG): container finished" podID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" exitCode=0 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.341437 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ldp4w" event={"ID":"5f48f721-42c9-4f2b-a461-2ad47a1dea3d","Type":"ContainerDied","Data":"5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.342282 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-e3bd-account-create-update-gzjnr"] Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.342679 5136 generic.go:334] "Generic (PLEG): container finished" podID="5c52887a-70a8-4d00-a1f9-a5677fa48d1f" containerID="f9410d85b9f6582bf241e2b71e3f09bc7bd5e4aee399a12327173bcac6224685" exitCode=0 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.342716 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6ff4f58fb9-7gtff" event={"ID":"5c52887a-70a8-4d00-a1f9-a5677fa48d1f","Type":"ContainerDied","Data":"f9410d85b9f6582bf241e2b71e3f09bc7bd5e4aee399a12327173bcac6224685"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.343761 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vr74x_0ede60bf-5bc5-4267-9849-9389df070048/openstack-network-exporter/0.log" Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.343787 5136 generic.go:334] "Generic (PLEG): container finished" podID="0ede60bf-5bc5-4267-9849-9389df070048" containerID="c83952221ac9ae15d237b01aa417d2a8651bd6786c0034250cebe0e17be31690" exitCode=2 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.343875 5136 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vr74x" event={"ID":"0ede60bf-5bc5-4267-9849-9389df070048","Type":"ContainerDied","Data":"c83952221ac9ae15d237b01aa417d2a8651bd6786c0034250cebe0e17be31690"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.343890 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vr74x" event={"ID":"0ede60bf-5bc5-4267-9849-9389df070048","Type":"ContainerDied","Data":"96f4d9700a6f20f9648ff8d4f3bad201abaff41477fe15fa2f506bfcba3bded2"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.343899 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96f4d9700a6f20f9648ff8d4f3bad201abaff41477fe15fa2f506bfcba3bded2" Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.345063 5136 generic.go:334] "Generic (PLEG): container finished" podID="af66742a-1452-436f-a22e-7dc277cf690a" containerID="deb975c81a5be70590a5a6b6abaf2e6a45b6848b791c313d96f348b2eb335fa8" exitCode=143 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.345883 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af66742a-1452-436f-a22e-7dc277cf690a","Type":"ContainerDied","Data":"deb975c81a5be70590a5a6b6abaf2e6a45b6848b791c313d96f348b2eb335fa8"} Mar 20 07:15:49 crc kubenswrapper[5136]: E0320 07:15:49.346694 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 07:15:49 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95,Command:[/bin/sh -c #!/bin/bash Mar 20 07:15:49 crc kubenswrapper[5136]: Mar 20 07:15:49 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 07:15:49 crc kubenswrapper[5136]: Mar 20 07:15:49 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please 
specify a DatabasePassword variable."} Mar 20 07:15:49 crc kubenswrapper[5136]: Mar 20 07:15:49 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 07:15:49 crc kubenswrapper[5136]: Mar 20 07:15:49 crc kubenswrapper[5136]: if [ -n "barbican" ]; then Mar 20 07:15:49 crc kubenswrapper[5136]: GRANT_DATABASE="barbican" Mar 20 07:15:49 crc kubenswrapper[5136]: else Mar 20 07:15:49 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 07:15:49 crc kubenswrapper[5136]: fi Mar 20 07:15:49 crc kubenswrapper[5136]: Mar 20 07:15:49 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 07:15:49 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 07:15:49 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 07:15:49 crc kubenswrapper[5136]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 20 07:15:49 crc kubenswrapper[5136]: # support updates Mar 20 07:15:49 crc kubenswrapper[5136]: Mar 20 07:15:49 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 07:15:49 crc kubenswrapper[5136]: E0320 07:15:49.348375 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-5429-account-create-update-54j5b" podUID="79272887-6a7f-4336-858a-6844ed6e8a37" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:49.357232 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rzgpn"] Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:49.372529 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rzgpn"] Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:49.384666 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 20 
07:15:50 crc kubenswrapper[5136]: I0320 07:15:49.384721 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:49.384932 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="52463352-7504-47a4-92e5-d672bab85574" containerName="nova-cell1-conductor-conductor" containerID="cri-o://f56d00c5a5534f7bc12a714ef821f350a8f4afb4f1d3b31f9016e24b955644f9" gracePeriod=30 Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:49.385138 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="38885968-65f8-45e9-8e72-7464d5e78b85" containerName="nova-cell0-conductor-conductor" containerID="cri-o://9499f021f44b9cfb2d1d15e04c8ee300bce97edc1f58a0cfe173305e3c3b97ff" gracePeriod=30 Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:49.392648 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lhwjx"] Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:49.400120 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lhwjx"] Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:49.412640 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:49.412868 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="63ab8493-eb78-41d9-b368-bba74dc78166" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://a4685f240b41a0ca80c93510a938eb41d236fed91b12082fd48b7f0a68f41629" gracePeriod=30 Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:49.415977 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:49.419893 5136 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:49.420376 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f2e8f54f-5434-4cf0-94b9-38648bf7ba77" containerName="nova-scheduler-scheduler" containerID="cri-o://103dd4a533822b5106c261bfa1f3d9201de7d9327b3b2827802ee9cd5b825fc5" gracePeriod=30 Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.199626 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.200030 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data podName:c355061d-c5fd-4655-aa7e-37b5a40a0400 nodeName:}" failed. No retries permitted until 2026-03-20 07:15:52.200009531 +0000 UTC m=+1584.459320682 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data") pod "rabbitmq-cell1-server-0" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400") : configmap "rabbitmq-cell1-config-data" not found Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.218167 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.218903 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.224757 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="63ab8493-eb78-41d9-b368-bba74dc78166" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.0.202:6080/vnc_lite.html\": dial tcp 10.217.0.202:6080: connect: connection refused" Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.226910 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" 
containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.226950 5136 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovsdb-server" Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.232049 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.252183 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="210df7e5-1603-40ec-bfa4-7b85525823b3" containerName="galera" containerID="cri-o://2f108d33fa61f242b6db0d1497e869fa649b09a841ba1ec8a5200036f1da6f44" gracePeriod=29 Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.256967 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.257175 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="c355061d-c5fd-4655-aa7e-37b5a40a0400" containerName="rabbitmq" 
containerID="cri-o://ff4b7aff265183005eb642c68e8667907af1fc5245ae33df31209cf856166357" gracePeriod=604800 Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.269931 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="261514f8-7734-423d-b15a-e83fdc2a85fd" containerName="rabbitmq" containerID="cri-o://b5366b3f1cab37ea1c4eff85c13a2c8a37fad4de97077b3f34ec7d7a1d871155" gracePeriod=604800 Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.291279 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.291340 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovs-vswitchd" Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.292549 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f56d00c5a5534f7bc12a714ef821f350a8f4afb4f1d3b31f9016e24b955644f9" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.293890 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f56d00c5a5534f7bc12a714ef821f350a8f4afb4f1d3b31f9016e24b955644f9" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.295047 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f56d00c5a5534f7bc12a714ef821f350a8f4afb4f1d3b31f9016e24b955644f9" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.295079 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="52463352-7504-47a4-92e5-d672bab85574" containerName="nova-cell1-conductor-conductor" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.321129 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="c355061d-c5fd-4655-aa7e-37b5a40a0400" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.362969 5136 generic.go:334] "Generic (PLEG): container finished" podID="31adef78-59fe-4327-9586-0c12177c7bb7" containerID="46238ed837d0eef475008679380016865ea47dbe19adbb4fa48f11493fc145bb" exitCode=0 Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.363045 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"31adef78-59fe-4327-9586-0c12177c7bb7","Type":"ContainerDied","Data":"46238ed837d0eef475008679380016865ea47dbe19adbb4fa48f11493fc145bb"} Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.367346 5136 generic.go:334] "Generic (PLEG): container finished" podID="63ab8493-eb78-41d9-b368-bba74dc78166" containerID="a4685f240b41a0ca80c93510a938eb41d236fed91b12082fd48b7f0a68f41629" exitCode=0 Mar 20 07:15:50 crc 
kubenswrapper[5136]: I0320 07:15:50.367417 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"63ab8493-eb78-41d9-b368-bba74dc78166","Type":"ContainerDied","Data":"a4685f240b41a0ca80c93510a938eb41d236fed91b12082fd48b7f0a68f41629"} Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.374801 5136 generic.go:334] "Generic (PLEG): container finished" podID="f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" containerID="4829942c2c5a456ff9cc10622e102ac04c1aa13d5b8679ee118976d4497d7536" exitCode=143 Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.374873 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64845646dd-wf28v" event={"ID":"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b","Type":"ContainerDied","Data":"4829942c2c5a456ff9cc10622e102ac04c1aa13d5b8679ee118976d4497d7536"} Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.379750 5136 generic.go:334] "Generic (PLEG): container finished" podID="6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" containerID="afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec" exitCode=143 Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.379840 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78df67c79-bqz8t" event={"ID":"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0","Type":"ContainerDied","Data":"afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec"} Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.382546 5136 generic.go:334] "Generic (PLEG): container finished" podID="4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" containerID="793f74839b7ab06cf4c132cbd574d3bb47712ec9caa473ebbd084643fcce6f31" exitCode=0 Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.382570 5136 generic.go:334] "Generic (PLEG): container finished" podID="4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" containerID="0693cd2e5dff17721d25b39746a75f4a67d98d5e960d0f7f816b7b0f7c0a7fac" exitCode=0 Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 
07:15:50.382608 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-744d6f84fc-bqcsc" event={"ID":"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b","Type":"ContainerDied","Data":"793f74839b7ab06cf4c132cbd574d3bb47712ec9caa473ebbd084643fcce6f31"} Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.382630 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-744d6f84fc-bqcsc" event={"ID":"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b","Type":"ContainerDied","Data":"0693cd2e5dff17721d25b39746a75f4a67d98d5e960d0f7f816b7b0f7c0a7fac"} Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.390150 5136 generic.go:334] "Generic (PLEG): container finished" podID="2a59ab3d-3094-4e10-bbde-44479696f752" containerID="afa35db5921ff57fdde3528ca1cd9c650dbf2f2ac6c46cf9723cca19a0edb997" exitCode=143 Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.390202 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" event={"ID":"2a59ab3d-3094-4e10-bbde-44479696f752","Type":"ContainerDied","Data":"afa35db5921ff57fdde3528ca1cd9c650dbf2f2ac6c46cf9723cca19a0edb997"} Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.427340 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e901a54-c442-45fd-a0d8-1568f850efb4" path="/var/lib/kubelet/pods/2e901a54-c442-45fd-a0d8-1568f850efb4/volumes" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.427864 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e4cd633-e391-4daa-8d31-f9e05afb5fe9" path="/var/lib/kubelet/pods/3e4cd633-e391-4daa-8d31-f9e05afb5fe9/volumes" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.428353 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a15871b-0fd2-4db9-a42a-8e822efa35fb" path="/var/lib/kubelet/pods/4a15871b-0fd2-4db9-a42a-8e822efa35fb/volumes" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.428869 5136 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f4b546d-a206-4e15-b21b-850ef44aac79" path="/var/lib/kubelet/pods/4f4b546d-a206-4e15-b21b-850ef44aac79/volumes" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.429878 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52bcca3a-bd10-425e-bc7f-f78c8c4a0271" path="/var/lib/kubelet/pods/52bcca3a-bd10-425e-bc7f-f78c8c4a0271/volumes" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.430408 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="744eb619-4231-474c-a8b2-a37ed7432086" path="/var/lib/kubelet/pods/744eb619-4231-474c-a8b2-a37ed7432086/volumes" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.430914 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81ba128a-ff3d-42a9-aa76-04e60b3a2cb5" path="/var/lib/kubelet/pods/81ba128a-ff3d-42a9-aa76-04e60b3a2cb5/volumes" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.434157 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfbcdb71-4e43-4243-a408-08d69b6d7328" path="/var/lib/kubelet/pods/bfbcdb71-4e43-4243-a408-08d69b6d7328/volumes" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.435283 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edb3559d-359a-4add-8216-afb68a19e111" path="/var/lib/kubelet/pods/edb3559d-359a-4add-8216-afb68a19e111/volumes" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.438141 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdfd9851-96cd-483e-9e66-b1cc255cb3e2" path="/var/lib/kubelet/pods/fdfd9851-96cd-483e-9e66-b1cc255cb3e2/volumes" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.541872 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f872c575-a357-4b29-b5e8-cf5dbe6f3d7a/ovsdbserver-sb/0.log" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.541985 5136 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.573176 5136 scope.go:117] "RemoveContainer" containerID="c02c0e7e0e6b0a33a002d424a3ac60fcdca9be308ea7764e0da41ae85bb3639a" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.578862 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.585054 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vr74x_0ede60bf-5bc5-4267-9849-9389df070048/openstack-network-exporter/0.log" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.585096 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vr74x" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.591731 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-e3bd-account-create-update-gzjnr"] Mar 20 07:15:50 crc kubenswrapper[5136]: W0320 07:15:50.611622 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12a2cdc2_1b05_4bfd_99e9_ce92d81d3af3.slice/crio-c7edf0f3f6556e6164f7e7ded6cdbe431275f40b863ad60fee11457248b95cfe WatchSource:0}: Error finding container c7edf0f3f6556e6164f7e7ded6cdbe431275f40b863ad60fee11457248b95cfe: Status 404 returned error can't find the container with id c7edf0f3f6556e6164f7e7ded6cdbe431275f40b863ad60fee11457248b95cfe Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.622765 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 07:15:50 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95,Command:[/bin/sh -c #!/bin/bash Mar 20 07:15:50 crc 
kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: if [ -n "nova_api" ]; then Mar 20 07:15:50 crc kubenswrapper[5136]: GRANT_DATABASE="nova_api" Mar 20 07:15:50 crc kubenswrapper[5136]: else Mar 20 07:15:50 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 07:15:50 crc kubenswrapper[5136]: fi Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 07:15:50 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 07:15:50 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 07:15:50 crc kubenswrapper[5136]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 20 07:15:50 crc kubenswrapper[5136]: # support updates Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.624348 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-e3bd-account-create-update-gzjnr" podUID="12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.698628 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707114 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnhlv\" (UniqueName: \"kubernetes.io/projected/0ede60bf-5bc5-4267-9849-9389df070048-kube-api-access-gnhlv\") pod \"0ede60bf-5bc5-4267-9849-9389df070048\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707158 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-config\") pod \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707186 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-ovsdbserver-nb\") pod \"ccc70cce-242d-4c99-8d3f-ddb541904e29\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707225 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-ovsdb-rundir\") pod \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707252 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ede60bf-5bc5-4267-9849-9389df070048-metrics-certs-tls-certs\") pod \"0ede60bf-5bc5-4267-9849-9389df070048\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707275 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-5bfpk\" (UniqueName: \"kubernetes.io/projected/ccc70cce-242d-4c99-8d3f-ddb541904e29-kube-api-access-5bfpk\") pod \"ccc70cce-242d-4c99-8d3f-ddb541904e29\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707310 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-dns-swift-storage-0\") pod \"ccc70cce-242d-4c99-8d3f-ddb541904e29\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707328 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-combined-ca-bundle\") pod \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707359 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-metrics-certs-tls-certs\") pod \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707376 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-config\") pod \"ccc70cce-242d-4c99-8d3f-ddb541904e29\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707404 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ede60bf-5bc5-4267-9849-9389df070048-config\") pod \"0ede60bf-5bc5-4267-9849-9389df070048\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " Mar 20 
07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707434 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0ede60bf-5bc5-4267-9849-9389df070048-ovn-rundir\") pod \"0ede60bf-5bc5-4267-9849-9389df070048\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707461 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0ede60bf-5bc5-4267-9849-9389df070048-ovs-rundir\") pod \"0ede60bf-5bc5-4267-9849-9389df070048\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707490 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707514 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd6zc\" (UniqueName: \"kubernetes.io/projected/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-kube-api-access-hd6zc\") pod \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707529 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-dns-svc\") pod \"ccc70cce-242d-4c99-8d3f-ddb541904e29\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707551 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-ovsdbserver-sb\") pod 
\"ccc70cce-242d-4c99-8d3f-ddb541904e29\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707607 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ede60bf-5bc5-4267-9849-9389df070048-combined-ca-bundle\") pod \"0ede60bf-5bc5-4267-9849-9389df070048\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707624 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-scripts\") pod \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707639 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-ovsdbserver-sb-tls-certs\") pod \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.717639 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ede60bf-5bc5-4267-9849-9389df070048-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "0ede60bf-5bc5-4267-9849-9389df070048" (UID: "0ede60bf-5bc5-4267-9849-9389df070048"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.718442 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ede60bf-5bc5-4267-9849-9389df070048-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "0ede60bf-5bc5-4267-9849-9389df070048" (UID: "0ede60bf-5bc5-4267-9849-9389df070048"). InnerVolumeSpecName "ovs-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.718935 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" (UID: "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.719596 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ede60bf-5bc5-4267-9849-9389df070048-config" (OuterVolumeSpecName: "config") pod "0ede60bf-5bc5-4267-9849-9389df070048" (UID: "0ede60bf-5bc5-4267-9849-9389df070048"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.719683 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ede60bf-5bc5-4267-9849-9389df070048-kube-api-access-gnhlv" (OuterVolumeSpecName: "kube-api-access-gnhlv") pod "0ede60bf-5bc5-4267-9849-9389df070048" (UID: "0ede60bf-5bc5-4267-9849-9389df070048"). InnerVolumeSpecName "kube-api-access-gnhlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.719763 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-scripts" (OuterVolumeSpecName: "scripts") pod "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" (UID: "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.723366 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-config" (OuterVolumeSpecName: "config") pod "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" (UID: "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.747023 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-kube-api-access-hd6zc" (OuterVolumeSpecName: "kube-api-access-hd6zc") pod "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" (UID: "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a"). InnerVolumeSpecName "kube-api-access-hd6zc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.747905 5136 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fd2bfe2_2220_4617_ac9a_d02f6222cfd0.slice/crio-afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a59ab3d_3094_4e10_bbde_44479696f752.slice/crio-afa35db5921ff57fdde3528ca1cd9c650dbf2f2ac6c46cf9723cca19a0edb997.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4656b3f4_a2bd_4dd9_913c_a4c3a6d6076b.slice/crio-0693cd2e5dff17721d25b39746a75f4a67d98d5e960d0f7f816b7b0f7c0a7fac.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d2085e7_db7e_4655_965c_027d03e474e0.slice/crio-221581ba79e5516de8738bb17ad49d51aed46039fd504cffbf7aaf70d3a7d74b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b1461d1_f963_40b0_8cad_a5b2735eedcc.slice\": RecentStats: unable to find data in memory cache]" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.751129 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccc70cce-242d-4c99-8d3f-ddb541904e29-kube-api-access-5bfpk" (OuterVolumeSpecName: "kube-api-access-5bfpk") pod "ccc70cce-242d-4c99-8d3f-ddb541904e29" (UID: "ccc70cce-242d-4c99-8d3f-ddb541904e29"). InnerVolumeSpecName "kube-api-access-5bfpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.751467 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" (UID: "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.775559 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" (UID: "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.809665 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/17ad787b-18bc-4afd-840b-2458b494094a-openstack-config-secret\") pod \"17ad787b-18bc-4afd-840b-2458b494094a\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.809795 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdzch\" (UniqueName: \"kubernetes.io/projected/17ad787b-18bc-4afd-840b-2458b494094a-kube-api-access-vdzch\") pod \"17ad787b-18bc-4afd-840b-2458b494094a\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.809899 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/17ad787b-18bc-4afd-840b-2458b494094a-openstack-config\") pod \"17ad787b-18bc-4afd-840b-2458b494094a\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.810103 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17ad787b-18bc-4afd-840b-2458b494094a-combined-ca-bundle\") pod \"17ad787b-18bc-4afd-840b-2458b494094a\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.810665 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ede60bf-5bc5-4267-9849-9389df070048-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.810683 5136 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0ede60bf-5bc5-4267-9849-9389df070048-ovn-rundir\") on node 
\"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.810692 5136 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0ede60bf-5bc5-4267-9849-9389df070048-ovs-rundir\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.810729 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.810741 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd6zc\" (UniqueName: \"kubernetes.io/projected/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-kube-api-access-hd6zc\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.810751 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.810759 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnhlv\" (UniqueName: \"kubernetes.io/projected/0ede60bf-5bc5-4267-9849-9389df070048-kube-api-access-gnhlv\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.810767 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.810775 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.810800 5136 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-5bfpk\" (UniqueName: \"kubernetes.io/projected/ccc70cce-242d-4c99-8d3f-ddb541904e29-kube-api-access-5bfpk\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.810833 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.825467 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17ad787b-18bc-4afd-840b-2458b494094a-kube-api-access-vdzch" (OuterVolumeSpecName: "kube-api-access-vdzch") pod "17ad787b-18bc-4afd-840b-2458b494094a" (UID: "17ad787b-18bc-4afd-840b-2458b494094a"). InnerVolumeSpecName "kube-api-access-vdzch". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.853117 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ccc70cce-242d-4c99-8d3f-ddb541904e29" (UID: "ccc70cce-242d-4c99-8d3f-ddb541904e29"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.859216 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5429-account-create-update-54j5b" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.870309 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ccc70cce-242d-4c99-8d3f-ddb541904e29" (UID: "ccc70cce-242d-4c99-8d3f-ddb541904e29"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.882686 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-config" (OuterVolumeSpecName: "config") pod "ccc70cce-242d-4c99-8d3f-ddb541904e29" (UID: "ccc70cce-242d-4c99-8d3f-ddb541904e29"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.888122 5136 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.915356 5136 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.915399 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.915413 5136 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.915425 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.915438 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdzch\" (UniqueName: \"kubernetes.io/projected/17ad787b-18bc-4afd-840b-2458b494094a-kube-api-access-vdzch\") on node \"crc\" 
DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.919026 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ccc70cce-242d-4c99-8d3f-ddb541904e29" (UID: "ccc70cce-242d-4c99-8d3f-ddb541904e29"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.929434 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17ad787b-18bc-4afd-840b-2458b494094a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17ad787b-18bc-4afd-840b-2458b494094a" (UID: "17ad787b-18bc-4afd-840b-2458b494094a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.934009 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ede60bf-5bc5-4267-9849-9389df070048-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ede60bf-5bc5-4267-9849-9389df070048" (UID: "0ede60bf-5bc5-4267-9849-9389df070048"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.955032 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-0423-account-create-update-9sp6w"] Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.984350 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17ad787b-18bc-4afd-840b-2458b494094a-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "17ad787b-18bc-4afd-840b-2458b494094a" (UID: "17ad787b-18bc-4afd-840b-2458b494094a"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.987586 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 07:15:50 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95,Command:[/bin/sh -c #!/bin/bash Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: if [ -n "nova_cell1" ]; then Mar 20 07:15:50 crc kubenswrapper[5136]: GRANT_DATABASE="nova_cell1" Mar 20 07:15:50 crc kubenswrapper[5136]: else Mar 20 07:15:50 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 07:15:50 crc kubenswrapper[5136]: fi Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 07:15:50 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 07:15:50 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 07:15:50 crc kubenswrapper[5136]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 20 07:15:50 crc kubenswrapper[5136]: # support updates Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.988374 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 07:15:50 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95,Command:[/bin/sh -c #!/bin/bash Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: if [ -n "nova_cell0" ]; then Mar 20 07:15:50 crc kubenswrapper[5136]: GRANT_DATABASE="nova_cell0" Mar 20 07:15:50 crc kubenswrapper[5136]: else Mar 20 07:15:50 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 07:15:50 crc kubenswrapper[5136]: fi Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 07:15:50 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 07:15:50 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 07:15:50 crc kubenswrapper[5136]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 20 07:15:50 crc kubenswrapper[5136]: # support updates Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.988711 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell1-db-secret\\\" not found\"" pod="openstack/nova-cell1-0423-account-create-update-9sp6w" podUID="17669c27-ef49-4ced-a620-ef7394f02110" Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.990120 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-0f90-account-create-update-k7zvd" podUID="6638ac71-bcca-4dbb-9ec3-d9ef0da336db" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.990736 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0f90-account-create-update-k7zvd"] Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.992540 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" (UID: "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.994288 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 07:15:50 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95,Command:[/bin/sh -c #!/bin/bash Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: if [ -n "placement" ]; then Mar 20 07:15:50 crc kubenswrapper[5136]: GRANT_DATABASE="placement" Mar 20 07:15:50 crc kubenswrapper[5136]: else Mar 20 07:15:50 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 07:15:50 crc kubenswrapper[5136]: fi Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 07:15:50 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 07:15:50 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 07:15:50 crc kubenswrapper[5136]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 20 07:15:50 crc kubenswrapper[5136]: # support updates Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.995413 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" (UID: "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.995466 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-a0f6-account-create-update-d5xps" podUID="1490877f-a8fa-4bcd-8c33-be84b9b890aa" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.000233 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a0f6-account-create-update-d5xps"] Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.001450 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.008144 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.010161 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ccc70cce-242d-4c99-8d3f-ddb541904e29" (UID: "ccc70cce-242d-4c99-8d3f-ddb541904e29"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.011687 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ede60bf-5bc5-4267-9849-9389df070048-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "0ede60bf-5bc5-4267-9849-9389df070048" (UID: "0ede60bf-5bc5-4267-9849-9389df070048"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.020630 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79272887-6a7f-4336-858a-6844ed6e8a37-operator-scripts\") pod \"79272887-6a7f-4336-858a-6844ed6e8a37\" (UID: \"79272887-6a7f-4336-858a-6844ed6e8a37\") " Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.020933 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgnfv\" (UniqueName: \"kubernetes.io/projected/79272887-6a7f-4336-858a-6844ed6e8a37-kube-api-access-hgnfv\") pod \"79272887-6a7f-4336-858a-6844ed6e8a37\" (UID: \"79272887-6a7f-4336-858a-6844ed6e8a37\") " Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.021446 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79272887-6a7f-4336-858a-6844ed6e8a37-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "79272887-6a7f-4336-858a-6844ed6e8a37" (UID: "79272887-6a7f-4336-858a-6844ed6e8a37"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.021595 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ede60bf-5bc5-4267-9849-9389df070048-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.021619 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.021634 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.021644 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79272887-6a7f-4336-858a-6844ed6e8a37-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.021654 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17ad787b-18bc-4afd-840b-2458b494094a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.021663 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ede60bf-5bc5-4267-9849-9389df070048-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.021674 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 
07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.021684 5136 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/17ad787b-18bc-4afd-840b-2458b494094a-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.021695 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.035121 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79272887-6a7f-4336-858a-6844ed6e8a37-kube-api-access-hgnfv" (OuterVolumeSpecName: "kube-api-access-hgnfv") pod "79272887-6a7f-4336-858a-6844ed6e8a37" (UID: "79272887-6a7f-4336-858a-6844ed6e8a37"). InnerVolumeSpecName "kube-api-access-hgnfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.036039 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17ad787b-18bc-4afd-840b-2458b494094a-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "17ad787b-18bc-4afd-840b-2458b494094a" (UID: "17ad787b-18bc-4afd-840b-2458b494094a"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.040370 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="103dd4a533822b5106c261bfa1f3d9201de7d9327b3b2827802ee9cd5b825fc5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.047028 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="103dd4a533822b5106c261bfa1f3d9201de7d9327b3b2827802ee9cd5b825fc5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.048851 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="103dd4a533822b5106c261bfa1f3d9201de7d9327b3b2827802ee9cd5b825fc5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.048999 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f2e8f54f-5434-4cf0-94b9-38648bf7ba77" containerName="nova-scheduler-scheduler" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.123011 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-combined-ca-bundle\") pod \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " Mar 20 07:15:51 crc 
kubenswrapper[5136]: I0320 07:15:51.123102 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-combined-ca-bundle\") pod \"63ab8493-eb78-41d9-b368-bba74dc78166\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.123184 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld2qt\" (UniqueName: \"kubernetes.io/projected/63ab8493-eb78-41d9-b368-bba74dc78166-kube-api-access-ld2qt\") pod \"63ab8493-eb78-41d9-b368-bba74dc78166\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.123220 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-etc-swift\") pod \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.123245 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-vencrypt-tls-certs\") pod \"63ab8493-eb78-41d9-b368-bba74dc78166\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.123297 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-config-data\") pod \"63ab8493-eb78-41d9-b368-bba74dc78166\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.123331 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpskn\" (UniqueName: 
\"kubernetes.io/projected/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-kube-api-access-gpskn\") pod \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.123376 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-public-tls-certs\") pod \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.123408 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-config-data\") pod \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.123475 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-internal-tls-certs\") pod \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.123548 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-log-httpd\") pod \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.123574 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-run-httpd\") pod \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.123606 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-nova-novncproxy-tls-certs\") pod \"63ab8493-eb78-41d9-b368-bba74dc78166\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.124172 5136 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/17ad787b-18bc-4afd-840b-2458b494094a-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.124192 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgnfv\" (UniqueName: \"kubernetes.io/projected/79272887-6a7f-4336-858a-6844ed6e8a37-kube-api-access-hgnfv\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.126411 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" (UID: "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.126552 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" (UID: "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.134014 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63ab8493-eb78-41d9-b368-bba74dc78166-kube-api-access-ld2qt" (OuterVolumeSpecName: "kube-api-access-ld2qt") pod "63ab8493-eb78-41d9-b368-bba74dc78166" (UID: "63ab8493-eb78-41d9-b368-bba74dc78166"). InnerVolumeSpecName "kube-api-access-ld2qt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.135959 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" (UID: "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.141955 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-kube-api-access-gpskn" (OuterVolumeSpecName: "kube-api-access-gpskn") pod "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" (UID: "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b"). InnerVolumeSpecName "kube-api-access-gpskn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.166199 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63ab8493-eb78-41d9-b368-bba74dc78166" (UID: "63ab8493-eb78-41d9-b368-bba74dc78166"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.214341 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-config-data" (OuterVolumeSpecName: "config-data") pod "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" (UID: "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.216181 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "63ab8493-eb78-41d9-b368-bba74dc78166" (UID: "63ab8493-eb78-41d9-b368-bba74dc78166"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.228032 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpskn\" (UniqueName: \"kubernetes.io/projected/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-kube-api-access-gpskn\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.228070 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.228083 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.228095 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-run-httpd\") on node \"crc\" DevicePath 
\"\"" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.228108 5136 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.228121 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.228134 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld2qt\" (UniqueName: \"kubernetes.io/projected/63ab8493-eb78-41d9-b368-bba74dc78166-kube-api-access-ld2qt\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.228145 5136 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.260864 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "63ab8493-eb78-41d9-b368-bba74dc78166" (UID: "63ab8493-eb78-41d9-b368-bba74dc78166"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.274113 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-config-data" (OuterVolumeSpecName: "config-data") pod "63ab8493-eb78-41d9-b368-bba74dc78166" (UID: "63ab8493-eb78-41d9-b368-bba74dc78166"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.317127 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" (UID: "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.323948 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" (UID: "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.329914 5136 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.329946 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.329959 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.329969 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-internal-tls-certs\") on node \"crc\" 
DevicePath \"\"" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.335990 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" (UID: "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.400755 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0f90-account-create-update-k7zvd" event={"ID":"6638ac71-bcca-4dbb-9ec3-d9ef0da336db","Type":"ContainerStarted","Data":"85c5ef70686107412e859254f31a559f15711b7d6e9fc5a62fab2055603accd9"} Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.405356 5136 generic.go:334] "Generic (PLEG): container finished" podID="210df7e5-1603-40ec-bfa4-7b85525823b3" containerID="2f108d33fa61f242b6db0d1497e869fa649b09a841ba1ec8a5200036f1da6f44" exitCode=0 Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.405419 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"210df7e5-1603-40ec-bfa4-7b85525823b3","Type":"ContainerDied","Data":"2f108d33fa61f242b6db0d1497e869fa649b09a841ba1ec8a5200036f1da6f44"} Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.407081 5136 scope.go:117] "RemoveContainer" containerID="ef5361e0b73e9c41cc23b5ebe9348fce6a363e59e0bc84a305ad44756dd780af" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.407745 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.416962 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a0f6-account-create-update-d5xps" event={"ID":"1490877f-a8fa-4bcd-8c33-be84b9b890aa","Type":"ContainerStarted","Data":"fedc877299952c2908f9ddcf965bfe8418828d992b0c90e9fc6df145a89c5cf7"} Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.420513 5136 generic.go:334] "Generic (PLEG): container finished" podID="5d2085e7-db7e-4655-965c-027d03e474e0" containerID="221581ba79e5516de8738bb17ad49d51aed46039fd504cffbf7aaf70d3a7d74b" exitCode=1 Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.420580 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mzns4" event={"ID":"5d2085e7-db7e-4655-965c-027d03e474e0","Type":"ContainerDied","Data":"221581ba79e5516de8738bb17ad49d51aed46039fd504cffbf7aaf70d3a7d74b"} Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.421003 5136 scope.go:117] "RemoveContainer" containerID="221581ba79e5516de8738bb17ad49d51aed46039fd504cffbf7aaf70d3a7d74b" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.426014 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e3bd-account-create-update-gzjnr" event={"ID":"12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3","Type":"ContainerStarted","Data":"c7edf0f3f6556e6164f7e7ded6cdbe431275f40b863ad60fee11457248b95cfe"} Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.432102 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.436400 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.436428 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-744d6f84fc-bqcsc" event={"ID":"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b","Type":"ContainerDied","Data":"1d45fa03e9e760b3fecb6f7927ee88ef303052eb3da3a45f5cb31589469d2afb"} Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.443228 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0423-account-create-update-9sp6w" event={"ID":"17669c27-ef49-4ced-a620-ef7394f02110","Type":"ContainerStarted","Data":"f713db634b57c569428b9818f10efddf9949e7de62c4b479a6b5e91e44342d03"} Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.468641 5136 scope.go:117] "RemoveContainer" containerID="793f74839b7ab06cf4c132cbd574d3bb47712ec9caa473ebbd084643fcce6f31" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.504538 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"63ab8493-eb78-41d9-b368-bba74dc78166","Type":"ContainerDied","Data":"2da23f701d5f2888a554a42a1e274ecc8ec591c846eec70993fd4a7736e9b816"} Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.504629 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.517296 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.522947 5136 generic.go:334] "Generic (PLEG): container finished" podID="38885968-65f8-45e9-8e72-7464d5e78b85" containerID="9499f021f44b9cfb2d1d15e04c8ee300bce97edc1f58a0cfe173305e3c3b97ff" exitCode=0 Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.523121 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"38885968-65f8-45e9-8e72-7464d5e78b85","Type":"ContainerDied","Data":"9499f021f44b9cfb2d1d15e04c8ee300bce97edc1f58a0cfe173305e3c3b97ff"} Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.523307 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"38885968-65f8-45e9-8e72-7464d5e78b85","Type":"ContainerDied","Data":"1952d35f8dd4e8ea612b2d2c603f4623b8d407450e957b72f0fa46a725392225"} Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.526982 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vr74x" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.532690 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5429-account-create-update-54j5b" event={"ID":"79272887-6a7f-4336-858a-6844ed6e8a37","Type":"ContainerDied","Data":"7d5996405d499205fc914d73a14603978ef7492a89b03a07f615cb81cd56c34d"} Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.532776 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5429-account-create-update-54j5b" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.536143 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.543620 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.573830 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-744d6f84fc-bqcsc"] Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.625858 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-744d6f84fc-bqcsc"] Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.639402 5136 scope.go:117] "RemoveContainer" containerID="0693cd2e5dff17721d25b39746a75f4a67d98d5e960d0f7f816b7b0f7c0a7fac" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.647075 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38885968-65f8-45e9-8e72-7464d5e78b85-combined-ca-bundle\") pod \"38885968-65f8-45e9-8e72-7464d5e78b85\" (UID: \"38885968-65f8-45e9-8e72-7464d5e78b85\") " Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.647287 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38885968-65f8-45e9-8e72-7464d5e78b85-config-data\") pod \"38885968-65f8-45e9-8e72-7464d5e78b85\" (UID: \"38885968-65f8-45e9-8e72-7464d5e78b85\") " Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.647369 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j75vn\" (UniqueName: \"kubernetes.io/projected/38885968-65f8-45e9-8e72-7464d5e78b85-kube-api-access-j75vn\") pod \"38885968-65f8-45e9-8e72-7464d5e78b85\" (UID: \"38885968-65f8-45e9-8e72-7464d5e78b85\") " Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.715612 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38885968-65f8-45e9-8e72-7464d5e78b85-kube-api-access-j75vn" (OuterVolumeSpecName: "kube-api-access-j75vn") pod "38885968-65f8-45e9-8e72-7464d5e78b85" (UID: "38885968-65f8-45e9-8e72-7464d5e78b85"). 
InnerVolumeSpecName "kube-api-access-j75vn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.726173 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.726513 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="ceilometer-central-agent" containerID="cri-o://0ecf229966ba8c79d4898c6f188447ddf715aa5d36d580596d93cecd4aca45f3" gracePeriod=30 Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.727633 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="proxy-httpd" containerID="cri-o://6f8eb1aeebd08bbda86b110a89e3d6395071812ae31b73c86592f671595b894d" gracePeriod=30 Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.727708 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="sg-core" containerID="cri-o://37086a66c3062e12cadb5382a0b51ad5a523fc39db2e404a6feda0518d0eb230" gracePeriod=30 Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.727739 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="ceilometer-notification-agent" containerID="cri-o://cf4672bac844a81b21416c7a8623ac1f87041db75209a7a401cb201726b76413" gracePeriod=30 Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.750856 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j75vn\" (UniqueName: \"kubernetes.io/projected/38885968-65f8-45e9-8e72-7464d5e78b85-kube-api-access-j75vn\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.754780 5136 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.755010 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="c17493c5-d958-46ab-8e02-d190b2fa6944" containerName="kube-state-metrics" containerID="cri-o://4af2afde3b60e503cf744acf4fb08477b7ec46cb1b30cfb589608690a2df8849" gracePeriod=30 Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.770010 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38885968-65f8-45e9-8e72-7464d5e78b85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38885968-65f8-45e9-8e72-7464d5e78b85" (UID: "38885968-65f8-45e9-8e72-7464d5e78b85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.855479 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.855764 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="960739f0-c4a5-49c6-8e2a-9452815cf1a9" containerName="memcached" containerID="cri-o://184e304c1ac08ec0deea0a800adeaeabbaf3a333a8f4d43b893a721e45afd9b7" gracePeriod=30 Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.873017 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e762-account-create-update-5vpcp"] Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.874796 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38885968-65f8-45e9-8e72-7464d5e78b85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.875962 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/keystone-e762-account-create-update-5vpcp"] Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.877379 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.886366 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-vs5ks"] Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.901607 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-vs5ks"] Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.915994 5136 scope.go:117] "RemoveContainer" containerID="a4685f240b41a0ca80c93510a938eb41d236fed91b12082fd48b7f0a68f41629" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.923292 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38885968-65f8-45e9-8e72-7464d5e78b85-config-data" (OuterVolumeSpecName: "config-data") pod "38885968-65f8-45e9-8e72-7464d5e78b85" (UID: "38885968-65f8-45e9-8e72-7464d5e78b85"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.938530 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e762-account-create-update-l99mm"] Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.939378 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1461d1-f963-40b0-8cad-a5b2735eedcc" containerName="openstack-network-exporter" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.939392 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1461d1-f963-40b0-8cad-a5b2735eedcc" containerName="openstack-network-exporter" Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.939431 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" containerName="ovsdbserver-sb" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.939438 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" containerName="ovsdbserver-sb" Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.939457 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" containerName="proxy-server" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.939464 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" containerName="proxy-server" Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.939481 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1461d1-f963-40b0-8cad-a5b2735eedcc" containerName="ovsdbserver-nb" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.939487 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1461d1-f963-40b0-8cad-a5b2735eedcc" containerName="ovsdbserver-nb" Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.939509 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ede60bf-5bc5-4267-9849-9389df070048" 
containerName="openstack-network-exporter" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.939515 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ede60bf-5bc5-4267-9849-9389df070048" containerName="openstack-network-exporter" Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.939530 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ab8493-eb78-41d9-b368-bba74dc78166" containerName="nova-cell1-novncproxy-novncproxy" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.939537 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ab8493-eb78-41d9-b368-bba74dc78166" containerName="nova-cell1-novncproxy-novncproxy" Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.939558 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="210df7e5-1603-40ec-bfa4-7b85525823b3" containerName="mysql-bootstrap" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.939564 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="210df7e5-1603-40ec-bfa4-7b85525823b3" containerName="mysql-bootstrap" Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.939577 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" containerName="proxy-httpd" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.939585 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" containerName="proxy-httpd" Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.939606 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc70cce-242d-4c99-8d3f-ddb541904e29" containerName="dnsmasq-dns" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.939613 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc70cce-242d-4c99-8d3f-ddb541904e29" containerName="dnsmasq-dns" Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.939634 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="210df7e5-1603-40ec-bfa4-7b85525823b3" containerName="galera" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.939641 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="210df7e5-1603-40ec-bfa4-7b85525823b3" containerName="galera" Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.939660 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc70cce-242d-4c99-8d3f-ddb541904e29" containerName="init" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.939665 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc70cce-242d-4c99-8d3f-ddb541904e29" containerName="init" Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.939678 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" containerName="openstack-network-exporter" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.939684 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" containerName="openstack-network-exporter" Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.939702 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38885968-65f8-45e9-8e72-7464d5e78b85" containerName="nova-cell0-conductor-conductor" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.939710 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="38885968-65f8-45e9-8e72-7464d5e78b85" containerName="nova-cell0-conductor-conductor" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.940062 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" containerName="openstack-network-exporter" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.940077 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" containerName="ovsdbserver-sb" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.940084 5136 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="8b1461d1-f963-40b0-8cad-a5b2735eedcc" containerName="ovsdbserver-nb" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.940095 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccc70cce-242d-4c99-8d3f-ddb541904e29" containerName="dnsmasq-dns" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.940113 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" containerName="proxy-server" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.940125 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" containerName="proxy-httpd" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.940138 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="38885968-65f8-45e9-8e72-7464d5e78b85" containerName="nova-cell0-conductor-conductor" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.940148 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ede60bf-5bc5-4267-9849-9389df070048" containerName="openstack-network-exporter" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.940162 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="63ab8493-eb78-41d9-b368-bba74dc78166" containerName="nova-cell1-novncproxy-novncproxy" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.940180 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="210df7e5-1603-40ec-bfa4-7b85525823b3" containerName="galera" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.940190 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b1461d1-f963-40b0-8cad-a5b2735eedcc" containerName="openstack-network-exporter" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.940965 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e762-account-create-update-l99mm" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.942828 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.953017 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-766d94c967-pb9qd"] Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.953326 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-766d94c967-pb9qd" podUID="fab90141-26b4-4e46-a916-82190508d6e8" containerName="keystone-api" containerID="cri-o://55a5dab58a97a1d0db07eef0ac66d273a9614c32013fdde5757d7e6aa9ea047e" gracePeriod=30 Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.974047 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xztql"] Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.982871 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38885968-65f8-45e9-8e72-7464d5e78b85-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.003496 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e762-account-create-update-l99mm"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.021360 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.052002 5136 scope.go:117] "RemoveContainer" containerID="9499f021f44b9cfb2d1d15e04c8ee300bce97edc1f58a0cfe173305e3c3b97ff" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.052589 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xztql"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.084958 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.086392 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-kolla-config\") pod \"210df7e5-1603-40ec-bfa4-7b85525823b3\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.086453 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/210df7e5-1603-40ec-bfa4-7b85525823b3-combined-ca-bundle\") pod \"210df7e5-1603-40ec-bfa4-7b85525823b3\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.086472 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"210df7e5-1603-40ec-bfa4-7b85525823b3\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.086536 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-config-data-default\") pod \"210df7e5-1603-40ec-bfa4-7b85525823b3\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.086569 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/210df7e5-1603-40ec-bfa4-7b85525823b3-galera-tls-certs\") pod \"210df7e5-1603-40ec-bfa4-7b85525823b3\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.086588 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/210df7e5-1603-40ec-bfa4-7b85525823b3-config-data-generated\") pod \"210df7e5-1603-40ec-bfa4-7b85525823b3\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.086629 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-operator-scripts\") pod \"210df7e5-1603-40ec-bfa4-7b85525823b3\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.086659 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pv5f\" (UniqueName: \"kubernetes.io/projected/210df7e5-1603-40ec-bfa4-7b85525823b3-kube-api-access-8pv5f\") pod \"210df7e5-1603-40ec-bfa4-7b85525823b3\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.086807 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-operator-scripts\") pod \"keystone-e762-account-create-update-l99mm\" (UID: \"d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7\") " pod="openstack/keystone-e762-account-create-update-l99mm" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.086881 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22gkc\" (UniqueName: \"kubernetes.io/projected/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-kube-api-access-22gkc\") pod \"keystone-e762-account-create-update-l99mm\" (UID: \"d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7\") " pod="openstack/keystone-e762-account-create-update-l99mm" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.090119 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "210df7e5-1603-40ec-bfa4-7b85525823b3" (UID: "210df7e5-1603-40ec-bfa4-7b85525823b3"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.090441 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/210df7e5-1603-40ec-bfa4-7b85525823b3-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "210df7e5-1603-40ec-bfa4-7b85525823b3" (UID: "210df7e5-1603-40ec-bfa4-7b85525823b3"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.090978 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "210df7e5-1603-40ec-bfa4-7b85525823b3" (UID: "210df7e5-1603-40ec-bfa4-7b85525823b3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.092724 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.093133 5136 scope.go:117] "RemoveContainer" containerID="9499f021f44b9cfb2d1d15e04c8ee300bce97edc1f58a0cfe173305e3c3b97ff" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.093687 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "210df7e5-1603-40ec-bfa4-7b85525823b3" (UID: "210df7e5-1603-40ec-bfa4-7b85525823b3"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.094749 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210df7e5-1603-40ec-bfa4-7b85525823b3-kube-api-access-8pv5f" (OuterVolumeSpecName: "kube-api-access-8pv5f") pod "210df7e5-1603-40ec-bfa4-7b85525823b3" (UID: "210df7e5-1603-40ec-bfa4-7b85525823b3"). InnerVolumeSpecName "kube-api-access-8pv5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.095348 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9499f021f44b9cfb2d1d15e04c8ee300bce97edc1f58a0cfe173305e3c3b97ff\": container with ID starting with 9499f021f44b9cfb2d1d15e04c8ee300bce97edc1f58a0cfe173305e3c3b97ff not found: ID does not exist" containerID="9499f021f44b9cfb2d1d15e04c8ee300bce97edc1f58a0cfe173305e3c3b97ff" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.095466 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9499f021f44b9cfb2d1d15e04c8ee300bce97edc1f58a0cfe173305e3c3b97ff"} err="failed to get container status \"9499f021f44b9cfb2d1d15e04c8ee300bce97edc1f58a0cfe173305e3c3b97ff\": rpc error: code = NotFound desc = could not find container \"9499f021f44b9cfb2d1d15e04c8ee300bce97edc1f58a0cfe173305e3c3b97ff\": container with ID starting with 9499f021f44b9cfb2d1d15e04c8ee300bce97edc1f58a0cfe173305e3c3b97ff not found: ID does not exist" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.103793 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "mysql-db") pod "210df7e5-1603-40ec-bfa4-7b85525823b3" (UID: "210df7e5-1603-40ec-bfa4-7b85525823b3"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.112726 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-kfc9f"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.122419 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e762-account-create-update-l99mm"] Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.122960 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-22gkc operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-e762-account-create-update-l99mm" podUID="d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.130534 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a0f6-account-create-update-d5xps" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.143156 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-kfc9f"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.151800 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210df7e5-1603-40ec-bfa4-7b85525823b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "210df7e5-1603-40ec-bfa4-7b85525823b3" (UID: "210df7e5-1603-40ec-bfa4-7b85525823b3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.178118 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-mzns4"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.200929 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1490877f-a8fa-4bcd-8c33-be84b9b890aa-operator-scripts\") pod \"1490877f-a8fa-4bcd-8c33-be84b9b890aa\" (UID: \"1490877f-a8fa-4bcd-8c33-be84b9b890aa\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.201138 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgbc2\" (UniqueName: \"kubernetes.io/projected/1490877f-a8fa-4bcd-8c33-be84b9b890aa-kube-api-access-bgbc2\") pod \"1490877f-a8fa-4bcd-8c33-be84b9b890aa\" (UID: \"1490877f-a8fa-4bcd-8c33-be84b9b890aa\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.201448 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-operator-scripts\") pod \"keystone-e762-account-create-update-l99mm\" (UID: \"d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7\") " pod="openstack/keystone-e762-account-create-update-l99mm" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.201601 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22gkc\" (UniqueName: \"kubernetes.io/projected/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-kube-api-access-22gkc\") pod \"keystone-e762-account-create-update-l99mm\" (UID: \"d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7\") " pod="openstack/keystone-e762-account-create-update-l99mm" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.201881 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-config-data-default\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.201898 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/210df7e5-1603-40ec-bfa4-7b85525823b3-config-data-generated\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.201908 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.201917 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pv5f\" (UniqueName: \"kubernetes.io/projected/210df7e5-1603-40ec-bfa4-7b85525823b3-kube-api-access-8pv5f\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.201926 5136 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.201933 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/210df7e5-1603-40ec-bfa4-7b85525823b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.201953 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.202212 5136 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.202284 5136 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-operator-scripts podName:d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7 nodeName:}" failed. No retries permitted until 2026-03-20 07:15:52.702264158 +0000 UTC m=+1584.961575309 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-operator-scripts") pod "keystone-e762-account-create-update-l99mm" (UID: "d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7") : configmap "openstack-scripts" not found Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.202348 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.202373 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data podName:c355061d-c5fd-4655-aa7e-37b5a40a0400 nodeName:}" failed. No retries permitted until 2026-03-20 07:15:56.202366351 +0000 UTC m=+1588.461677502 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data") pod "rabbitmq-cell1-server-0" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400") : configmap "rabbitmq-cell1-config-data" not found Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.202685 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1490877f-a8fa-4bcd-8c33-be84b9b890aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1490877f-a8fa-4bcd-8c33-be84b9b890aa" (UID: "1490877f-a8fa-4bcd-8c33-be84b9b890aa"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.210382 5136 projected.go:194] Error preparing data for projected volume kube-api-access-22gkc for pod openstack/keystone-e762-account-create-update-l99mm: failed to fetch token: serviceaccounts "galera-openstack" not found Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.210461 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-kube-api-access-22gkc podName:d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7 nodeName:}" failed. No retries permitted until 2026-03-20 07:15:52.710428862 +0000 UTC m=+1584.969740013 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-22gkc" (UniqueName: "kubernetes.io/projected/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-kube-api-access-22gkc") pod "keystone-e762-account-create-update-l99mm" (UID: "d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7") : failed to fetch token: serviceaccounts "galera-openstack" not found Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.227920 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210df7e5-1603-40ec-bfa4-7b85525823b3-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "210df7e5-1603-40ec-bfa4-7b85525823b3" (UID: "210df7e5-1603-40ec-bfa4-7b85525823b3"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.228944 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1490877f-a8fa-4bcd-8c33-be84b9b890aa-kube-api-access-bgbc2" (OuterVolumeSpecName: "kube-api-access-bgbc2") pod "1490877f-a8fa-4bcd-8c33-be84b9b890aa" (UID: "1490877f-a8fa-4bcd-8c33-be84b9b890aa"). InnerVolumeSpecName "kube-api-access-bgbc2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.241052 5136 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.300278 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-vr74x"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.304682 5136 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.304711 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgbc2\" (UniqueName: \"kubernetes.io/projected/1490877f-a8fa-4bcd-8c33-be84b9b890aa-kube-api-access-bgbc2\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.304722 5136 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/210df7e5-1603-40ec-bfa4-7b85525823b3-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.304730 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1490877f-a8fa-4bcd-8c33-be84b9b890aa-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.318750 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-vr74x"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.346167 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bd85b459c-k44rj"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.352946 5136 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/openstack-galera-0" podUID="23c10323-3c49-4f00-8bf7-319e6f5834d0" containerName="galera" containerID="cri-o://bb7d36a7a8740f26ea095cd64110aa6bbc822a4f8057c379b5b6f3c659fbfbb2" gracePeriod=30 Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.362914 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bd85b459c-k44rj"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.378511 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5429-account-create-update-54j5b"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.384541 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-5429-account-create-update-54j5b"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.391111 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.415568 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0954a67c-5522-4338-b9e6-fc1b35b48cdb" path="/var/lib/kubelet/pods/0954a67c-5522-4338-b9e6-fc1b35b48cdb/volumes" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.416657 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ede60bf-5bc5-4267-9849-9389df070048" path="/var/lib/kubelet/pods/0ede60bf-5bc5-4267-9849-9389df070048/volumes" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.417938 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17ad787b-18bc-4afd-840b-2458b494094a" path="/var/lib/kubelet/pods/17ad787b-18bc-4afd-840b-2458b494094a/volumes" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.419438 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" path="/var/lib/kubelet/pods/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b/volumes" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.420474 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="63ab8493-eb78-41d9-b368-bba74dc78166" path="/var/lib/kubelet/pods/63ab8493-eb78-41d9-b368-bba74dc78166/volumes" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.421306 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72e22e43-fccc-4ee4-a170-8ff8b9959c1d" path="/var/lib/kubelet/pods/72e22e43-fccc-4ee4-a170-8ff8b9959c1d/volumes" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.423741 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79272887-6a7f-4336-858a-6844ed6e8a37" path="/var/lib/kubelet/pods/79272887-6a7f-4336-858a-6844ed6e8a37/volumes" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.424434 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4e39c5d-af98-44d6-a06d-f31555db758b" path="/var/lib/kubelet/pods/b4e39c5d-af98-44d6-a06d-f31555db758b/volumes" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.425575 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7e7cfea-b971-447e-a166-20b4827ce7dc" path="/var/lib/kubelet/pods/c7e7cfea-b971-447e-a166-20b4827ce7dc/volumes" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.426548 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccc70cce-242d-4c99-8d3f-ddb541904e29" path="/var/lib/kubelet/pods/ccc70cce-242d-4c99-8d3f-ddb541904e29/volumes" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.428691 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.564548 5136 generic.go:334] "Generic (PLEG): container finished" podID="9dc2d320-2468-4a45-ba6b-69ea478b5e8c" containerID="d7a0a1abaf7649e23b98061506f3cd0ac2d6d6bb3c69694e84fdfcd0ac7ca124" exitCode=0 Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.564632 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"9dc2d320-2468-4a45-ba6b-69ea478b5e8c","Type":"ContainerDied","Data":"d7a0a1abaf7649e23b98061506f3cd0ac2d6d6bb3c69694e84fdfcd0ac7ca124"} Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.566289 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a0f6-account-create-update-d5xps" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.566285 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a0f6-account-create-update-d5xps" event={"ID":"1490877f-a8fa-4bcd-8c33-be84b9b890aa","Type":"ContainerDied","Data":"fedc877299952c2908f9ddcf965bfe8418828d992b0c90e9fc6df145a89c5cf7"} Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.569452 5136 generic.go:334] "Generic (PLEG): container finished" podID="af66742a-1452-436f-a22e-7dc277cf690a" containerID="f58e5688042713c0b783865903009ad2d11f4198be5353e16c7c078fdfea1674" exitCode=0 Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.569528 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af66742a-1452-436f-a22e-7dc277cf690a","Type":"ContainerDied","Data":"f58e5688042713c0b783865903009ad2d11f4198be5353e16c7c078fdfea1674"} Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.571330 5136 generic.go:334] "Generic (PLEG): container finished" podID="5d2085e7-db7e-4655-965c-027d03e474e0" containerID="7950202bc7e7645f213c50f85961805c7e38b2378de8c350f722fea9bf137e17" exitCode=1 Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.571354 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mzns4" event={"ID":"5d2085e7-db7e-4655-965c-027d03e474e0","Type":"ContainerDied","Data":"7950202bc7e7645f213c50f85961805c7e38b2378de8c350f722fea9bf137e17"} Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.571397 5136 scope.go:117] "RemoveContainer" containerID="221581ba79e5516de8738bb17ad49d51aed46039fd504cffbf7aaf70d3a7d74b" Mar 20 
07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.574973 5136 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-mzns4" secret="" err="secret \"galera-openstack-dockercfg-7hd6r\" not found" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.575033 5136 scope.go:117] "RemoveContainer" containerID="7950202bc7e7645f213c50f85961805c7e38b2378de8c350f722fea9bf137e17" Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.575641 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-mzns4_openstack(5d2085e7-db7e-4655-965c-027d03e474e0)\"" pod="openstack/root-account-create-update-mzns4" podUID="5d2085e7-db7e-4655-965c-027d03e474e0" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.589183 5136 generic.go:334] "Generic (PLEG): container finished" podID="fe20adf9-d6e2-4487-a176-32ddd55eb051" containerID="08811e57d5ae08f29bf6ac8aa7f95e929dc4a7c310d13f900fb2c645979418d6" exitCode=0 Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.589227 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe20adf9-d6e2-4487-a176-32ddd55eb051","Type":"ContainerDied","Data":"08811e57d5ae08f29bf6ac8aa7f95e929dc4a7c310d13f900fb2c645979418d6"} Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.612330 5136 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.612387 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d2085e7-db7e-4655-965c-027d03e474e0-operator-scripts podName:5d2085e7-db7e-4655-965c-027d03e474e0 nodeName:}" failed. 
No retries permitted until 2026-03-20 07:15:53.112374691 +0000 UTC m=+1585.371685842 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/5d2085e7-db7e-4655-965c-027d03e474e0-operator-scripts") pod "root-account-create-update-mzns4" (UID: "5d2085e7-db7e-4655-965c-027d03e474e0") : configmap "openstack-scripts" not found Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.617226 5136 generic.go:334] "Generic (PLEG): container finished" podID="27a464a7-cea7-4265-a264-85a991452e95" containerID="6f8eb1aeebd08bbda86b110a89e3d6395071812ae31b73c86592f671595b894d" exitCode=0 Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.617253 5136 generic.go:334] "Generic (PLEG): container finished" podID="27a464a7-cea7-4265-a264-85a991452e95" containerID="37086a66c3062e12cadb5382a0b51ad5a523fc39db2e404a6feda0518d0eb230" exitCode=2 Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.617261 5136 generic.go:334] "Generic (PLEG): container finished" podID="27a464a7-cea7-4265-a264-85a991452e95" containerID="0ecf229966ba8c79d4898c6f188447ddf715aa5d36d580596d93cecd4aca45f3" exitCode=0 Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.617301 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27a464a7-cea7-4265-a264-85a991452e95","Type":"ContainerDied","Data":"6f8eb1aeebd08bbda86b110a89e3d6395071812ae31b73c86592f671595b894d"} Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.617328 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27a464a7-cea7-4265-a264-85a991452e95","Type":"ContainerDied","Data":"37086a66c3062e12cadb5382a0b51ad5a523fc39db2e404a6feda0518d0eb230"} Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.617338 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"27a464a7-cea7-4265-a264-85a991452e95","Type":"ContainerDied","Data":"0ecf229966ba8c79d4898c6f188447ddf715aa5d36d580596d93cecd4aca45f3"} Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.620011 5136 generic.go:334] "Generic (PLEG): container finished" podID="141e5942-2bf9-424c-a6a7-7c93afdad7dc" containerID="3cd1e6e7f78367e01aa376387fc42404408757a25929ca8c639774857d99ccfd" exitCode=0 Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.620089 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"141e5942-2bf9-424c-a6a7-7c93afdad7dc","Type":"ContainerDied","Data":"3cd1e6e7f78367e01aa376387fc42404408757a25929ca8c639774857d99ccfd"} Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.622933 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"210df7e5-1603-40ec-bfa4-7b85525823b3","Type":"ContainerDied","Data":"8182f12d4de26ad384abd8e2a3a9007acaaad7cd8b7e832cca1481d0c6ef89ef"} Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.623030 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.640285 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.647008 5136 generic.go:334] "Generic (PLEG): container finished" podID="76d08c01-d488-4f36-9998-7f074633c7c5" containerID="f3795d92724d61612c4e998a075d7ebdd89fee122f5c02cbcebdad3f46cd4b7c" exitCode=0 Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.647081 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"76d08c01-d488-4f36-9998-7f074633c7c5","Type":"ContainerDied","Data":"f3795d92724d61612c4e998a075d7ebdd89fee122f5c02cbcebdad3f46cd4b7c"} Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.690114 5136 generic.go:334] "Generic (PLEG): container finished" podID="c17493c5-d958-46ab-8e02-d190b2fa6944" containerID="4af2afde3b60e503cf744acf4fb08477b7ec46cb1b30cfb589608690a2df8849" exitCode=2 Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.690158 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c17493c5-d958-46ab-8e02-d190b2fa6944","Type":"ContainerDied","Data":"4af2afde3b60e503cf744acf4fb08477b7ec46cb1b30cfb589608690a2df8849"} Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.690182 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e762-account-create-update-l99mm" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.690204 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c17493c5-d958-46ab-8e02-d190b2fa6944","Type":"ContainerDied","Data":"634be8a4401daf1087438b6bbc45263ddd43a3a043a4d0fdc9026fb30fdc45cf"} Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.690218 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="634be8a4401daf1087438b6bbc45263ddd43a3a043a4d0fdc9026fb30fdc45cf" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.691118 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.713212 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf4lm\" (UniqueName: \"kubernetes.io/projected/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-api-access-sf4lm\") pod \"c17493c5-d958-46ab-8e02-d190b2fa6944\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.713711 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-state-metrics-tls-certs\") pod \"c17493c5-d958-46ab-8e02-d190b2fa6944\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.713760 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-state-metrics-tls-config\") pod \"c17493c5-d958-46ab-8e02-d190b2fa6944\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.713882 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-combined-ca-bundle\") pod \"c17493c5-d958-46ab-8e02-d190b2fa6944\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.714328 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-operator-scripts\") pod \"keystone-e762-account-create-update-l99mm\" (UID: \"d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7\") " pod="openstack/keystone-e762-account-create-update-l99mm" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.714404 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22gkc\" (UniqueName: \"kubernetes.io/projected/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-kube-api-access-22gkc\") pod \"keystone-e762-account-create-update-l99mm\" (UID: \"d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7\") " pod="openstack/keystone-e762-account-create-update-l99mm" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.719327 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-api-access-sf4lm" (OuterVolumeSpecName: "kube-api-access-sf4lm") pod "c17493c5-d958-46ab-8e02-d190b2fa6944" (UID: "c17493c5-d958-46ab-8e02-d190b2fa6944"). InnerVolumeSpecName "kube-api-access-sf4lm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.720210 5136 projected.go:194] Error preparing data for projected volume kube-api-access-22gkc for pod openstack/keystone-e762-account-create-update-l99mm: failed to fetch token: serviceaccounts "galera-openstack" not found Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.720283 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-kube-api-access-22gkc podName:d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7 nodeName:}" failed. No retries permitted until 2026-03-20 07:15:53.720264166 +0000 UTC m=+1585.979575317 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-22gkc" (UniqueName: "kubernetes.io/projected/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-kube-api-access-22gkc") pod "keystone-e762-account-create-update-l99mm" (UID: "d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7") : failed to fetch token: serviceaccounts "galera-openstack" not found Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.721612 5136 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.721711 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-operator-scripts podName:d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7 nodeName:}" failed. No retries permitted until 2026-03-20 07:15:53.721687641 +0000 UTC m=+1585.980998792 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-operator-scripts") pod "keystone-e762-account-create-update-l99mm" (UID: "d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7") : configmap "openstack-scripts" not found Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.734254 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e3bd-account-create-update-gzjnr" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.757573 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0423-account-create-update-9sp6w" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.784424 5136 scope.go:117] "RemoveContainer" containerID="2f108d33fa61f242b6db0d1497e869fa649b09a841ba1ec8a5200036f1da6f44" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.794988 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c17493c5-d958-46ab-8e02-d190b2fa6944" (UID: "c17493c5-d958-46ab-8e02-d190b2fa6944"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.796680 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "c17493c5-d958-46ab-8e02-d190b2fa6944" (UID: "c17493c5-d958-46ab-8e02-d190b2fa6944"). InnerVolumeSpecName "kube-state-metrics-tls-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.805465 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a0f6-account-create-update-d5xps"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.814958 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0f90-account-create-update-k7zvd" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.816413 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h64x\" (UniqueName: \"kubernetes.io/projected/12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3-kube-api-access-5h64x\") pod \"12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3\" (UID: \"12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.816562 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3-operator-scripts\") pod \"12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3\" (UID: \"12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.816686 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzplt\" (UniqueName: \"kubernetes.io/projected/17669c27-ef49-4ced-a620-ef7394f02110-kube-api-access-jzplt\") pod \"17669c27-ef49-4ced-a620-ef7394f02110\" (UID: \"17669c27-ef49-4ced-a620-ef7394f02110\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.816762 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17669c27-ef49-4ced-a620-ef7394f02110-operator-scripts\") pod \"17669c27-ef49-4ced-a620-ef7394f02110\" (UID: \"17669c27-ef49-4ced-a620-ef7394f02110\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.817972 5136 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/keystone-e762-account-create-update-l99mm" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.818470 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf4lm\" (UniqueName: \"kubernetes.io/projected/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-api-access-sf4lm\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.818493 5136 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.818505 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.820259 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3" (UID: "12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.820985 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17669c27-ef49-4ced-a620-ef7394f02110-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "17669c27-ef49-4ced-a620-ef7394f02110" (UID: "17669c27-ef49-4ced-a620-ef7394f02110"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.824500 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17669c27-ef49-4ced-a620-ef7394f02110-kube-api-access-jzplt" (OuterVolumeSpecName: "kube-api-access-jzplt") pod "17669c27-ef49-4ced-a620-ef7394f02110" (UID: "17669c27-ef49-4ced-a620-ef7394f02110"). InnerVolumeSpecName "kube-api-access-jzplt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.846783 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-a0f6-account-create-update-d5xps"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.853506 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3-kube-api-access-5h64x" (OuterVolumeSpecName: "kube-api-access-5h64x") pod "12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3" (UID: "12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3"). InnerVolumeSpecName "kube-api-access-5h64x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.864287 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "c17493c5-d958-46ab-8e02-d190b2fa6944" (UID: "c17493c5-d958-46ab-8e02-d190b2fa6944"). InnerVolumeSpecName "kube-state-metrics-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.871961 5136 scope.go:117] "RemoveContainer" containerID="efcc419ead7f776e9a762552e20519a145846b32963d5cf946e1216d1b57e9d3" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.897313 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.906713 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.909694 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.919113 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.919879 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6638ac71-bcca-4dbb-9ec3-d9ef0da336db-operator-scripts\") pod \"6638ac71-bcca-4dbb-9ec3-d9ef0da336db\" (UID: \"6638ac71-bcca-4dbb-9ec3-d9ef0da336db\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.920035 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x89p2\" (UniqueName: \"kubernetes.io/projected/6638ac71-bcca-4dbb-9ec3-d9ef0da336db-kube-api-access-x89p2\") pod \"6638ac71-bcca-4dbb-9ec3-d9ef0da336db\" (UID: \"6638ac71-bcca-4dbb-9ec3-d9ef0da336db\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.920489 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17669c27-ef49-4ced-a620-ef7394f02110-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.920554 5136 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-5h64x\" (UniqueName: \"kubernetes.io/projected/12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3-kube-api-access-5h64x\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.920619 5136 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.920695 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.920768 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzplt\" (UniqueName: \"kubernetes.io/projected/17669c27-ef49-4ced-a620-ef7394f02110-kube-api-access-jzplt\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.920707 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6638ac71-bcca-4dbb-9ec3-d9ef0da336db-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6638ac71-bcca-4dbb-9ec3-d9ef0da336db" (UID: "6638ac71-bcca-4dbb-9ec3-d9ef0da336db"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.929681 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6638ac71-bcca-4dbb-9ec3-d9ef0da336db-kube-api-access-x89p2" (OuterVolumeSpecName: "kube-api-access-x89p2") pod "6638ac71-bcca-4dbb-9ec3-d9ef0da336db" (UID: "6638ac71-bcca-4dbb-9ec3-d9ef0da336db"). InnerVolumeSpecName "kube-api-access-x89p2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.930588 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.947339 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.980231 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.022101 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-combined-ca-bundle\") pod \"fe20adf9-d6e2-4487-a176-32ddd55eb051\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.022308 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-public-tls-certs\") pod \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.022853 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-public-tls-certs\") pod \"fe20adf9-d6e2-4487-a176-32ddd55eb051\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.022931 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe20adf9-d6e2-4487-a176-32ddd55eb051-httpd-run\") pod \"fe20adf9-d6e2-4487-a176-32ddd55eb051\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " Mar 
20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.023021 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krchg\" (UniqueName: \"kubernetes.io/projected/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-kube-api-access-krchg\") pod \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.023086 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af66742a-1452-436f-a22e-7dc277cf690a-logs\") pod \"af66742a-1452-436f-a22e-7dc277cf690a\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.023156 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-combined-ca-bundle\") pod \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.023225 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-logs\") pod \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.023315 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88mxs\" (UniqueName: \"kubernetes.io/projected/af66742a-1452-436f-a22e-7dc277cf690a-kube-api-access-88mxs\") pod \"af66742a-1452-436f-a22e-7dc277cf690a\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.023530 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-config-data\") pod \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.023624 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-scripts\") pod \"fe20adf9-d6e2-4487-a176-32ddd55eb051\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.023704 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8jnh\" (UniqueName: \"kubernetes.io/projected/fe20adf9-d6e2-4487-a176-32ddd55eb051-kube-api-access-b8jnh\") pod \"fe20adf9-d6e2-4487-a176-32ddd55eb051\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.023774 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-combined-ca-bundle\") pod \"af66742a-1452-436f-a22e-7dc277cf690a\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.023856 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"fe20adf9-d6e2-4487-a176-32ddd55eb051\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.023935 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-config-data\") pod \"af66742a-1452-436f-a22e-7dc277cf690a\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.024000 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-internal-tls-certs\") pod \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.024070 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-nova-metadata-tls-certs\") pod \"af66742a-1452-436f-a22e-7dc277cf690a\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.024140 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-config-data\") pod \"fe20adf9-d6e2-4487-a176-32ddd55eb051\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.024223 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe20adf9-d6e2-4487-a176-32ddd55eb051-logs\") pod \"fe20adf9-d6e2-4487-a176-32ddd55eb051\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.024717 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6638ac71-bcca-4dbb-9ec3-d9ef0da336db-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.024781 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x89p2\" (UniqueName: \"kubernetes.io/projected/6638ac71-bcca-4dbb-9ec3-d9ef0da336db-kube-api-access-x89p2\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.025338 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/fe20adf9-d6e2-4487-a176-32ddd55eb051-logs" (OuterVolumeSpecName: "logs") pod "fe20adf9-d6e2-4487-a176-32ddd55eb051" (UID: "fe20adf9-d6e2-4487-a176-32ddd55eb051"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.028233 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-scripts" (OuterVolumeSpecName: "scripts") pod "fe20adf9-d6e2-4487-a176-32ddd55eb051" (UID: "fe20adf9-d6e2-4487-a176-32ddd55eb051"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.029185 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-logs" (OuterVolumeSpecName: "logs") pod "9dc2d320-2468-4a45-ba6b-69ea478b5e8c" (UID: "9dc2d320-2468-4a45-ba6b-69ea478b5e8c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.030147 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-kube-api-access-krchg" (OuterVolumeSpecName: "kube-api-access-krchg") pod "9dc2d320-2468-4a45-ba6b-69ea478b5e8c" (UID: "9dc2d320-2468-4a45-ba6b-69ea478b5e8c"). InnerVolumeSpecName "kube-api-access-krchg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.030159 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe20adf9-d6e2-4487-a176-32ddd55eb051-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fe20adf9-d6e2-4487-a176-32ddd55eb051" (UID: "fe20adf9-d6e2-4487-a176-32ddd55eb051"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.030539 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af66742a-1452-436f-a22e-7dc277cf690a-logs" (OuterVolumeSpecName: "logs") pod "af66742a-1452-436f-a22e-7dc277cf690a" (UID: "af66742a-1452-436f-a22e-7dc277cf690a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.032038 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af66742a-1452-436f-a22e-7dc277cf690a-kube-api-access-88mxs" (OuterVolumeSpecName: "kube-api-access-88mxs") pod "af66742a-1452-436f-a22e-7dc277cf690a" (UID: "af66742a-1452-436f-a22e-7dc277cf690a"). InnerVolumeSpecName "kube-api-access-88mxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.080997 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "fe20adf9-d6e2-4487-a176-32ddd55eb051" (UID: "fe20adf9-d6e2-4487-a176-32ddd55eb051"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.081517 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe20adf9-d6e2-4487-a176-32ddd55eb051-kube-api-access-b8jnh" (OuterVolumeSpecName: "kube-api-access-b8jnh") pod "fe20adf9-d6e2-4487-a176-32ddd55eb051" (UID: "fe20adf9-d6e2-4487-a176-32ddd55eb051"). InnerVolumeSpecName "kube-api-access-b8jnh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.115313 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-config-data" (OuterVolumeSpecName: "config-data") pod "af66742a-1452-436f-a22e-7dc277cf690a" (UID: "af66742a-1452-436f-a22e-7dc277cf690a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.132927 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.133470 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe20adf9-d6e2-4487-a176-32ddd55eb051-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.133494 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krchg\" (UniqueName: \"kubernetes.io/projected/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-kube-api-access-krchg\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.133504 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af66742a-1452-436f-a22e-7dc277cf690a-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.133512 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.133520 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88mxs\" (UniqueName: \"kubernetes.io/projected/af66742a-1452-436f-a22e-7dc277cf690a-kube-api-access-88mxs\") on node \"crc\" DevicePath \"\"" Mar 20 
07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.133528 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.133536 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8jnh\" (UniqueName: \"kubernetes.io/projected/fe20adf9-d6e2-4487-a176-32ddd55eb051-kube-api-access-b8jnh\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.133554 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.133562 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.133572 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe20adf9-d6e2-4487-a176-32ddd55eb051-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: E0320 07:15:53.134531 5136 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Mar 20 07:15:53 crc kubenswrapper[5136]: E0320 07:15:53.134597 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d2085e7-db7e-4655-965c-027d03e474e0-operator-scripts podName:5d2085e7-db7e-4655-965c-027d03e474e0 nodeName:}" failed. No retries permitted until 2026-03-20 07:15:54.134578179 +0000 UTC m=+1586.393889330 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/5d2085e7-db7e-4655-965c-027d03e474e0-operator-scripts") pod "root-account-create-update-mzns4" (UID: "5d2085e7-db7e-4655-965c-027d03e474e0") : configmap "openstack-scripts" not found Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.152895 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-config-data" (OuterVolumeSpecName: "config-data") pod "9dc2d320-2468-4a45-ba6b-69ea478b5e8c" (UID: "9dc2d320-2468-4a45-ba6b-69ea478b5e8c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.163219 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.183794 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9dc2d320-2468-4a45-ba6b-69ea478b5e8c" (UID: "9dc2d320-2468-4a45-ba6b-69ea478b5e8c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.190582 5136 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.197189 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "af66742a-1452-436f-a22e-7dc277cf690a" (UID: "af66742a-1452-436f-a22e-7dc277cf690a"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.208899 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fe20adf9-d6e2-4487-a176-32ddd55eb051" (UID: "fe20adf9-d6e2-4487-a176-32ddd55eb051"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.217110 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe20adf9-d6e2-4487-a176-32ddd55eb051" (UID: "fe20adf9-d6e2-4487-a176-32ddd55eb051"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.234702 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-public-tls-certs\") pod \"76d08c01-d488-4f36-9998-7f074633c7c5\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.234757 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.234777 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-scripts\") pod \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 
07:15:53.234822 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl6kj\" (UniqueName: \"kubernetes.io/projected/141e5942-2bf9-424c-a6a7-7c93afdad7dc-kube-api-access-jl6kj\") pod \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.234845 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-config-data\") pod \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.234920 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76d08c01-d488-4f36-9998-7f074633c7c5-logs\") pod \"76d08c01-d488-4f36-9998-7f074633c7c5\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.234959 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-scripts\") pod \"76d08c01-d488-4f36-9998-7f074633c7c5\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.234976 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-internal-tls-certs\") pod \"76d08c01-d488-4f36-9998-7f074633c7c5\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235003 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-combined-ca-bundle\") pod 
\"76d08c01-d488-4f36-9998-7f074633c7c5\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235028 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/141e5942-2bf9-424c-a6a7-7c93afdad7dc-logs\") pod \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235065 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-config-data\") pod \"76d08c01-d488-4f36-9998-7f074633c7c5\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235094 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-combined-ca-bundle\") pod \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235113 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-config-data-custom\") pod \"76d08c01-d488-4f36-9998-7f074633c7c5\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235139 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76d08c01-d488-4f36-9998-7f074633c7c5-etc-machine-id\") pod \"76d08c01-d488-4f36-9998-7f074633c7c5\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235177 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-scwjn\" (UniqueName: \"kubernetes.io/projected/76d08c01-d488-4f36-9998-7f074633c7c5-kube-api-access-scwjn\") pod \"76d08c01-d488-4f36-9998-7f074633c7c5\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235206 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/141e5942-2bf9-424c-a6a7-7c93afdad7dc-httpd-run\") pod \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235221 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-internal-tls-certs\") pod \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235575 5136 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235593 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235602 5136 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235612 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" 
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235620 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235629 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.236284 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/141e5942-2bf9-424c-a6a7-7c93afdad7dc-logs" (OuterVolumeSpecName: "logs") pod "141e5942-2bf9-424c-a6a7-7c93afdad7dc" (UID: "141e5942-2bf9-424c-a6a7-7c93afdad7dc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.243064 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-scripts" (OuterVolumeSpecName: "scripts") pod "141e5942-2bf9-424c-a6a7-7c93afdad7dc" (UID: "141e5942-2bf9-424c-a6a7-7c93afdad7dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.251481 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76d08c01-d488-4f36-9998-7f074633c7c5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "76d08c01-d488-4f36-9998-7f074633c7c5" (UID: "76d08c01-d488-4f36-9998-7f074633c7c5"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.251847 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76d08c01-d488-4f36-9998-7f074633c7c5-logs" (OuterVolumeSpecName: "logs") pod "76d08c01-d488-4f36-9998-7f074633c7c5" (UID: "76d08c01-d488-4f36-9998-7f074633c7c5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.252287 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/141e5942-2bf9-424c-a6a7-7c93afdad7dc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "141e5942-2bf9-424c-a6a7-7c93afdad7dc" (UID: "141e5942-2bf9-424c-a6a7-7c93afdad7dc"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.257739 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-config-data" (OuterVolumeSpecName: "config-data") pod "fe20adf9-d6e2-4487-a176-32ddd55eb051" (UID: "fe20adf9-d6e2-4487-a176-32ddd55eb051"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.259131 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d08c01-d488-4f36-9998-7f074633c7c5-kube-api-access-scwjn" (OuterVolumeSpecName: "kube-api-access-scwjn") pod "76d08c01-d488-4f36-9998-7f074633c7c5" (UID: "76d08c01-d488-4f36-9998-7f074633c7c5"). InnerVolumeSpecName "kube-api-access-scwjn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.259327 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "76d08c01-d488-4f36-9998-7f074633c7c5" (UID: "76d08c01-d488-4f36-9998-7f074633c7c5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.259694 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-scripts" (OuterVolumeSpecName: "scripts") pod "76d08c01-d488-4f36-9998-7f074633c7c5" (UID: "76d08c01-d488-4f36-9998-7f074633c7c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.259793 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "141e5942-2bf9-424c-a6a7-7c93afdad7dc" (UID: "141e5942-2bf9-424c-a6a7-7c93afdad7dc"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.260881 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/141e5942-2bf9-424c-a6a7-7c93afdad7dc-kube-api-access-jl6kj" (OuterVolumeSpecName: "kube-api-access-jl6kj") pod "141e5942-2bf9-424c-a6a7-7c93afdad7dc" (UID: "141e5942-2bf9-424c-a6a7-7c93afdad7dc"). InnerVolumeSpecName "kube-api-access-jl6kj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.281083 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9dc2d320-2468-4a45-ba6b-69ea478b5e8c" (UID: "9dc2d320-2468-4a45-ba6b-69ea478b5e8c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.295672 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76d08c01-d488-4f36-9998-7f074633c7c5" (UID: "76d08c01-d488-4f36-9998-7f074633c7c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.336386 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "141e5942-2bf9-424c-a6a7-7c93afdad7dc" (UID: "141e5942-2bf9-424c-a6a7-7c93afdad7dc"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.336577 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-internal-tls-certs\") pod \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " Mar 20 07:15:53 crc kubenswrapper[5136]: W0320 07:15:53.336729 5136 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/141e5942-2bf9-424c-a6a7-7c93afdad7dc/volumes/kubernetes.io~secret/internal-tls-certs Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.336750 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "141e5942-2bf9-424c-a6a7-7c93afdad7dc" (UID: "141e5942-2bf9-424c-a6a7-7c93afdad7dc"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337105 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337127 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337172 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl6kj\" (UniqueName: \"kubernetes.io/projected/141e5942-2bf9-424c-a6a7-7c93afdad7dc-kube-api-access-jl6kj\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337184 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76d08c01-d488-4f36-9998-7f074633c7c5-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337196 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337207 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337218 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337228 5136 reconciler_common.go:293] "Volume detached for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/141e5942-2bf9-424c-a6a7-7c93afdad7dc-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337238 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337248 5136 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76d08c01-d488-4f36-9998-7f074633c7c5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337260 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scwjn\" (UniqueName: \"kubernetes.io/projected/76d08c01-d488-4f36-9998-7f074633c7c5-kube-api-access-scwjn\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337270 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/141e5942-2bf9-424c-a6a7-7c93afdad7dc-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337280 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337292 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.374989 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-combined-ca-bundle" (OuterVolumeSpecName: 
"combined-ca-bundle") pod "af66742a-1452-436f-a22e-7dc277cf690a" (UID: "af66742a-1452-436f-a22e-7dc277cf690a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.386980 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9dc2d320-2468-4a45-ba6b-69ea478b5e8c" (UID: "9dc2d320-2468-4a45-ba6b-69ea478b5e8c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.388578 5136 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.415493 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "141e5942-2bf9-424c-a6a7-7c93afdad7dc" (UID: "141e5942-2bf9-424c-a6a7-7c93afdad7dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.452001 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "76d08c01-d488-4f36-9998-7f074633c7c5" (UID: "76d08c01-d488-4f36-9998-7f074633c7c5"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.455041 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-config-data" (OuterVolumeSpecName: "config-data") pod "141e5942-2bf9-424c-a6a7-7c93afdad7dc" (UID: "141e5942-2bf9-424c-a6a7-7c93afdad7dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.465839 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.465870 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.465882 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.465893 5136 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.465903 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.465916 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.505369 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "76d08c01-d488-4f36-9998-7f074633c7c5" (UID: "76d08c01-d488-4f36-9998-7f074633c7c5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.535105 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-config-data" (OuterVolumeSpecName: "config-data") pod "76d08c01-d488-4f36-9998-7f074633c7c5" (UID: "76d08c01-d488-4f36-9998-7f074633c7c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.567979 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.568003 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.710103 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-0f90-account-create-update-k7zvd" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.710127 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0f90-account-create-update-k7zvd" event={"ID":"6638ac71-bcca-4dbb-9ec3-d9ef0da336db","Type":"ContainerDied","Data":"85c5ef70686107412e859254f31a559f15711b7d6e9fc5a62fab2055603accd9"} Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.713854 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe20adf9-d6e2-4487-a176-32ddd55eb051","Type":"ContainerDied","Data":"9a85481c71cfaf5395cd3f9b7fc38785dc4f071aee32ec8ab4a9c0c94e256ebb"} Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.713930 5136 scope.go:117] "RemoveContainer" containerID="08811e57d5ae08f29bf6ac8aa7f95e929dc4a7c310d13f900fb2c645979418d6" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.714164 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.731621 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af66742a-1452-436f-a22e-7dc277cf690a","Type":"ContainerDied","Data":"5ace71694fb7cbf300c7ddec51eede62a282d34c72dfdbb59b1c4aee86e0f888"} Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.731799 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.742716 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e3bd-account-create-update-gzjnr" event={"ID":"12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3","Type":"ContainerDied","Data":"c7edf0f3f6556e6164f7e7ded6cdbe431275f40b863ad60fee11457248b95cfe"} Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.742739 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e3bd-account-create-update-gzjnr" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.772157 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22gkc\" (UniqueName: \"kubernetes.io/projected/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-kube-api-access-22gkc\") pod \"keystone-e762-account-create-update-l99mm\" (UID: \"d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7\") " pod="openstack/keystone-e762-account-create-update-l99mm" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.772273 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-operator-scripts\") pod \"keystone-e762-account-create-update-l99mm\" (UID: \"d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7\") " pod="openstack/keystone-e762-account-create-update-l99mm" Mar 20 07:15:53 crc kubenswrapper[5136]: E0320 07:15:53.772409 5136 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Mar 20 07:15:53 crc kubenswrapper[5136]: E0320 07:15:53.772452 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-operator-scripts podName:d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7 nodeName:}" failed. No retries permitted until 2026-03-20 07:15:55.772438576 +0000 UTC m=+1588.031749727 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-operator-scripts") pod "keystone-e762-account-create-update-l99mm" (UID: "d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7") : configmap "openstack-scripts" not found Mar 20 07:15:53 crc kubenswrapper[5136]: E0320 07:15:53.776397 5136 projected.go:194] Error preparing data for projected volume kube-api-access-22gkc for pod openstack/keystone-e762-account-create-update-l99mm: failed to fetch token: serviceaccounts "galera-openstack" not found Mar 20 07:15:53 crc kubenswrapper[5136]: E0320 07:15:53.776447 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-kube-api-access-22gkc podName:d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7 nodeName:}" failed. No retries permitted until 2026-03-20 07:15:55.776434271 +0000 UTC m=+1588.035745422 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-22gkc" (UniqueName: "kubernetes.io/projected/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-kube-api-access-22gkc") pod "keystone-e762-account-create-update-l99mm" (UID: "d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7") : failed to fetch token: serviceaccounts "galera-openstack" not found Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.788454 5136 generic.go:334] "Generic (PLEG): container finished" podID="2a59ab3d-3094-4e10-bbde-44479696f752" containerID="cd65718bfac09f4d934fe1bf3f629f5d852e12343f9a1b480d6984e7497c79aa" exitCode=0 Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.788467 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" event={"ID":"2a59ab3d-3094-4e10-bbde-44479696f752","Type":"ContainerDied","Data":"cd65718bfac09f4d934fe1bf3f629f5d852e12343f9a1b480d6984e7497c79aa"} Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.790643 5136 generic.go:334] "Generic (PLEG): container finished" 
podID="bd71646c-cb64-4a01-8076-449c812955d5" containerID="605e2f1b6fdab04852864ae8ba9a1933cc6fbe478b172080fd10f5d23b52f0fe" exitCode=0 Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.790687 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc8db4fdb-hpjdg" event={"ID":"bd71646c-cb64-4a01-8076-449c812955d5","Type":"ContainerDied","Data":"605e2f1b6fdab04852864ae8ba9a1933cc6fbe478b172080fd10f5d23b52f0fe"} Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.790703 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc8db4fdb-hpjdg" event={"ID":"bd71646c-cb64-4a01-8076-449c812955d5","Type":"ContainerDied","Data":"456674fb963104b875873a874337c20143adc46f3a809c5e2ae04c7d773c4641"} Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.790715 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="456674fb963104b875873a874337c20143adc46f3a809c5e2ae04c7d773c4641" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.792235 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0423-account-create-update-9sp6w" event={"ID":"17669c27-ef49-4ced-a620-ef7394f02110","Type":"ContainerDied","Data":"f713db634b57c569428b9818f10efddf9949e7de62c4b479a6b5e91e44342d03"} Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.792344 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0423-account-create-update-9sp6w" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.801922 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.801918 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9dc2d320-2468-4a45-ba6b-69ea478b5e8c","Type":"ContainerDied","Data":"3e1ef3947decd903ee68750e425aae8b49ed5ff25716e3720a5bb3892e21abbc"} Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.808722 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"141e5942-2bf9-424c-a6a7-7c93afdad7dc","Type":"ContainerDied","Data":"05a82ec3ba1c0f76f4e62724ce02ba143154e57bc0f0c3ee005d1f4b00278ffd"} Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.808799 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.816738 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"76d08c01-d488-4f36-9998-7f074633c7c5","Type":"ContainerDied","Data":"aab757c2dc94939d0797a6c8423da847cee9bc95e622fe61ec74ea5928df97cd"} Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.816861 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.823883 5136 generic.go:334] "Generic (PLEG): container finished" podID="27a464a7-cea7-4265-a264-85a991452e95" containerID="cf4672bac844a81b21416c7a8623ac1f87041db75209a7a401cb201726b76413" exitCode=0 Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.823958 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27a464a7-cea7-4265-a264-85a991452e95","Type":"ContainerDied","Data":"cf4672bac844a81b21416c7a8623ac1f87041db75209a7a401cb201726b76413"} Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.855566 5136 generic.go:334] "Generic (PLEG): container finished" podID="960739f0-c4a5-49c6-8e2a-9452815cf1a9" containerID="184e304c1ac08ec0deea0a800adeaeabbaf3a333a8f4d43b893a721e45afd9b7" exitCode=0 Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.855632 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"960739f0-c4a5-49c6-8e2a-9452815cf1a9","Type":"ContainerDied","Data":"184e304c1ac08ec0deea0a800adeaeabbaf3a333a8f4d43b893a721e45afd9b7"} Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.865496 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e762-account-create-update-l99mm" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.865573 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.146332 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.155561 5136 scope.go:117] "RemoveContainer" containerID="ddf75c942dcbf834dfd88d5bc8a1e8a0fa00deb6223f84700bb4c75bb0cce612" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.173671 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.178903 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-internal-tls-certs\") pod \"bd71646c-cb64-4a01-8076-449c812955d5\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.179045 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktsr8\" (UniqueName: \"kubernetes.io/projected/bd71646c-cb64-4a01-8076-449c812955d5-kube-api-access-ktsr8\") pod \"bd71646c-cb64-4a01-8076-449c812955d5\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.179113 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-scripts\") pod \"bd71646c-cb64-4a01-8076-449c812955d5\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.179182 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd71646c-cb64-4a01-8076-449c812955d5-logs\") pod \"bd71646c-cb64-4a01-8076-449c812955d5\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.179217 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-public-tls-certs\") pod \"bd71646c-cb64-4a01-8076-449c812955d5\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.179237 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-combined-ca-bundle\") pod \"bd71646c-cb64-4a01-8076-449c812955d5\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.179264 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-config-data\") pod \"bd71646c-cb64-4a01-8076-449c812955d5\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " Mar 20 07:15:54 crc kubenswrapper[5136]: E0320 07:15:54.179768 5136 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Mar 20 07:15:54 crc kubenswrapper[5136]: E0320 07:15:54.179850 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d2085e7-db7e-4655-965c-027d03e474e0-operator-scripts podName:5d2085e7-db7e-4655-965c-027d03e474e0 nodeName:}" failed. No retries permitted until 2026-03-20 07:15:56.179832564 +0000 UTC m=+1588.439143715 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/5d2085e7-db7e-4655-965c-027d03e474e0-operator-scripts") pod "root-account-create-update-mzns4" (UID: "5d2085e7-db7e-4655-965c-027d03e474e0") : configmap "openstack-scripts" not found Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.181170 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd71646c-cb64-4a01-8076-449c812955d5-logs" (OuterVolumeSpecName: "logs") pod "bd71646c-cb64-4a01-8076-449c812955d5" (UID: "bd71646c-cb64-4a01-8076-449c812955d5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.195429 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd71646c-cb64-4a01-8076-449c812955d5-kube-api-access-ktsr8" (OuterVolumeSpecName: "kube-api-access-ktsr8") pod "bd71646c-cb64-4a01-8076-449c812955d5" (UID: "bd71646c-cb64-4a01-8076-449c812955d5"). InnerVolumeSpecName "kube-api-access-ktsr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.196188 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-scripts" (OuterVolumeSpecName: "scripts") pod "bd71646c-cb64-4a01-8076-449c812955d5" (UID: "bd71646c-cb64-4a01-8076-449c812955d5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.233347 5136 scope.go:117] "RemoveContainer" containerID="f58e5688042713c0b783865903009ad2d11f4198be5353e16c7c078fdfea1674" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.286719 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/960739f0-c4a5-49c6-8e2a-9452815cf1a9-memcached-tls-certs\") pod \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.286946 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/960739f0-c4a5-49c6-8e2a-9452815cf1a9-config-data\") pod \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.287068 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/960739f0-c4a5-49c6-8e2a-9452815cf1a9-kolla-config\") pod \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.287174 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmhrz\" (UniqueName: \"kubernetes.io/projected/960739f0-c4a5-49c6-8e2a-9452815cf1a9-kube-api-access-lmhrz\") pod \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.287217 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960739f0-c4a5-49c6-8e2a-9452815cf1a9-combined-ca-bundle\") pod \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\" (UID: 
\"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.288596 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktsr8\" (UniqueName: \"kubernetes.io/projected/bd71646c-cb64-4a01-8076-449c812955d5-kube-api-access-ktsr8\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.288628 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.288651 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd71646c-cb64-4a01-8076-449c812955d5-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.292788 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/960739f0-c4a5-49c6-8e2a-9452815cf1a9-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "960739f0-c4a5-49c6-8e2a-9452815cf1a9" (UID: "960739f0-c4a5-49c6-8e2a-9452815cf1a9"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.299810 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.301576 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.308576 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.316833 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.319941 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-config-data" (OuterVolumeSpecName: "config-data") pod "bd71646c-cb64-4a01-8076-449c812955d5" (UID: "bd71646c-cb64-4a01-8076-449c812955d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.332927 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.333736 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/960739f0-c4a5-49c6-8e2a-9452815cf1a9-kube-api-access-lmhrz" (OuterVolumeSpecName: "kube-api-access-lmhrz") pod "960739f0-c4a5-49c6-8e2a-9452815cf1a9" (UID: "960739f0-c4a5-49c6-8e2a-9452815cf1a9"). InnerVolumeSpecName "kube-api-access-lmhrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.336120 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/960739f0-c4a5-49c6-8e2a-9452815cf1a9-config-data" (OuterVolumeSpecName: "config-data") pod "960739f0-c4a5-49c6-8e2a-9452815cf1a9" (UID: "960739f0-c4a5-49c6-8e2a-9452815cf1a9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.353079 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.388514 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-e3bd-account-create-update-gzjnr"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.391146 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-config-data\") pod \"2a59ab3d-3094-4e10-bbde-44479696f752\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.391218 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctrhs\" (UniqueName: \"kubernetes.io/projected/2a59ab3d-3094-4e10-bbde-44479696f752-kube-api-access-ctrhs\") pod \"2a59ab3d-3094-4e10-bbde-44479696f752\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.391241 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-sg-core-conf-yaml\") pod \"27a464a7-cea7-4265-a264-85a991452e95\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.391283 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-config-data-custom\") pod \"2a59ab3d-3094-4e10-bbde-44479696f752\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.391320 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-combined-ca-bundle\") pod \"2a59ab3d-3094-4e10-bbde-44479696f752\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.391384 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-scripts\") pod \"27a464a7-cea7-4265-a264-85a991452e95\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.391448 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq8fl\" (UniqueName: \"kubernetes.io/projected/27a464a7-cea7-4265-a264-85a991452e95-kube-api-access-zq8fl\") pod \"27a464a7-cea7-4265-a264-85a991452e95\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.391472 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a464a7-cea7-4265-a264-85a991452e95-run-httpd\") pod \"27a464a7-cea7-4265-a264-85a991452e95\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.391496 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-ceilometer-tls-certs\") pod \"27a464a7-cea7-4265-a264-85a991452e95\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.391512 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-config-data\") pod \"27a464a7-cea7-4265-a264-85a991452e95\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.391530 
5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a59ab3d-3094-4e10-bbde-44479696f752-logs\") pod \"2a59ab3d-3094-4e10-bbde-44479696f752\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.391574 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-combined-ca-bundle\") pod \"27a464a7-cea7-4265-a264-85a991452e95\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.391591 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a464a7-cea7-4265-a264-85a991452e95-log-httpd\") pod \"27a464a7-cea7-4265-a264-85a991452e95\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.392376 5136 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/960739f0-c4a5-49c6-8e2a-9452815cf1a9-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.392444 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.392497 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmhrz\" (UniqueName: \"kubernetes.io/projected/960739f0-c4a5-49c6-8e2a-9452815cf1a9-kube-api-access-lmhrz\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.392551 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/960739f0-c4a5-49c6-8e2a-9452815cf1a9-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.393197 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27a464a7-cea7-4265-a264-85a991452e95-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "27a464a7-cea7-4265-a264-85a991452e95" (UID: "27a464a7-cea7-4265-a264-85a991452e95"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.395339 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-scripts" (OuterVolumeSpecName: "scripts") pod "27a464a7-cea7-4265-a264-85a991452e95" (UID: "27a464a7-cea7-4265-a264-85a991452e95"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.396366 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a59ab3d-3094-4e10-bbde-44479696f752-logs" (OuterVolumeSpecName: "logs") pod "2a59ab3d-3094-4e10-bbde-44479696f752" (UID: "2a59ab3d-3094-4e10-bbde-44479696f752"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.396416 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-e3bd-account-create-update-gzjnr"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.396586 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27a464a7-cea7-4265-a264-85a991452e95-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "27a464a7-cea7-4265-a264-85a991452e95" (UID: "27a464a7-cea7-4265-a264-85a991452e95"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.422063 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a464a7-cea7-4265-a264-85a991452e95-kube-api-access-zq8fl" (OuterVolumeSpecName: "kube-api-access-zq8fl") pod "27a464a7-cea7-4265-a264-85a991452e95" (UID: "27a464a7-cea7-4265-a264-85a991452e95"). InnerVolumeSpecName "kube-api-access-zq8fl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.479010 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2a59ab3d-3094-4e10-bbde-44479696f752" (UID: "2a59ab3d-3094-4e10-bbde-44479696f752"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.479527 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a59ab3d-3094-4e10-bbde-44479696f752-kube-api-access-ctrhs" (OuterVolumeSpecName: "kube-api-access-ctrhs") pod "2a59ab3d-3094-4e10-bbde-44479696f752" (UID: "2a59ab3d-3094-4e10-bbde-44479696f752"). InnerVolumeSpecName "kube-api-access-ctrhs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.482979 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3" path="/var/lib/kubelet/pods/12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3/volumes" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.483449 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1490877f-a8fa-4bcd-8c33-be84b9b890aa" path="/var/lib/kubelet/pods/1490877f-a8fa-4bcd-8c33-be84b9b890aa/volumes" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.483968 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210df7e5-1603-40ec-bfa4-7b85525823b3" path="/var/lib/kubelet/pods/210df7e5-1603-40ec-bfa4-7b85525823b3/volumes" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.484554 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38885968-65f8-45e9-8e72-7464d5e78b85" path="/var/lib/kubelet/pods/38885968-65f8-45e9-8e72-7464d5e78b85/volumes" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.485607 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76d08c01-d488-4f36-9998-7f074633c7c5" path="/var/lib/kubelet/pods/76d08c01-d488-4f36-9998-7f074633c7c5/volumes" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.486458 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af66742a-1452-436f-a22e-7dc277cf690a" path="/var/lib/kubelet/pods/af66742a-1452-436f-a22e-7dc277cf690a/volumes" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.487651 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" path="/var/lib/kubelet/pods/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a/volumes" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.495889 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2a59ab3d-3094-4e10-bbde-44479696f752-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.495948 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a464a7-cea7-4265-a264-85a991452e95-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.495960 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctrhs\" (UniqueName: \"kubernetes.io/projected/2a59ab3d-3094-4e10-bbde-44479696f752-kube-api-access-ctrhs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.495969 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.495978 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.495989 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq8fl\" (UniqueName: \"kubernetes.io/projected/27a464a7-cea7-4265-a264-85a991452e95-kube-api-access-zq8fl\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.495998 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a464a7-cea7-4265-a264-85a991452e95-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.510308 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960739f0-c4a5-49c6-8e2a-9452815cf1a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"960739f0-c4a5-49c6-8e2a-9452815cf1a9" (UID: "960739f0-c4a5-49c6-8e2a-9452815cf1a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.525388 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a59ab3d-3094-4e10-bbde-44479696f752" (UID: "2a59ab3d-3094-4e10-bbde-44479696f752"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.543011 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "27a464a7-cea7-4265-a264-85a991452e95" (UID: "27a464a7-cea7-4265-a264-85a991452e95"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.560985 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd71646c-cb64-4a01-8076-449c812955d5" (UID: "bd71646c-cb64-4a01-8076-449c812955d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.578529 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-config-data" (OuterVolumeSpecName: "config-data") pod "2a59ab3d-3094-4e10-bbde-44479696f752" (UID: "2a59ab3d-3094-4e10-bbde-44479696f752"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.581267 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bd71646c-cb64-4a01-8076-449c812955d5" (UID: "bd71646c-cb64-4a01-8076-449c812955d5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.589027 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "27a464a7-cea7-4265-a264-85a991452e95" (UID: "27a464a7-cea7-4265-a264-85a991452e95"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.589102 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960739f0-c4a5-49c6-8e2a-9452815cf1a9-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "960739f0-c4a5-49c6-8e2a-9452815cf1a9" (UID: "960739f0-c4a5-49c6-8e2a-9452815cf1a9"). InnerVolumeSpecName "memcached-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.601154 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960739f0-c4a5-49c6-8e2a-9452815cf1a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.601229 5136 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/960739f0-c4a5-49c6-8e2a-9452815cf1a9-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.601240 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.601270 5136 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.601280 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.601289 5136 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.601298 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: 
I0320 07:15:54.601307 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.609088 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-config-data" (OuterVolumeSpecName: "config-data") pod "27a464a7-cea7-4265-a264-85a991452e95" (UID: "27a464a7-cea7-4265-a264-85a991452e95"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.612545 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27a464a7-cea7-4265-a264-85a991452e95" (UID: "27a464a7-cea7-4265-a264-85a991452e95"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.629711 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bd71646c-cb64-4a01-8076-449c812955d5" (UID: "bd71646c-cb64-4a01-8076-449c812955d5"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663048 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663090 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663103 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663114 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663127 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e762-account-create-update-l99mm"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663137 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e762-account-create-update-l99mm"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663149 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-0423-account-create-update-9sp6w"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663159 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-0423-account-create-update-9sp6w"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663174 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0f90-account-create-update-k7zvd"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663185 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0f90-account-create-update-k7zvd"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663197 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663212 5136 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663224 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663237 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.671259 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.676532 5136 scope.go:117] "RemoveContainer" containerID="deb975c81a5be70590a5a6b6abaf2e6a45b6848b791c313d96f348b2eb335fa8" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.677211 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7acbc76f-ff83-451e-826f-5fd1f977f74f/ovn-northd/0.log" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.677261 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.700171 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mzns4" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702051 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-kolla-config\") pod \"23c10323-3c49-4f00-8bf7-319e6f5834d0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702081 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/23c10323-3c49-4f00-8bf7-319e6f5834d0-galera-tls-certs\") pod \"23c10323-3c49-4f00-8bf7-319e6f5834d0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702131 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-config-data-default\") pod \"23c10323-3c49-4f00-8bf7-319e6f5834d0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702156 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdvcz\" (UniqueName: \"kubernetes.io/projected/7acbc76f-ff83-451e-826f-5fd1f977f74f-kube-api-access-jdvcz\") pod \"7acbc76f-ff83-451e-826f-5fd1f977f74f\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702207 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-metrics-certs-tls-certs\") pod \"7acbc76f-ff83-451e-826f-5fd1f977f74f\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702228 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7acbc76f-ff83-451e-826f-5fd1f977f74f-ovn-rundir\") pod \"7acbc76f-ff83-451e-826f-5fd1f977f74f\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702250 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23c10323-3c49-4f00-8bf7-319e6f5834d0-combined-ca-bundle\") pod \"23c10323-3c49-4f00-8bf7-319e6f5834d0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702301 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-ovn-northd-tls-certs\") pod \"7acbc76f-ff83-451e-826f-5fd1f977f74f\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702319 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7acbc76f-ff83-451e-826f-5fd1f977f74f-config\") pod \"7acbc76f-ff83-451e-826f-5fd1f977f74f\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702337 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmgbx\" (UniqueName: \"kubernetes.io/projected/23c10323-3c49-4f00-8bf7-319e6f5834d0-kube-api-access-kmgbx\") pod \"23c10323-3c49-4f00-8bf7-319e6f5834d0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702362 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7acbc76f-ff83-451e-826f-5fd1f977f74f-scripts\") pod \"7acbc76f-ff83-451e-826f-5fd1f977f74f\" (UID: 
\"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702401 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/23c10323-3c49-4f00-8bf7-319e6f5834d0-config-data-generated\") pod \"23c10323-3c49-4f00-8bf7-319e6f5834d0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702435 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"23c10323-3c49-4f00-8bf7-319e6f5834d0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702467 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-operator-scripts\") pod \"23c10323-3c49-4f00-8bf7-319e6f5834d0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702493 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-combined-ca-bundle\") pod \"7acbc76f-ff83-451e-826f-5fd1f977f74f\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702809 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702839 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 
07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702852 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22gkc\" (UniqueName: \"kubernetes.io/projected/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-kube-api-access-22gkc\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702862 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702874 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.703537 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7acbc76f-ff83-451e-826f-5fd1f977f74f-config" (OuterVolumeSpecName: "config") pod "7acbc76f-ff83-451e-826f-5fd1f977f74f" (UID: "7acbc76f-ff83-451e-826f-5fd1f977f74f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.703599 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "23c10323-3c49-4f00-8bf7-319e6f5834d0" (UID: "23c10323-3c49-4f00-8bf7-319e6f5834d0"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.703902 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "23c10323-3c49-4f00-8bf7-319e6f5834d0" (UID: "23c10323-3c49-4f00-8bf7-319e6f5834d0"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.704184 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7acbc76f-ff83-451e-826f-5fd1f977f74f-scripts" (OuterVolumeSpecName: "scripts") pod "7acbc76f-ff83-451e-826f-5fd1f977f74f" (UID: "7acbc76f-ff83-451e-826f-5fd1f977f74f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.704509 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7acbc76f-ff83-451e-826f-5fd1f977f74f-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "7acbc76f-ff83-451e-826f-5fd1f977f74f" (UID: "7acbc76f-ff83-451e-826f-5fd1f977f74f"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.704716 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "23c10323-3c49-4f00-8bf7-319e6f5834d0" (UID: "23c10323-3c49-4f00-8bf7-319e6f5834d0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.705069 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23c10323-3c49-4f00-8bf7-319e6f5834d0-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "23c10323-3c49-4f00-8bf7-319e6f5834d0" (UID: "23c10323-3c49-4f00-8bf7-319e6f5834d0"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.717259 5136 scope.go:117] "RemoveContainer" containerID="d7a0a1abaf7649e23b98061506f3cd0ac2d6d6bb3c69694e84fdfcd0ac7ca124" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.722780 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7acbc76f-ff83-451e-826f-5fd1f977f74f-kube-api-access-jdvcz" (OuterVolumeSpecName: "kube-api-access-jdvcz") pod "7acbc76f-ff83-451e-826f-5fd1f977f74f" (UID: "7acbc76f-ff83-451e-826f-5fd1f977f74f"). InnerVolumeSpecName "kube-api-access-jdvcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.731300 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "mysql-db") pod "23c10323-3c49-4f00-8bf7-319e6f5834d0" (UID: "23c10323-3c49-4f00-8bf7-319e6f5834d0"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.739572 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23c10323-3c49-4f00-8bf7-319e6f5834d0-kube-api-access-kmgbx" (OuterVolumeSpecName: "kube-api-access-kmgbx") pod "23c10323-3c49-4f00-8bf7-319e6f5834d0" (UID: "23c10323-3c49-4f00-8bf7-319e6f5834d0"). InnerVolumeSpecName "kube-api-access-kmgbx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.741293 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.741694 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-78df67c79-bqz8t" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.745142 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7acbc76f-ff83-451e-826f-5fd1f977f74f" (UID: "7acbc76f-ff83-451e-826f-5fd1f977f74f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.751009 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23c10323-3c49-4f00-8bf7-319e6f5834d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23c10323-3c49-4f00-8bf7-319e6f5834d0" (UID: "23c10323-3c49-4f00-8bf7-319e6f5834d0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.760215 5136 scope.go:117] "RemoveContainer" containerID="a74fc94f08f1ff50393fca876adcf8dbd23397ae767f8be9bb23ab400c14c48a" Mar 20 07:15:54 crc kubenswrapper[5136]: E0320 07:15:54.780696 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:15:54 crc kubenswrapper[5136]: E0320 07:15:54.781069 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:15:54 crc kubenswrapper[5136]: E0320 07:15:54.781422 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:15:54 crc kubenswrapper[5136]: E0320 07:15:54.781450 5136 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" probeType="Readiness" 
pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovsdb-server" Mar 20 07:15:54 crc kubenswrapper[5136]: E0320 07:15:54.783394 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.787464 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23c10323-3c49-4f00-8bf7-319e6f5834d0-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "23c10323-3c49-4f00-8bf7-319e6f5834d0" (UID: "23c10323-3c49-4f00-8bf7-319e6f5834d0"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.791986 5136 scope.go:117] "RemoveContainer" containerID="3cd1e6e7f78367e01aa376387fc42404408757a25929ca8c639774857d99ccfd" Mar 20 07:15:54 crc kubenswrapper[5136]: E0320 07:15:54.792084 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.792122 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "7acbc76f-ff83-451e-826f-5fd1f977f74f" (UID: "7acbc76f-ff83-451e-826f-5fd1f977f74f"). InnerVolumeSpecName "ovn-northd-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.794021 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "7acbc76f-ff83-451e-826f-5fd1f977f74f" (UID: "7acbc76f-ff83-451e-826f-5fd1f977f74f"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: E0320 07:15:54.794047 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:15:54 crc kubenswrapper[5136]: E0320 07:15:54.794121 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovs-vswitchd" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.803658 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-config-data\") pod \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.803692 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-internal-tls-certs\") pod \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " 
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.803727 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-combined-ca-bundle\") pod \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.803782 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d2085e7-db7e-4655-965c-027d03e474e0-operator-scripts\") pod \"5d2085e7-db7e-4655-965c-027d03e474e0\" (UID: \"5d2085e7-db7e-4655-965c-027d03e474e0\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.803893 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-config-data-custom\") pod \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.803925 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-combined-ca-bundle\") pod \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.803956 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knrlt\" (UniqueName: \"kubernetes.io/projected/5d2085e7-db7e-4655-965c-027d03e474e0-kube-api-access-knrlt\") pod \"5d2085e7-db7e-4655-965c-027d03e474e0\" (UID: \"5d2085e7-db7e-4655-965c-027d03e474e0\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.803977 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnkzp\" (UniqueName: 
\"kubernetes.io/projected/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-kube-api-access-gnkzp\") pod \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804007 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzlq9\" (UniqueName: \"kubernetes.io/projected/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-kube-api-access-jzlq9\") pod \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804030 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-logs\") pod \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804060 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-config-data-custom\") pod \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804090 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-logs\") pod \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804106 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-public-tls-certs\") pod \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804122 
5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-config-data\") pod \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804479 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d2085e7-db7e-4655-965c-027d03e474e0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d2085e7-db7e-4655-965c-027d03e474e0" (UID: "5d2085e7-db7e-4655-965c-027d03e474e0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804852 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7acbc76f-ff83-451e-826f-5fd1f977f74f-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804871 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d2085e7-db7e-4655-965c-027d03e474e0-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804883 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/23c10323-3c49-4f00-8bf7-319e6f5834d0-config-data-generated\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804902 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804912 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804922 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804932 5136 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804942 5136 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/23c10323-3c49-4f00-8bf7-319e6f5834d0-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804951 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-config-data-default\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804960 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdvcz\" (UniqueName: \"kubernetes.io/projected/7acbc76f-ff83-451e-826f-5fd1f977f74f-kube-api-access-jdvcz\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804969 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804977 5136 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7acbc76f-ff83-451e-826f-5fd1f977f74f-ovn-rundir\") on node 
\"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804986 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23c10323-3c49-4f00-8bf7-319e6f5834d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804996 5136 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.805004 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7acbc76f-ff83-451e-826f-5fd1f977f74f-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.805012 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmgbx\" (UniqueName: \"kubernetes.io/projected/23c10323-3c49-4f00-8bf7-319e6f5834d0-kube-api-access-kmgbx\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.809740 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-kube-api-access-jzlq9" (OuterVolumeSpecName: "kube-api-access-jzlq9") pod "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" (UID: "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b"). InnerVolumeSpecName "kube-api-access-jzlq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.814741 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-logs" (OuterVolumeSpecName: "logs") pod "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" (UID: "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.814890 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-logs" (OuterVolumeSpecName: "logs") pod "6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" (UID: "6fd2bfe2-2220-4617-ac9a-d02f6222cfd0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.815853 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d2085e7-db7e-4655-965c-027d03e474e0-kube-api-access-knrlt" (OuterVolumeSpecName: "kube-api-access-knrlt") pod "5d2085e7-db7e-4655-965c-027d03e474e0" (UID: "5d2085e7-db7e-4655-965c-027d03e474e0"). InnerVolumeSpecName "kube-api-access-knrlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.818113 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-kube-api-access-gnkzp" (OuterVolumeSpecName: "kube-api-access-gnkzp") pod "6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" (UID: "6fd2bfe2-2220-4617-ac9a-d02f6222cfd0"). InnerVolumeSpecName "kube-api-access-gnkzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.821622 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" (UID: "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.824108 5136 scope.go:117] "RemoveContainer" containerID="662932f3f7c10a1e5293dce36be1a7b6f6fe4dc40e2e2d324b7c68188c034162" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.825460 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" (UID: "6fd2bfe2-2220-4617-ac9a-d02f6222cfd0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.833152 5136 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.837784 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" (UID: "6fd2bfe2-2220-4617-ac9a-d02f6222cfd0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.844059 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" (UID: "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.848339 5136 scope.go:117] "RemoveContainer" containerID="f3795d92724d61612c4e998a075d7ebdd89fee122f5c02cbcebdad3f46cd4b7c" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.860719 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-config-data" (OuterVolumeSpecName: "config-data") pod "6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" (UID: "6fd2bfe2-2220-4617-ac9a-d02f6222cfd0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.864830 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" (UID: "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.868667 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-config-data" (OuterVolumeSpecName: "config-data") pod "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" (UID: "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.869425 5136 scope.go:117] "RemoveContainer" containerID="c15b277e6d0d090e0e5755609decc556ab1c1f03a878f14749a60fdfeeec941e" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.882729 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7acbc76f-ff83-451e-826f-5fd1f977f74f/ovn-northd/0.log" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.882778 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" (UID: "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.882780 5136 generic.go:334] "Generic (PLEG): container finished" podID="7acbc76f-ff83-451e-826f-5fd1f977f74f" containerID="91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242" exitCode=139 Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.882802 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7acbc76f-ff83-451e-826f-5fd1f977f74f","Type":"ContainerDied","Data":"91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242"} Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.883049 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7acbc76f-ff83-451e-826f-5fd1f977f74f","Type":"ContainerDied","Data":"f77fa3b0a75190383cf99cb089377cd7d03639ec4bda09b8550cc55a30016174"} Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.882881 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.889797 5136 generic.go:334] "Generic (PLEG): container finished" podID="f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" containerID="b31f76eec4146fd8f445255e6551fe34e753f46896981935d5fb62c7bb4ad9cd" exitCode=0 Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.889884 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64845646dd-wf28v" event={"ID":"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b","Type":"ContainerDied","Data":"b31f76eec4146fd8f445255e6551fe34e753f46896981935d5fb62c7bb4ad9cd"} Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.889906 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64845646dd-wf28v" event={"ID":"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b","Type":"ContainerDied","Data":"a17e5911dd44a70eaddf965e56b21ab05149f56b96de7f20bf6f4c657c514884"} Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.889970 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.902748 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.902773 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" event={"ID":"2a59ab3d-3094-4e10-bbde-44479696f752","Type":"ContainerDied","Data":"adeda094c452ea454b57acc36b81655df6fbdea86bb257845c27b0e1e0656a6f"} Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906420 5136 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906442 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906451 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906460 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knrlt\" (UniqueName: \"kubernetes.io/projected/5d2085e7-db7e-4655-965c-027d03e474e0-kube-api-access-knrlt\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906469 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnkzp\" (UniqueName: \"kubernetes.io/projected/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-kube-api-access-gnkzp\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906477 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzlq9\" (UniqueName: 
\"kubernetes.io/projected/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-kube-api-access-jzlq9\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906485 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906494 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906503 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906510 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906518 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906526 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906533 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 
07:15:54.906540 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.907269 5136 scope.go:117] "RemoveContainer" containerID="e19dbd1e39e5efc5a6a0b99e7790d9fb9e1136146e9bec8dcf45b6006114f4bf" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.928374 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"960739f0-c4a5-49c6-8e2a-9452815cf1a9","Type":"ContainerDied","Data":"c417f48a18610bbcb3a324c0dd0cc757f54ca7629176ea2441d7c501e41142ce"} Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.928493 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.937147 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-65ccfb89b4-s479g"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.951987 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27a464a7-cea7-4265-a264-85a991452e95","Type":"ContainerDied","Data":"a213c0799494e4283f552e4529c929904c7d07c101510facaefb1e2a3e99ab9c"} Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.952216 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.956085 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mzns4" event={"ID":"5d2085e7-db7e-4655-965c-027d03e474e0","Type":"ContainerDied","Data":"2b8d445e4425096daf41465721adf2ee58e490471ea6782e4e955f4d28582fd2"} Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.956216 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mzns4" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.958593 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"23c10323-3c49-4f00-8bf7-319e6f5834d0","Type":"ContainerDied","Data":"bb7d36a7a8740f26ea095cd64110aa6bbc822a4f8057c379b5b6f3c659fbfbb2"} Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.958652 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.958714 5136 generic.go:334] "Generic (PLEG): container finished" podID="23c10323-3c49-4f00-8bf7-319e6f5834d0" containerID="bb7d36a7a8740f26ea095cd64110aa6bbc822a4f8057c379b5b6f3c659fbfbb2" exitCode=0 Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.958843 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"23c10323-3c49-4f00-8bf7-319e6f5834d0","Type":"ContainerDied","Data":"472fbef87977bdfc11603315d743a17729300016a4f32222d159ed871e8ca38d"} Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.973681 5136 scope.go:117] "RemoveContainer" containerID="91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.974760 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-65ccfb89b4-s479g"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.978547 5136 generic.go:334] "Generic (PLEG): container finished" podID="6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" containerID="dd15344d154a5f0b2c84bd203f89400d82384bc6a127a0645edaeb2339b3bbd0" exitCode=0 Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.978649 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.981967 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-78df67c79-bqz8t" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.981992 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78df67c79-bqz8t" event={"ID":"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0","Type":"ContainerDied","Data":"dd15344d154a5f0b2c84bd203f89400d82384bc6a127a0645edaeb2339b3bbd0"} Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.982053 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78df67c79-bqz8t" event={"ID":"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0","Type":"ContainerDied","Data":"87dc6bb8fc1b9abd24b71389abdb4a22e7af9a9d787041070ce4c3a66cfdd142"} Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.985632 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.995015 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.013109 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-64845646dd-wf28v"] Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.022147 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-64845646dd-wf28v"] Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.026624 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.029190 5136 scope.go:117] "RemoveContainer" containerID="e19dbd1e39e5efc5a6a0b99e7790d9fb9e1136146e9bec8dcf45b6006114f4bf" Mar 20 07:15:55 crc kubenswrapper[5136]: E0320 07:15:55.031683 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"e19dbd1e39e5efc5a6a0b99e7790d9fb9e1136146e9bec8dcf45b6006114f4bf\": container with ID starting with e19dbd1e39e5efc5a6a0b99e7790d9fb9e1136146e9bec8dcf45b6006114f4bf not found: ID does not exist" containerID="e19dbd1e39e5efc5a6a0b99e7790d9fb9e1136146e9bec8dcf45b6006114f4bf" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.031801 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e19dbd1e39e5efc5a6a0b99e7790d9fb9e1136146e9bec8dcf45b6006114f4bf"} err="failed to get container status \"e19dbd1e39e5efc5a6a0b99e7790d9fb9e1136146e9bec8dcf45b6006114f4bf\": rpc error: code = NotFound desc = could not find container \"e19dbd1e39e5efc5a6a0b99e7790d9fb9e1136146e9bec8dcf45b6006114f4bf\": container with ID starting with e19dbd1e39e5efc5a6a0b99e7790d9fb9e1136146e9bec8dcf45b6006114f4bf not found: ID does not exist" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.031907 5136 scope.go:117] "RemoveContainer" containerID="91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242" Mar 20 07:15:55 crc kubenswrapper[5136]: E0320 07:15:55.032297 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242\": container with ID starting with 91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242 not found: ID does not exist" containerID="91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.032338 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242"} err="failed to get container status \"91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242\": rpc error: code = NotFound desc = could not find container 
\"91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242\": container with ID starting with 91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242 not found: ID does not exist" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.032366 5136 scope.go:117] "RemoveContainer" containerID="b31f76eec4146fd8f445255e6551fe34e753f46896981935d5fb62c7bb4ad9cd" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.036339 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.044144 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.055554 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.069548 5136 scope.go:117] "RemoveContainer" containerID="4829942c2c5a456ff9cc10622e102ac04c1aa13d5b8679ee118976d4497d7536" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.071354 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.078024 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.090021 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-mzns4"] Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.092083 5136 scope.go:117] "RemoveContainer" containerID="b31f76eec4146fd8f445255e6551fe34e753f46896981935d5fb62c7bb4ad9cd" Mar 20 07:15:55 crc kubenswrapper[5136]: E0320 07:15:55.092509 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b31f76eec4146fd8f445255e6551fe34e753f46896981935d5fb62c7bb4ad9cd\": container with ID starting with 
b31f76eec4146fd8f445255e6551fe34e753f46896981935d5fb62c7bb4ad9cd not found: ID does not exist" containerID="b31f76eec4146fd8f445255e6551fe34e753f46896981935d5fb62c7bb4ad9cd" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.092541 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b31f76eec4146fd8f445255e6551fe34e753f46896981935d5fb62c7bb4ad9cd"} err="failed to get container status \"b31f76eec4146fd8f445255e6551fe34e753f46896981935d5fb62c7bb4ad9cd\": rpc error: code = NotFound desc = could not find container \"b31f76eec4146fd8f445255e6551fe34e753f46896981935d5fb62c7bb4ad9cd\": container with ID starting with b31f76eec4146fd8f445255e6551fe34e753f46896981935d5fb62c7bb4ad9cd not found: ID does not exist" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.092561 5136 scope.go:117] "RemoveContainer" containerID="4829942c2c5a456ff9cc10622e102ac04c1aa13d5b8679ee118976d4497d7536" Mar 20 07:15:55 crc kubenswrapper[5136]: E0320 07:15:55.092867 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4829942c2c5a456ff9cc10622e102ac04c1aa13d5b8679ee118976d4497d7536\": container with ID starting with 4829942c2c5a456ff9cc10622e102ac04c1aa13d5b8679ee118976d4497d7536 not found: ID does not exist" containerID="4829942c2c5a456ff9cc10622e102ac04c1aa13d5b8679ee118976d4497d7536" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.092888 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4829942c2c5a456ff9cc10622e102ac04c1aa13d5b8679ee118976d4497d7536"} err="failed to get container status \"4829942c2c5a456ff9cc10622e102ac04c1aa13d5b8679ee118976d4497d7536\": rpc error: code = NotFound desc = could not find container \"4829942c2c5a456ff9cc10622e102ac04c1aa13d5b8679ee118976d4497d7536\": container with ID starting with 4829942c2c5a456ff9cc10622e102ac04c1aa13d5b8679ee118976d4497d7536 not found: ID does not 
exist" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.092901 5136 scope.go:117] "RemoveContainer" containerID="cd65718bfac09f4d934fe1bf3f629f5d852e12343f9a1b480d6984e7497c79aa" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.105391 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-mzns4"] Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.110859 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-dc8db4fdb-hpjdg"] Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.116771 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-dc8db4fdb-hpjdg"] Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.121629 5136 scope.go:117] "RemoveContainer" containerID="afa35db5921ff57fdde3528ca1cd9c650dbf2f2ac6c46cf9723cca19a0edb997" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.121778 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-78df67c79-bqz8t"] Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.126596 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-78df67c79-bqz8t"] Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.144263 5136 scope.go:117] "RemoveContainer" containerID="184e304c1ac08ec0deea0a800adeaeabbaf3a333a8f4d43b893a721e45afd9b7" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.215376 5136 scope.go:117] "RemoveContainer" containerID="6f8eb1aeebd08bbda86b110a89e3d6395071812ae31b73c86592f671595b894d" Mar 20 07:15:55 crc kubenswrapper[5136]: E0320 07:15:55.286443 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f56d00c5a5534f7bc12a714ef821f350a8f4afb4f1d3b31f9016e24b955644f9" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 
07:15:55.297220 5136 scope.go:117] "RemoveContainer" containerID="37086a66c3062e12cadb5382a0b51ad5a523fc39db2e404a6feda0518d0eb230" Mar 20 07:15:55 crc kubenswrapper[5136]: E0320 07:15:55.308693 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f56d00c5a5534f7bc12a714ef821f350a8f4afb4f1d3b31f9016e24b955644f9" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 20 07:15:55 crc kubenswrapper[5136]: E0320 07:15:55.310616 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f56d00c5a5534f7bc12a714ef821f350a8f4afb4f1d3b31f9016e24b955644f9" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 20 07:15:55 crc kubenswrapper[5136]: E0320 07:15:55.310680 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="52463352-7504-47a4-92e5-d672bab85574" containerName="nova-cell1-conductor-conductor" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.315262 5136 scope.go:117] "RemoveContainer" containerID="cf4672bac844a81b21416c7a8623ac1f87041db75209a7a401cb201726b76413" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.341873 5136 scope.go:117] "RemoveContainer" containerID="0ecf229966ba8c79d4898c6f188447ddf715aa5d36d580596d93cecd4aca45f3" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.368377 5136 scope.go:117] "RemoveContainer" containerID="7950202bc7e7645f213c50f85961805c7e38b2378de8c350f722fea9bf137e17" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.397994 5136 scope.go:117] "RemoveContainer" 
containerID="bb7d36a7a8740f26ea095cd64110aa6bbc822a4f8057c379b5b6f3c659fbfbb2" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.450624 5136 scope.go:117] "RemoveContainer" containerID="1fc02239dd53f1d667b671e4bd013df8e870f1112190dc7d2376ed6447168080" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.474028 5136 scope.go:117] "RemoveContainer" containerID="bb7d36a7a8740f26ea095cd64110aa6bbc822a4f8057c379b5b6f3c659fbfbb2" Mar 20 07:15:55 crc kubenswrapper[5136]: E0320 07:15:55.475172 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb7d36a7a8740f26ea095cd64110aa6bbc822a4f8057c379b5b6f3c659fbfbb2\": container with ID starting with bb7d36a7a8740f26ea095cd64110aa6bbc822a4f8057c379b5b6f3c659fbfbb2 not found: ID does not exist" containerID="bb7d36a7a8740f26ea095cd64110aa6bbc822a4f8057c379b5b6f3c659fbfbb2" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.475201 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb7d36a7a8740f26ea095cd64110aa6bbc822a4f8057c379b5b6f3c659fbfbb2"} err="failed to get container status \"bb7d36a7a8740f26ea095cd64110aa6bbc822a4f8057c379b5b6f3c659fbfbb2\": rpc error: code = NotFound desc = could not find container \"bb7d36a7a8740f26ea095cd64110aa6bbc822a4f8057c379b5b6f3c659fbfbb2\": container with ID starting with bb7d36a7a8740f26ea095cd64110aa6bbc822a4f8057c379b5b6f3c659fbfbb2 not found: ID does not exist" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.475220 5136 scope.go:117] "RemoveContainer" containerID="1fc02239dd53f1d667b671e4bd013df8e870f1112190dc7d2376ed6447168080" Mar 20 07:15:55 crc kubenswrapper[5136]: E0320 07:15:55.475562 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fc02239dd53f1d667b671e4bd013df8e870f1112190dc7d2376ed6447168080\": container with ID starting with 
1fc02239dd53f1d667b671e4bd013df8e870f1112190dc7d2376ed6447168080 not found: ID does not exist" containerID="1fc02239dd53f1d667b671e4bd013df8e870f1112190dc7d2376ed6447168080" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.475583 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fc02239dd53f1d667b671e4bd013df8e870f1112190dc7d2376ed6447168080"} err="failed to get container status \"1fc02239dd53f1d667b671e4bd013df8e870f1112190dc7d2376ed6447168080\": rpc error: code = NotFound desc = could not find container \"1fc02239dd53f1d667b671e4bd013df8e870f1112190dc7d2376ed6447168080\": container with ID starting with 1fc02239dd53f1d667b671e4bd013df8e870f1112190dc7d2376ed6447168080 not found: ID does not exist" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.475594 5136 scope.go:117] "RemoveContainer" containerID="dd15344d154a5f0b2c84bd203f89400d82384bc6a127a0645edaeb2339b3bbd0" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.505896 5136 scope.go:117] "RemoveContainer" containerID="afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.532438 5136 scope.go:117] "RemoveContainer" containerID="dd15344d154a5f0b2c84bd203f89400d82384bc6a127a0645edaeb2339b3bbd0" Mar 20 07:15:55 crc kubenswrapper[5136]: E0320 07:15:55.538221 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd15344d154a5f0b2c84bd203f89400d82384bc6a127a0645edaeb2339b3bbd0\": container with ID starting with dd15344d154a5f0b2c84bd203f89400d82384bc6a127a0645edaeb2339b3bbd0 not found: ID does not exist" containerID="dd15344d154a5f0b2c84bd203f89400d82384bc6a127a0645edaeb2339b3bbd0" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.538261 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd15344d154a5f0b2c84bd203f89400d82384bc6a127a0645edaeb2339b3bbd0"} 
err="failed to get container status \"dd15344d154a5f0b2c84bd203f89400d82384bc6a127a0645edaeb2339b3bbd0\": rpc error: code = NotFound desc = could not find container \"dd15344d154a5f0b2c84bd203f89400d82384bc6a127a0645edaeb2339b3bbd0\": container with ID starting with dd15344d154a5f0b2c84bd203f89400d82384bc6a127a0645edaeb2339b3bbd0 not found: ID does not exist" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.538282 5136 scope.go:117] "RemoveContainer" containerID="afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec" Mar 20 07:15:55 crc kubenswrapper[5136]: E0320 07:15:55.539578 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec\": container with ID starting with afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec not found: ID does not exist" containerID="afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.539622 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec"} err="failed to get container status \"afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec\": rpc error: code = NotFound desc = could not find container \"afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec\": container with ID starting with afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec not found: ID does not exist" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.578058 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.636960 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-credential-keys\") pod \"fab90141-26b4-4e46-a916-82190508d6e8\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.637009 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-698x2\" (UniqueName: \"kubernetes.io/projected/fab90141-26b4-4e46-a916-82190508d6e8-kube-api-access-698x2\") pod \"fab90141-26b4-4e46-a916-82190508d6e8\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.637028 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-internal-tls-certs\") pod \"fab90141-26b4-4e46-a916-82190508d6e8\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.637046 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-combined-ca-bundle\") pod \"fab90141-26b4-4e46-a916-82190508d6e8\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.637065 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-fernet-keys\") pod \"fab90141-26b4-4e46-a916-82190508d6e8\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.637100 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-config-data\") pod \"fab90141-26b4-4e46-a916-82190508d6e8\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.637118 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-public-tls-certs\") pod \"fab90141-26b4-4e46-a916-82190508d6e8\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.637210 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-scripts\") pod \"fab90141-26b4-4e46-a916-82190508d6e8\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.641440 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-scripts" (OuterVolumeSpecName: "scripts") pod "fab90141-26b4-4e46-a916-82190508d6e8" (UID: "fab90141-26b4-4e46-a916-82190508d6e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.641487 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fab90141-26b4-4e46-a916-82190508d6e8-kube-api-access-698x2" (OuterVolumeSpecName: "kube-api-access-698x2") pod "fab90141-26b4-4e46-a916-82190508d6e8" (UID: "fab90141-26b4-4e46-a916-82190508d6e8"). InnerVolumeSpecName "kube-api-access-698x2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.642015 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fab90141-26b4-4e46-a916-82190508d6e8" (UID: "fab90141-26b4-4e46-a916-82190508d6e8"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.642542 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fab90141-26b4-4e46-a916-82190508d6e8" (UID: "fab90141-26b4-4e46-a916-82190508d6e8"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.658152 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fab90141-26b4-4e46-a916-82190508d6e8" (UID: "fab90141-26b4-4e46-a916-82190508d6e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.659502 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-config-data" (OuterVolumeSpecName: "config-data") pod "fab90141-26b4-4e46-a916-82190508d6e8" (UID: "fab90141-26b4-4e46-a916-82190508d6e8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.678121 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fab90141-26b4-4e46-a916-82190508d6e8" (UID: "fab90141-26b4-4e46-a916-82190508d6e8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.684549 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fab90141-26b4-4e46-a916-82190508d6e8" (UID: "fab90141-26b4-4e46-a916-82190508d6e8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.738607 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.738640 5136 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.738653 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-698x2\" (UniqueName: \"kubernetes.io/projected/fab90141-26b4-4e46-a916-82190508d6e8-kube-api-access-698x2\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.738661 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-internal-tls-certs\") on node \"crc\" 
DevicePath \"\"" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.738669 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.738677 5136 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.738685 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.738693 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.786325 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gnwt6" podUID="04ee32c0-35eb-488d-b166-0ad8a8d09f48" containerName="ovn-controller" probeResult="failure" output="command timed out" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.823083 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gnwt6" podUID="04ee32c0-35eb-488d-b166-0ad8a8d09f48" containerName="ovn-controller" probeResult="failure" output=< Mar 20 07:15:55 crc kubenswrapper[5136]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Mar 20 07:15:55 crc kubenswrapper[5136]: > Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.994255 5136 generic.go:334] "Generic (PLEG): container finished" podID="fab90141-26b4-4e46-a916-82190508d6e8" 
containerID="55a5dab58a97a1d0db07eef0ac66d273a9614c32013fdde5757d7e6aa9ea047e" exitCode=0 Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.994305 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.994324 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-766d94c967-pb9qd" event={"ID":"fab90141-26b4-4e46-a916-82190508d6e8","Type":"ContainerDied","Data":"55a5dab58a97a1d0db07eef0ac66d273a9614c32013fdde5757d7e6aa9ea047e"} Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.994376 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-766d94c967-pb9qd" event={"ID":"fab90141-26b4-4e46-a916-82190508d6e8","Type":"ContainerDied","Data":"c19785656f47dd95cc1a27542636229f68d56209966c28654c4de9baa2a90613"} Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.994399 5136 scope.go:117] "RemoveContainer" containerID="55a5dab58a97a1d0db07eef0ac66d273a9614c32013fdde5757d7e6aa9ea047e" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.028574 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-766d94c967-pb9qd"] Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.032216 5136 scope.go:117] "RemoveContainer" containerID="55a5dab58a97a1d0db07eef0ac66d273a9614c32013fdde5757d7e6aa9ea047e" Mar 20 07:15:56 crc kubenswrapper[5136]: E0320 07:15:56.033080 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55a5dab58a97a1d0db07eef0ac66d273a9614c32013fdde5757d7e6aa9ea047e\": container with ID starting with 55a5dab58a97a1d0db07eef0ac66d273a9614c32013fdde5757d7e6aa9ea047e not found: ID does not exist" containerID="55a5dab58a97a1d0db07eef0ac66d273a9614c32013fdde5757d7e6aa9ea047e" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.033119 5136 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"55a5dab58a97a1d0db07eef0ac66d273a9614c32013fdde5757d7e6aa9ea047e"} err="failed to get container status \"55a5dab58a97a1d0db07eef0ac66d273a9614c32013fdde5757d7e6aa9ea047e\": rpc error: code = NotFound desc = could not find container \"55a5dab58a97a1d0db07eef0ac66d273a9614c32013fdde5757d7e6aa9ea047e\": container with ID starting with 55a5dab58a97a1d0db07eef0ac66d273a9614c32013fdde5757d7e6aa9ea047e not found: ID does not exist" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.034883 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-766d94c967-pb9qd"] Mar 20 07:15:56 crc kubenswrapper[5136]: E0320 07:15:56.035657 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="103dd4a533822b5106c261bfa1f3d9201de7d9327b3b2827802ee9cd5b825fc5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 07:15:56 crc kubenswrapper[5136]: E0320 07:15:56.036837 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="103dd4a533822b5106c261bfa1f3d9201de7d9327b3b2827802ee9cd5b825fc5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 07:15:56 crc kubenswrapper[5136]: E0320 07:15:56.037778 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="103dd4a533822b5106c261bfa1f3d9201de7d9327b3b2827802ee9cd5b825fc5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 07:15:56 crc kubenswrapper[5136]: E0320 07:15:56.037839 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f2e8f54f-5434-4cf0-94b9-38648bf7ba77" containerName="nova-scheduler-scheduler" Mar 20 07:15:56 crc kubenswrapper[5136]: E0320 07:15:56.244783 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 20 07:15:56 crc kubenswrapper[5136]: E0320 07:15:56.244875 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data podName:c355061d-c5fd-4655-aa7e-37b5a40a0400 nodeName:}" failed. No retries permitted until 2026-03-20 07:16:04.244859888 +0000 UTC m=+1596.504171039 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data") pod "rabbitmq-cell1-server-0" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400") : configmap "rabbitmq-cell1-config-data" not found Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.409928 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="141e5942-2bf9-424c-a6a7-7c93afdad7dc" path="/var/lib/kubelet/pods/141e5942-2bf9-424c-a6a7-7c93afdad7dc/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.410800 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17669c27-ef49-4ced-a620-ef7394f02110" path="/var/lib/kubelet/pods/17669c27-ef49-4ced-a620-ef7394f02110/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.411373 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23c10323-3c49-4f00-8bf7-319e6f5834d0" path="/var/lib/kubelet/pods/23c10323-3c49-4f00-8bf7-319e6f5834d0/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.412714 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27a464a7-cea7-4265-a264-85a991452e95" 
path="/var/lib/kubelet/pods/27a464a7-cea7-4265-a264-85a991452e95/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.413588 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a59ab3d-3094-4e10-bbde-44479696f752" path="/var/lib/kubelet/pods/2a59ab3d-3094-4e10-bbde-44479696f752/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.414802 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d2085e7-db7e-4655-965c-027d03e474e0" path="/var/lib/kubelet/pods/5d2085e7-db7e-4655-965c-027d03e474e0/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.415394 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6638ac71-bcca-4dbb-9ec3-d9ef0da336db" path="/var/lib/kubelet/pods/6638ac71-bcca-4dbb-9ec3-d9ef0da336db/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.415860 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" path="/var/lib/kubelet/pods/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.416626 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7acbc76f-ff83-451e-826f-5fd1f977f74f" path="/var/lib/kubelet/pods/7acbc76f-ff83-451e-826f-5fd1f977f74f/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.418035 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="960739f0-c4a5-49c6-8e2a-9452815cf1a9" path="/var/lib/kubelet/pods/960739f0-c4a5-49c6-8e2a-9452815cf1a9/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.418673 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dc2d320-2468-4a45-ba6b-69ea478b5e8c" path="/var/lib/kubelet/pods/9dc2d320-2468-4a45-ba6b-69ea478b5e8c/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.419960 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd71646c-cb64-4a01-8076-449c812955d5" 
path="/var/lib/kubelet/pods/bd71646c-cb64-4a01-8076-449c812955d5/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.420683 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c17493c5-d958-46ab-8e02-d190b2fa6944" path="/var/lib/kubelet/pods/c17493c5-d958-46ab-8e02-d190b2fa6944/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.421208 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7" path="/var/lib/kubelet/pods/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.421597 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" path="/var/lib/kubelet/pods/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.422832 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fab90141-26b4-4e46-a916-82190508d6e8" path="/var/lib/kubelet/pods/fab90141-26b4-4e46-a916-82190508d6e8/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.423477 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe20adf9-d6e2-4487-a176-32ddd55eb051" path="/var/lib/kubelet/pods/fe20adf9-d6e2-4487-a176-32ddd55eb051/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: E0320 07:15:56.802346 5136 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Mar 20 07:15:56 crc kubenswrapper[5136]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-03-20T07:15:49Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Mar 20 07:15:56 crc kubenswrapper[5136]: /etc/init.d/functions: line 589: 379 Alarm clock "$@" Mar 20 07:15:56 crc kubenswrapper[5136]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-gnwt6" message=< Mar 20 07:15:56 crc kubenswrapper[5136]: Exiting 
ovn-controller (1) [FAILED] Mar 20 07:15:56 crc kubenswrapper[5136]: Killing ovn-controller (1) [ OK ] Mar 20 07:15:56 crc kubenswrapper[5136]: 2026-03-20T07:15:49Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Mar 20 07:15:56 crc kubenswrapper[5136]: /etc/init.d/functions: line 589: 379 Alarm clock "$@" Mar 20 07:15:56 crc kubenswrapper[5136]: > Mar 20 07:15:56 crc kubenswrapper[5136]: E0320 07:15:56.803986 5136 kuberuntime_container.go:691] "PreStop hook failed" err=< Mar 20 07:15:56 crc kubenswrapper[5136]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-03-20T07:15:49Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Mar 20 07:15:56 crc kubenswrapper[5136]: /etc/init.d/functions: line 589: 379 Alarm clock "$@" Mar 20 07:15:56 crc kubenswrapper[5136]: > pod="openstack/ovn-controller-gnwt6" podUID="04ee32c0-35eb-488d-b166-0ad8a8d09f48" containerName="ovn-controller" containerID="cri-o://a111f866aa708f4a724ab9b641db43d52d756bb1dc91884a9311a1d65141faff" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.804034 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-gnwt6" podUID="04ee32c0-35eb-488d-b166-0ad8a8d09f48" containerName="ovn-controller" containerID="cri-o://a111f866aa708f4a724ab9b641db43d52d756bb1dc91884a9311a1d65141faff" gracePeriod=22 Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.933903 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.934636 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957322 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-plugins\") pod \"c355061d-c5fd-4655-aa7e-37b5a40a0400\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957374 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-erlang-cookie\") pod \"261514f8-7734-423d-b15a-e83fdc2a85fd\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957420 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-tls\") pod \"c355061d-c5fd-4655-aa7e-37b5a40a0400\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957448 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c355061d-c5fd-4655-aa7e-37b5a40a0400-pod-info\") pod \"c355061d-c5fd-4655-aa7e-37b5a40a0400\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957470 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-plugins\") pod \"261514f8-7734-423d-b15a-e83fdc2a85fd\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957498 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-confd\") pod \"c355061d-c5fd-4655-aa7e-37b5a40a0400\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957523 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-confd\") pod \"261514f8-7734-423d-b15a-e83fdc2a85fd\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957542 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skh55\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-kube-api-access-skh55\") pod \"c355061d-c5fd-4655-aa7e-37b5a40a0400\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957571 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-server-conf\") pod \"261514f8-7734-423d-b15a-e83fdc2a85fd\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957588 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/261514f8-7734-423d-b15a-e83fdc2a85fd-erlang-cookie-secret\") pod \"261514f8-7734-423d-b15a-e83fdc2a85fd\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957610 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-plugins-conf\") pod \"261514f8-7734-423d-b15a-e83fdc2a85fd\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " Mar 20 
07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957630 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p49dt\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-kube-api-access-p49dt\") pod \"261514f8-7734-423d-b15a-e83fdc2a85fd\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957657 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data\") pod \"c355061d-c5fd-4655-aa7e-37b5a40a0400\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957683 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-plugins-conf\") pod \"c355061d-c5fd-4655-aa7e-37b5a40a0400\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957713 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-server-conf\") pod \"c355061d-c5fd-4655-aa7e-37b5a40a0400\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957743 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-config-data\") pod \"261514f8-7734-423d-b15a-e83fdc2a85fd\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957768 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-tls\") pod \"261514f8-7734-423d-b15a-e83fdc2a85fd\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957847 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"261514f8-7734-423d-b15a-e83fdc2a85fd\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957847 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "c355061d-c5fd-4655-aa7e-37b5a40a0400" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957874 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c355061d-c5fd-4655-aa7e-37b5a40a0400-erlang-cookie-secret\") pod \"c355061d-c5fd-4655-aa7e-37b5a40a0400\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957895 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/261514f8-7734-423d-b15a-e83fdc2a85fd-pod-info\") pod \"261514f8-7734-423d-b15a-e83fdc2a85fd\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957911 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"c355061d-c5fd-4655-aa7e-37b5a40a0400\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " Mar 20 07:15:56 crc 
kubenswrapper[5136]: I0320 07:15:56.957938 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-erlang-cookie\") pod \"c355061d-c5fd-4655-aa7e-37b5a40a0400\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.958177 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "261514f8-7734-423d-b15a-e83fdc2a85fd" (UID: "261514f8-7734-423d-b15a-e83fdc2a85fd"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.958210 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.958620 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "261514f8-7734-423d-b15a-e83fdc2a85fd" (UID: "261514f8-7734-423d-b15a-e83fdc2a85fd"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.959403 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "c355061d-c5fd-4655-aa7e-37b5a40a0400" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.960128 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "261514f8-7734-423d-b15a-e83fdc2a85fd" (UID: "261514f8-7734-423d-b15a-e83fdc2a85fd"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.973255 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "c355061d-c5fd-4655-aa7e-37b5a40a0400" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.981343 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-kube-api-access-p49dt" (OuterVolumeSpecName: "kube-api-access-p49dt") pod "261514f8-7734-423d-b15a-e83fdc2a85fd" (UID: "261514f8-7734-423d-b15a-e83fdc2a85fd"). InnerVolumeSpecName "kube-api-access-p49dt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.981521 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/261514f8-7734-423d-b15a-e83fdc2a85fd-pod-info" (OuterVolumeSpecName: "pod-info") pod "261514f8-7734-423d-b15a-e83fdc2a85fd" (UID: "261514f8-7734-423d-b15a-e83fdc2a85fd"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.982355 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-kube-api-access-skh55" (OuterVolumeSpecName: "kube-api-access-skh55") pod "c355061d-c5fd-4655-aa7e-37b5a40a0400" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400"). InnerVolumeSpecName "kube-api-access-skh55". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.983034 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "261514f8-7734-423d-b15a-e83fdc2a85fd" (UID: "261514f8-7734-423d-b15a-e83fdc2a85fd"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.983209 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/c355061d-c5fd-4655-aa7e-37b5a40a0400-pod-info" (OuterVolumeSpecName: "pod-info") pod "c355061d-c5fd-4655-aa7e-37b5a40a0400" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.983358 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "261514f8-7734-423d-b15a-e83fdc2a85fd" (UID: "261514f8-7734-423d-b15a-e83fdc2a85fd"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.985152 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "c355061d-c5fd-4655-aa7e-37b5a40a0400" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.986864 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c355061d-c5fd-4655-aa7e-37b5a40a0400-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "c355061d-c5fd-4655-aa7e-37b5a40a0400" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.005055 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "c355061d-c5fd-4655-aa7e-37b5a40a0400" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.008937 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261514f8-7734-423d-b15a-e83fdc2a85fd-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "261514f8-7734-423d-b15a-e83fdc2a85fd" (UID: "261514f8-7734-423d-b15a-e83fdc2a85fd"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.017144 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data" (OuterVolumeSpecName: "config-data") pod "c355061d-c5fd-4655-aa7e-37b5a40a0400" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.028525 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-config-data" (OuterVolumeSpecName: "config-data") pod "261514f8-7734-423d-b15a-e83fdc2a85fd" (UID: "261514f8-7734-423d-b15a-e83fdc2a85fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.032906 5136 generic.go:334] "Generic (PLEG): container finished" podID="c355061d-c5fd-4655-aa7e-37b5a40a0400" containerID="ff4b7aff265183005eb642c68e8667907af1fc5245ae33df31209cf856166357" exitCode=0 Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.033046 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c355061d-c5fd-4655-aa7e-37b5a40a0400","Type":"ContainerDied","Data":"ff4b7aff265183005eb642c68e8667907af1fc5245ae33df31209cf856166357"} Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.033074 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c355061d-c5fd-4655-aa7e-37b5a40a0400","Type":"ContainerDied","Data":"22b2668fe332b62f7864af2d759b5866cf033333320267d52cb7cec04a426bd9"} Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.033069 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.033111 5136 scope.go:117] "RemoveContainer" containerID="ff4b7aff265183005eb642c68e8667907af1fc5245ae33df31209cf856166357" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.035432 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-gnwt6_04ee32c0-35eb-488d-b166-0ad8a8d09f48/ovn-controller/0.log" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.035465 5136 generic.go:334] "Generic (PLEG): container finished" podID="04ee32c0-35eb-488d-b166-0ad8a8d09f48" containerID="a111f866aa708f4a724ab9b641db43d52d756bb1dc91884a9311a1d65141faff" exitCode=139 Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.035515 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gnwt6" event={"ID":"04ee32c0-35eb-488d-b166-0ad8a8d09f48","Type":"ContainerDied","Data":"a111f866aa708f4a724ab9b641db43d52d756bb1dc91884a9311a1d65141faff"} Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.053393 5136 generic.go:334] "Generic (PLEG): container finished" podID="261514f8-7734-423d-b15a-e83fdc2a85fd" containerID="b5366b3f1cab37ea1c4eff85c13a2c8a37fad4de97077b3f34ec7d7a1d871155" exitCode=0 Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.053438 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"261514f8-7734-423d-b15a-e83fdc2a85fd","Type":"ContainerDied","Data":"b5366b3f1cab37ea1c4eff85c13a2c8a37fad4de97077b3f34ec7d7a1d871155"} Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.053465 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"261514f8-7734-423d-b15a-e83fdc2a85fd","Type":"ContainerDied","Data":"3dd70fd22c8a29190bae59972f90dd4530137cb93e7e5a8ebd5a576dd4e2a33b"} Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.053537 5136 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.053956 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-server-conf" (OuterVolumeSpecName: "server-conf") pod "c355061d-c5fd-4655-aa7e-37b5a40a0400" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.058269 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-server-conf" (OuterVolumeSpecName: "server-conf") pod "261514f8-7734-423d-b15a-e83fdc2a85fd" (UID: "261514f8-7734-423d-b15a-e83fdc2a85fd"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059110 5136 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-server-conf\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059129 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059139 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059167 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Mar 20 07:15:57 crc 
kubenswrapper[5136]: I0320 07:15:57.059176 5136 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c355061d-c5fd-4655-aa7e-37b5a40a0400-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059186 5136 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/261514f8-7734-423d-b15a-e83fdc2a85fd-pod-info\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059200 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059209 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059217 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059225 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059232 5136 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c355061d-c5fd-4655-aa7e-37b5a40a0400-pod-info\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059240 5136 reconciler_common.go:293] "Volume detached for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059248 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skh55\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-kube-api-access-skh55\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059256 5136 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-server-conf\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059264 5136 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/261514f8-7734-423d-b15a-e83fdc2a85fd-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059272 5136 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059280 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p49dt\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-kube-api-access-p49dt\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059288 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059297 5136 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.061546 5136 scope.go:117] "RemoveContainer" containerID="746322d2db71676f9bd31243b4275dc45e3e2e6e4004a673722e12fe50a71b71" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.073702 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "c355061d-c5fd-4655-aa7e-37b5a40a0400" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.073923 5136 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.074509 5136 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.081946 5136 scope.go:117] "RemoveContainer" containerID="ff4b7aff265183005eb642c68e8667907af1fc5245ae33df31209cf856166357" Mar 20 07:15:57 crc kubenswrapper[5136]: E0320 07:15:57.082655 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff4b7aff265183005eb642c68e8667907af1fc5245ae33df31209cf856166357\": container with ID starting with ff4b7aff265183005eb642c68e8667907af1fc5245ae33df31209cf856166357 not found: ID does not exist" containerID="ff4b7aff265183005eb642c68e8667907af1fc5245ae33df31209cf856166357" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.082700 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ff4b7aff265183005eb642c68e8667907af1fc5245ae33df31209cf856166357"} err="failed to get container status \"ff4b7aff265183005eb642c68e8667907af1fc5245ae33df31209cf856166357\": rpc error: code = NotFound desc = could not find container \"ff4b7aff265183005eb642c68e8667907af1fc5245ae33df31209cf856166357\": container with ID starting with ff4b7aff265183005eb642c68e8667907af1fc5245ae33df31209cf856166357 not found: ID does not exist" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.082728 5136 scope.go:117] "RemoveContainer" containerID="746322d2db71676f9bd31243b4275dc45e3e2e6e4004a673722e12fe50a71b71" Mar 20 07:15:57 crc kubenswrapper[5136]: E0320 07:15:57.084337 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"746322d2db71676f9bd31243b4275dc45e3e2e6e4004a673722e12fe50a71b71\": container with ID starting with 746322d2db71676f9bd31243b4275dc45e3e2e6e4004a673722e12fe50a71b71 not found: ID does not exist" containerID="746322d2db71676f9bd31243b4275dc45e3e2e6e4004a673722e12fe50a71b71" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.084374 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"746322d2db71676f9bd31243b4275dc45e3e2e6e4004a673722e12fe50a71b71"} err="failed to get container status \"746322d2db71676f9bd31243b4275dc45e3e2e6e4004a673722e12fe50a71b71\": rpc error: code = NotFound desc = could not find container \"746322d2db71676f9bd31243b4275dc45e3e2e6e4004a673722e12fe50a71b71\": container with ID starting with 746322d2db71676f9bd31243b4275dc45e3e2e6e4004a673722e12fe50a71b71 not found: ID does not exist" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.084398 5136 scope.go:117] "RemoveContainer" containerID="b5366b3f1cab37ea1c4eff85c13a2c8a37fad4de97077b3f34ec7d7a1d871155" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.099468 5136 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "261514f8-7734-423d-b15a-e83fdc2a85fd" (UID: "261514f8-7734-423d-b15a-e83fdc2a85fd"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.100850 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-gnwt6_04ee32c0-35eb-488d-b166-0ad8a8d09f48/ovn-controller/0.log" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.100909 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gnwt6" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.101294 5136 scope.go:117] "RemoveContainer" containerID="3fb57e476621bbb7ed332df0284e4781d9823f68927c324a91e0d733e1eccb29" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.120981 5136 scope.go:117] "RemoveContainer" containerID="b5366b3f1cab37ea1c4eff85c13a2c8a37fad4de97077b3f34ec7d7a1d871155" Mar 20 07:15:57 crc kubenswrapper[5136]: E0320 07:15:57.121431 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5366b3f1cab37ea1c4eff85c13a2c8a37fad4de97077b3f34ec7d7a1d871155\": container with ID starting with b5366b3f1cab37ea1c4eff85c13a2c8a37fad4de97077b3f34ec7d7a1d871155 not found: ID does not exist" containerID="b5366b3f1cab37ea1c4eff85c13a2c8a37fad4de97077b3f34ec7d7a1d871155" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.121460 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5366b3f1cab37ea1c4eff85c13a2c8a37fad4de97077b3f34ec7d7a1d871155"} err="failed to get container status \"b5366b3f1cab37ea1c4eff85c13a2c8a37fad4de97077b3f34ec7d7a1d871155\": rpc error: code = NotFound desc = could not find container 
\"b5366b3f1cab37ea1c4eff85c13a2c8a37fad4de97077b3f34ec7d7a1d871155\": container with ID starting with b5366b3f1cab37ea1c4eff85c13a2c8a37fad4de97077b3f34ec7d7a1d871155 not found: ID does not exist" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.121482 5136 scope.go:117] "RemoveContainer" containerID="3fb57e476621bbb7ed332df0284e4781d9823f68927c324a91e0d733e1eccb29" Mar 20 07:15:57 crc kubenswrapper[5136]: E0320 07:15:57.121681 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fb57e476621bbb7ed332df0284e4781d9823f68927c324a91e0d733e1eccb29\": container with ID starting with 3fb57e476621bbb7ed332df0284e4781d9823f68927c324a91e0d733e1eccb29 not found: ID does not exist" containerID="3fb57e476621bbb7ed332df0284e4781d9823f68927c324a91e0d733e1eccb29" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.121704 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fb57e476621bbb7ed332df0284e4781d9823f68927c324a91e0d733e1eccb29"} err="failed to get container status \"3fb57e476621bbb7ed332df0284e4781d9823f68927c324a91e0d733e1eccb29\": rpc error: code = NotFound desc = could not find container \"3fb57e476621bbb7ed332df0284e4781d9823f68927c324a91e0d733e1eccb29\": container with ID starting with 3fb57e476621bbb7ed332df0284e4781d9823f68927c324a91e0d733e1eccb29 not found: ID does not exist" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.160090 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/04ee32c0-35eb-488d-b166-0ad8a8d09f48-ovn-controller-tls-certs\") pod \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.160151 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-run-ovn\") pod \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.160173 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhfl7\" (UniqueName: \"kubernetes.io/projected/04ee32c0-35eb-488d-b166-0ad8a8d09f48-kube-api-access-xhfl7\") pod \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.160255 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04ee32c0-35eb-488d-b166-0ad8a8d09f48-scripts\") pod \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.160299 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "04ee32c0-35eb-488d-b166-0ad8a8d09f48" (UID: "04ee32c0-35eb-488d-b166-0ad8a8d09f48"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.160871 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04ee32c0-35eb-488d-b166-0ad8a8d09f48-combined-ca-bundle\") pod \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.160927 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-run\") pod \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.160954 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-log-ovn\") pod \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.161167 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-run" (OuterVolumeSpecName: "var-run") pod "04ee32c0-35eb-488d-b166-0ad8a8d09f48" (UID: "04ee32c0-35eb-488d-b166-0ad8a8d09f48"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.161219 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "04ee32c0-35eb-488d-b166-0ad8a8d09f48" (UID: "04ee32c0-35eb-488d-b166-0ad8a8d09f48"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.161359 5136 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.161501 5136 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.161569 5136 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.161504 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04ee32c0-35eb-488d-b166-0ad8a8d09f48-scripts" (OuterVolumeSpecName: "scripts") pod "04ee32c0-35eb-488d-b166-0ad8a8d09f48" (UID: "04ee32c0-35eb-488d-b166-0ad8a8d09f48"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.161646 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.161762 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.163520 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04ee32c0-35eb-488d-b166-0ad8a8d09f48-kube-api-access-xhfl7" (OuterVolumeSpecName: "kube-api-access-xhfl7") pod "04ee32c0-35eb-488d-b166-0ad8a8d09f48" (UID: "04ee32c0-35eb-488d-b166-0ad8a8d09f48"). InnerVolumeSpecName "kube-api-access-xhfl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.176651 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04ee32c0-35eb-488d-b166-0ad8a8d09f48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04ee32c0-35eb-488d-b166-0ad8a8d09f48" (UID: "04ee32c0-35eb-488d-b166-0ad8a8d09f48"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.213757 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04ee32c0-35eb-488d-b166-0ad8a8d09f48-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "04ee32c0-35eb-488d-b166-0ad8a8d09f48" (UID: "04ee32c0-35eb-488d-b166-0ad8a8d09f48"). InnerVolumeSpecName "ovn-controller-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.262925 5136 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/04ee32c0-35eb-488d-b166-0ad8a8d09f48-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.262948 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhfl7\" (UniqueName: \"kubernetes.io/projected/04ee32c0-35eb-488d-b166-0ad8a8d09f48-kube-api-access-xhfl7\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.262958 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04ee32c0-35eb-488d-b166-0ad8a8d09f48-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.262968 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04ee32c0-35eb-488d-b166-0ad8a8d09f48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.262976 5136 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-run\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.262984 5136 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.380213 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.397071 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 07:15:57 
crc kubenswrapper[5136]: I0320 07:15:57.410174 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.416994 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.078287 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-gnwt6_04ee32c0-35eb-488d-b166-0ad8a8d09f48/ovn-controller/0.log" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.078394 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gnwt6" event={"ID":"04ee32c0-35eb-488d-b166-0ad8a8d09f48","Type":"ContainerDied","Data":"969e50d91cdce234e3ebd25af89de94a9345b9463c4d70197f2dbbaa911c914f"} Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.078432 5136 scope.go:117] "RemoveContainer" containerID="a111f866aa708f4a724ab9b641db43d52d756bb1dc91884a9311a1d65141faff" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.078437 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gnwt6" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.083320 5136 generic.go:334] "Generic (PLEG): container finished" podID="f2e8f54f-5434-4cf0-94b9-38648bf7ba77" containerID="103dd4a533822b5106c261bfa1f3d9201de7d9327b3b2827802ee9cd5b825fc5" exitCode=0 Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.083362 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f2e8f54f-5434-4cf0-94b9-38648bf7ba77","Type":"ContainerDied","Data":"103dd4a533822b5106c261bfa1f3d9201de7d9327b3b2827802ee9cd5b825fc5"} Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.087262 5136 generic.go:334] "Generic (PLEG): container finished" podID="52463352-7504-47a4-92e5-d672bab85574" containerID="f56d00c5a5534f7bc12a714ef821f350a8f4afb4f1d3b31f9016e24b955644f9" exitCode=0 Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.087296 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"52463352-7504-47a4-92e5-d672bab85574","Type":"ContainerDied","Data":"f56d00c5a5534f7bc12a714ef821f350a8f4afb4f1d3b31f9016e24b955644f9"} Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.132873 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gnwt6"] Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.141996 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-gnwt6"] Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.158750 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="76d08c01-d488-4f36-9998-7f074633c7c5" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.169:8776/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.239736 5136 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.251891 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.403325 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-config-data\") pod \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\" (UID: \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\") " Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.403436 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-combined-ca-bundle\") pod \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\" (UID: \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\") " Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.403765 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52463352-7504-47a4-92e5-d672bab85574-config-data\") pod \"52463352-7504-47a4-92e5-d672bab85574\" (UID: \"52463352-7504-47a4-92e5-d672bab85574\") " Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.403844 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzjvz\" (UniqueName: \"kubernetes.io/projected/52463352-7504-47a4-92e5-d672bab85574-kube-api-access-dzjvz\") pod \"52463352-7504-47a4-92e5-d672bab85574\" (UID: \"52463352-7504-47a4-92e5-d672bab85574\") " Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.403871 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52463352-7504-47a4-92e5-d672bab85574-combined-ca-bundle\") pod \"52463352-7504-47a4-92e5-d672bab85574\" (UID: 
\"52463352-7504-47a4-92e5-d672bab85574\") " Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.403906 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbvkl\" (UniqueName: \"kubernetes.io/projected/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-kube-api-access-vbvkl\") pod \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\" (UID: \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\") " Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.405010 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04ee32c0-35eb-488d-b166-0ad8a8d09f48" path="/var/lib/kubelet/pods/04ee32c0-35eb-488d-b166-0ad8a8d09f48/volumes" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.405912 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="261514f8-7734-423d-b15a-e83fdc2a85fd" path="/var/lib/kubelet/pods/261514f8-7734-423d-b15a-e83fdc2a85fd/volumes" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.407374 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c355061d-c5fd-4655-aa7e-37b5a40a0400" path="/var/lib/kubelet/pods/c355061d-c5fd-4655-aa7e-37b5a40a0400/volumes" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.408487 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52463352-7504-47a4-92e5-d672bab85574-kube-api-access-dzjvz" (OuterVolumeSpecName: "kube-api-access-dzjvz") pod "52463352-7504-47a4-92e5-d672bab85574" (UID: "52463352-7504-47a4-92e5-d672bab85574"). InnerVolumeSpecName "kube-api-access-dzjvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.418257 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-kube-api-access-vbvkl" (OuterVolumeSpecName: "kube-api-access-vbvkl") pod "f2e8f54f-5434-4cf0-94b9-38648bf7ba77" (UID: "f2e8f54f-5434-4cf0-94b9-38648bf7ba77"). 
InnerVolumeSpecName "kube-api-access-vbvkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.429572 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-config-data" (OuterVolumeSpecName: "config-data") pod "f2e8f54f-5434-4cf0-94b9-38648bf7ba77" (UID: "f2e8f54f-5434-4cf0-94b9-38648bf7ba77"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.429591 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52463352-7504-47a4-92e5-d672bab85574-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52463352-7504-47a4-92e5-d672bab85574" (UID: "52463352-7504-47a4-92e5-d672bab85574"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.430189 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2e8f54f-5434-4cf0-94b9-38648bf7ba77" (UID: "f2e8f54f-5434-4cf0-94b9-38648bf7ba77"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.433088 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52463352-7504-47a4-92e5-d672bab85574-config-data" (OuterVolumeSpecName: "config-data") pod "52463352-7504-47a4-92e5-d672bab85574" (UID: "52463352-7504-47a4-92e5-d672bab85574"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.505136 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.505163 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52463352-7504-47a4-92e5-d672bab85574-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.505173 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzjvz\" (UniqueName: \"kubernetes.io/projected/52463352-7504-47a4-92e5-d672bab85574-kube-api-access-dzjvz\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.505186 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52463352-7504-47a4-92e5-d672bab85574-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.505194 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbvkl\" (UniqueName: \"kubernetes.io/projected/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-kube-api-access-vbvkl\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.505203 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.685374 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/memcached-0" podUID="960739f0-c4a5-49c6-8e2a-9452815cf1a9" containerName="memcached" probeResult="failure" output="dial tcp 10.217.0.107:11211: i/o timeout" Mar 20 07:15:59 crc 
kubenswrapper[5136]: I0320 07:15:59.083018 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-64845646dd-wf28v" podUID="f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.164:9311/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 07:15:59 crc kubenswrapper[5136]: I0320 07:15:59.083062 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-64845646dd-wf28v" podUID="f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.164:9311/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 07:15:59 crc kubenswrapper[5136]: I0320 07:15:59.097204 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f2e8f54f-5434-4cf0-94b9-38648bf7ba77","Type":"ContainerDied","Data":"cc1f222689540ab41cb0293a65f9305d971ad1f909b8b87d7d0b7c47db1a4f3a"} Mar 20 07:15:59 crc kubenswrapper[5136]: I0320 07:15:59.097248 5136 scope.go:117] "RemoveContainer" containerID="103dd4a533822b5106c261bfa1f3d9201de7d9327b3b2827802ee9cd5b825fc5" Mar 20 07:15:59 crc kubenswrapper[5136]: I0320 07:15:59.097243 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 07:15:59 crc kubenswrapper[5136]: I0320 07:15:59.098807 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 20 07:15:59 crc kubenswrapper[5136]: I0320 07:15:59.098823 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"52463352-7504-47a4-92e5-d672bab85574","Type":"ContainerDied","Data":"d25cabad936d4a8da77263639f37547fcf3ffbbafde65e2d7285a8e382e5513c"} Mar 20 07:15:59 crc kubenswrapper[5136]: I0320 07:15:59.118243 5136 scope.go:117] "RemoveContainer" containerID="f56d00c5a5534f7bc12a714ef821f350a8f4afb4f1d3b31f9016e24b955644f9" Mar 20 07:15:59 crc kubenswrapper[5136]: I0320 07:15:59.153397 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 20 07:15:59 crc kubenswrapper[5136]: I0320 07:15:59.170236 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 20 07:15:59 crc kubenswrapper[5136]: I0320 07:15:59.178101 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 07:15:59 crc kubenswrapper[5136]: I0320 07:15:59.183909 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 07:15:59 crc kubenswrapper[5136]: E0320 07:15:59.779892 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:15:59 crc kubenswrapper[5136]: E0320 07:15:59.780161 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" 
containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:15:59 crc kubenswrapper[5136]: E0320 07:15:59.780425 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:15:59 crc kubenswrapper[5136]: E0320 07:15:59.780446 5136 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovsdb-server" Mar 20 07:15:59 crc kubenswrapper[5136]: E0320 07:15:59.780826 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:15:59 crc kubenswrapper[5136]: E0320 07:15:59.781758 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:15:59 crc kubenswrapper[5136]: E0320 07:15:59.782780 5136 log.go:32] "ExecSync cmd from runtime service failed" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:15:59 crc kubenswrapper[5136]: E0320 07:15:59.782804 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovs-vswitchd" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127471 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566516-2dnr7"] Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127756 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a59ab3d-3094-4e10-bbde-44479696f752" containerName="barbican-keystone-listener-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127772 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a59ab3d-3094-4e10-bbde-44479696f752" containerName="barbican-keystone-listener-log" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127783 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe20adf9-d6e2-4487-a176-32ddd55eb051" containerName="glance-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127790 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe20adf9-d6e2-4487-a176-32ddd55eb051" containerName="glance-log" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127797 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52463352-7504-47a4-92e5-d672bab85574" containerName="nova-cell1-conductor-conductor" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127804 5136 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="52463352-7504-47a4-92e5-d672bab85574" containerName="nova-cell1-conductor-conductor" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127821 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fab90141-26b4-4e46-a916-82190508d6e8" containerName="keystone-api" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127827 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fab90141-26b4-4e46-a916-82190508d6e8" containerName="keystone-api" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127838 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d2085e7-db7e-4655-965c-027d03e474e0" containerName="mariadb-account-create-update" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127844 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d2085e7-db7e-4655-965c-027d03e474e0" containerName="mariadb-account-create-update" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127852 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="ceilometer-central-agent" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127857 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="ceilometer-central-agent" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127868 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c355061d-c5fd-4655-aa7e-37b5a40a0400" containerName="rabbitmq" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127874 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c355061d-c5fd-4655-aa7e-37b5a40a0400" containerName="rabbitmq" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127885 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23c10323-3c49-4f00-8bf7-319e6f5834d0" containerName="mysql-bootstrap" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127893 5136 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="23c10323-3c49-4f00-8bf7-319e6f5834d0" containerName="mysql-bootstrap" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127904 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23c10323-3c49-4f00-8bf7-319e6f5834d0" containerName="galera" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127910 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="23c10323-3c49-4f00-8bf7-319e6f5834d0" containerName="galera" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127920 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="960739f0-c4a5-49c6-8e2a-9452815cf1a9" containerName="memcached" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127926 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="960739f0-c4a5-49c6-8e2a-9452815cf1a9" containerName="memcached" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127934 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af66742a-1452-436f-a22e-7dc277cf690a" containerName="nova-metadata-metadata" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127940 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="af66742a-1452-436f-a22e-7dc277cf690a" containerName="nova-metadata-metadata" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127952 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d08c01-d488-4f36-9998-7f074633c7c5" containerName="cinder-api" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127959 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d08c01-d488-4f36-9998-7f074633c7c5" containerName="cinder-api" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127969 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7acbc76f-ff83-451e-826f-5fd1f977f74f" containerName="openstack-network-exporter" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127975 5136 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7acbc76f-ff83-451e-826f-5fd1f977f74f" containerName="openstack-network-exporter" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127986 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dc2d320-2468-4a45-ba6b-69ea478b5e8c" containerName="nova-api-api" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127994 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dc2d320-2468-4a45-ba6b-69ea478b5e8c" containerName="nova-api-api" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128003 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe20adf9-d6e2-4487-a176-32ddd55eb051" containerName="glance-httpd" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128010 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe20adf9-d6e2-4487-a176-32ddd55eb051" containerName="glance-httpd" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128022 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="ceilometer-notification-agent" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128030 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="ceilometer-notification-agent" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128041 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" containerName="barbican-worker-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128048 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" containerName="barbican-worker-log" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128058 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c355061d-c5fd-4655-aa7e-37b5a40a0400" containerName="setup-container" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128066 5136 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="c355061d-c5fd-4655-aa7e-37b5a40a0400" containerName="setup-container" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128078 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="141e5942-2bf9-424c-a6a7-7c93afdad7dc" containerName="glance-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128085 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="141e5942-2bf9-424c-a6a7-7c93afdad7dc" containerName="glance-log" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128101 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dc2d320-2468-4a45-ba6b-69ea478b5e8c" containerName="nova-api-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128109 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dc2d320-2468-4a45-ba6b-69ea478b5e8c" containerName="nova-api-log" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128119 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ee32c0-35eb-488d-b166-0ad8a8d09f48" containerName="ovn-controller" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128126 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ee32c0-35eb-488d-b166-0ad8a8d09f48" containerName="ovn-controller" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128134 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" containerName="barbican-worker" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128141 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" containerName="barbican-worker" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128148 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" containerName="barbican-api-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128155 5136 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" containerName="barbican-api-log" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128164 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" containerName="barbican-api" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128170 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" containerName="barbican-api" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128178 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7acbc76f-ff83-451e-826f-5fd1f977f74f" containerName="ovn-northd" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128184 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7acbc76f-ff83-451e-826f-5fd1f977f74f" containerName="ovn-northd" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128192 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c17493c5-d958-46ab-8e02-d190b2fa6944" containerName="kube-state-metrics" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128198 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c17493c5-d958-46ab-8e02-d190b2fa6944" containerName="kube-state-metrics" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128207 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="sg-core" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128213 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="sg-core" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128227 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a59ab3d-3094-4e10-bbde-44479696f752" containerName="barbican-keystone-listener" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128232 5136 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2a59ab3d-3094-4e10-bbde-44479696f752" containerName="barbican-keystone-listener" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128240 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2e8f54f-5434-4cf0-94b9-38648bf7ba77" containerName="nova-scheduler-scheduler" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128247 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2e8f54f-5434-4cf0-94b9-38648bf7ba77" containerName="nova-scheduler-scheduler" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128257 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d08c01-d488-4f36-9998-7f074633c7c5" containerName="cinder-api-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128264 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d08c01-d488-4f36-9998-7f074633c7c5" containerName="cinder-api-log" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128274 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="proxy-httpd" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128280 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="proxy-httpd" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128291 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd71646c-cb64-4a01-8076-449c812955d5" containerName="placement-api" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128297 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd71646c-cb64-4a01-8076-449c812955d5" containerName="placement-api" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128306 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="261514f8-7734-423d-b15a-e83fdc2a85fd" containerName="setup-container" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128311 5136 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="261514f8-7734-423d-b15a-e83fdc2a85fd" containerName="setup-container" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128321 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="261514f8-7734-423d-b15a-e83fdc2a85fd" containerName="rabbitmq" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128327 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="261514f8-7734-423d-b15a-e83fdc2a85fd" containerName="rabbitmq" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128337 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd71646c-cb64-4a01-8076-449c812955d5" containerName="placement-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128342 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd71646c-cb64-4a01-8076-449c812955d5" containerName="placement-log" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128352 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="141e5942-2bf9-424c-a6a7-7c93afdad7dc" containerName="glance-httpd" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128357 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="141e5942-2bf9-424c-a6a7-7c93afdad7dc" containerName="glance-httpd" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128367 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af66742a-1452-436f-a22e-7dc277cf690a" containerName="nova-metadata-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128374 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="af66742a-1452-436f-a22e-7dc277cf690a" containerName="nova-metadata-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128517 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" containerName="barbican-api-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128527 5136 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bd71646c-cb64-4a01-8076-449c812955d5" containerName="placement-api" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128537 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="23c10323-3c49-4f00-8bf7-319e6f5834d0" containerName="galera" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128547 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c355061d-c5fd-4655-aa7e-37b5a40a0400" containerName="rabbitmq" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128557 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dc2d320-2468-4a45-ba6b-69ea478b5e8c" containerName="nova-api-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128566 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" containerName="barbican-worker-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128575 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="04ee32c0-35eb-488d-b166-0ad8a8d09f48" containerName="ovn-controller" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128585 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="sg-core" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128592 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dc2d320-2468-4a45-ba6b-69ea478b5e8c" containerName="nova-api-api" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128600 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="141e5942-2bf9-424c-a6a7-7c93afdad7dc" containerName="glance-httpd" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128611 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a59ab3d-3094-4e10-bbde-44479696f752" containerName="barbican-keystone-listener" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128623 5136 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="141e5942-2bf9-424c-a6a7-7c93afdad7dc" containerName="glance-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128633 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a59ab3d-3094-4e10-bbde-44479696f752" containerName="barbican-keystone-listener-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128642 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7acbc76f-ff83-451e-826f-5fd1f977f74f" containerName="openstack-network-exporter" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128653 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="76d08c01-d488-4f36-9998-7f074633c7c5" containerName="cinder-api" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128661 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="ceilometer-central-agent" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128671 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd71646c-cb64-4a01-8076-449c812955d5" containerName="placement-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128678 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe20adf9-d6e2-4487-a176-32ddd55eb051" containerName="glance-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128684 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="52463352-7504-47a4-92e5-d672bab85574" containerName="nova-cell1-conductor-conductor" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128693 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7acbc76f-ff83-451e-826f-5fd1f977f74f" containerName="ovn-northd" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128700 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="af66742a-1452-436f-a22e-7dc277cf690a" containerName="nova-metadata-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 
07:16:00.128709 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe20adf9-d6e2-4487-a176-32ddd55eb051" containerName="glance-httpd" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128721 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d2085e7-db7e-4655-965c-027d03e474e0" containerName="mariadb-account-create-update" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128728 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2e8f54f-5434-4cf0-94b9-38648bf7ba77" containerName="nova-scheduler-scheduler" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128736 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="ceilometer-notification-agent" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128743 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="261514f8-7734-423d-b15a-e83fdc2a85fd" containerName="rabbitmq" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128752 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="fab90141-26b4-4e46-a916-82190508d6e8" containerName="keystone-api" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128760 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="76d08c01-d488-4f36-9998-7f074633c7c5" containerName="cinder-api-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128768 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" containerName="barbican-api" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128776 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c17493c5-d958-46ab-8e02-d190b2fa6944" containerName="kube-state-metrics" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128785 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="960739f0-c4a5-49c6-8e2a-9452815cf1a9" containerName="memcached" 
Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128795 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="proxy-httpd" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128804 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" containerName="barbican-worker" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128823 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="af66742a-1452-436f-a22e-7dc277cf690a" containerName="nova-metadata-metadata" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.129273 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566516-2dnr7" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.130553 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.132727 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.132874 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.134591 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566516-2dnr7"] Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.230534 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbdkf\" (UniqueName: \"kubernetes.io/projected/3a1410b1-69b7-42b6-85c9-967dbbc05b08-kube-api-access-bbdkf\") pod \"auto-csr-approver-29566516-2dnr7\" (UID: \"3a1410b1-69b7-42b6-85c9-967dbbc05b08\") " pod="openshift-infra/auto-csr-approver-29566516-2dnr7" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 
07:16:00.332087 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbdkf\" (UniqueName: \"kubernetes.io/projected/3a1410b1-69b7-42b6-85c9-967dbbc05b08-kube-api-access-bbdkf\") pod \"auto-csr-approver-29566516-2dnr7\" (UID: \"3a1410b1-69b7-42b6-85c9-967dbbc05b08\") " pod="openshift-infra/auto-csr-approver-29566516-2dnr7" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.352090 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbdkf\" (UniqueName: \"kubernetes.io/projected/3a1410b1-69b7-42b6-85c9-967dbbc05b08-kube-api-access-bbdkf\") pod \"auto-csr-approver-29566516-2dnr7\" (UID: \"3a1410b1-69b7-42b6-85c9-967dbbc05b08\") " pod="openshift-infra/auto-csr-approver-29566516-2dnr7" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.415405 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52463352-7504-47a4-92e5-d672bab85574" path="/var/lib/kubelet/pods/52463352-7504-47a4-92e5-d672bab85574/volumes" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.417290 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2e8f54f-5434-4cf0-94b9-38648bf7ba77" path="/var/lib/kubelet/pods/f2e8f54f-5434-4cf0-94b9-38648bf7ba77/volumes" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.447279 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566516-2dnr7" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.885157 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566516-2dnr7"] Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.890117 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 07:16:01 crc kubenswrapper[5136]: I0320 07:16:01.122326 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566516-2dnr7" event={"ID":"3a1410b1-69b7-42b6-85c9-967dbbc05b08","Type":"ContainerStarted","Data":"cc1855cd77ceffda8a136d53ffe756e3172dcb6e0e61666af9010486d1f9e14d"} Mar 20 07:16:03 crc kubenswrapper[5136]: I0320 07:16:03.101320 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6ff4f58fb9-7gtff" podUID="5c52887a-70a8-4d00-a1f9-a5677fa48d1f" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.159:9696/\": dial tcp 10.217.0.159:9696: connect: connection refused" Mar 20 07:16:03 crc kubenswrapper[5136]: I0320 07:16:03.146193 5136 generic.go:334] "Generic (PLEG): container finished" podID="3a1410b1-69b7-42b6-85c9-967dbbc05b08" containerID="71a9b19bcf4bcf8c4a69410e7ffac0d108a4db9d76a7cd352479549f5c15e6f8" exitCode=0 Mar 20 07:16:03 crc kubenswrapper[5136]: I0320 07:16:03.146243 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566516-2dnr7" event={"ID":"3a1410b1-69b7-42b6-85c9-967dbbc05b08","Type":"ContainerDied","Data":"71a9b19bcf4bcf8c4a69410e7ffac0d108a4db9d76a7cd352479549f5c15e6f8"} Mar 20 07:16:04 crc kubenswrapper[5136]: I0320 07:16:04.546773 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566516-2dnr7" Mar 20 07:16:04 crc kubenswrapper[5136]: I0320 07:16:04.704706 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbdkf\" (UniqueName: \"kubernetes.io/projected/3a1410b1-69b7-42b6-85c9-967dbbc05b08-kube-api-access-bbdkf\") pod \"3a1410b1-69b7-42b6-85c9-967dbbc05b08\" (UID: \"3a1410b1-69b7-42b6-85c9-967dbbc05b08\") " Mar 20 07:16:04 crc kubenswrapper[5136]: I0320 07:16:04.722722 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a1410b1-69b7-42b6-85c9-967dbbc05b08-kube-api-access-bbdkf" (OuterVolumeSpecName: "kube-api-access-bbdkf") pod "3a1410b1-69b7-42b6-85c9-967dbbc05b08" (UID: "3a1410b1-69b7-42b6-85c9-967dbbc05b08"). InnerVolumeSpecName "kube-api-access-bbdkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:16:04 crc kubenswrapper[5136]: E0320 07:16:04.779366 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:16:04 crc kubenswrapper[5136]: E0320 07:16:04.779831 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:16:04 crc kubenswrapper[5136]: E0320 07:16:04.780327 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:16:04 crc kubenswrapper[5136]: E0320 07:16:04.780393 5136 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovsdb-server" Mar 20 07:16:04 crc kubenswrapper[5136]: E0320 07:16:04.785978 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:16:04 crc kubenswrapper[5136]: E0320 07:16:04.787250 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:16:04 crc kubenswrapper[5136]: E0320 07:16:04.790907 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] 
Mar 20 07:16:04 crc kubenswrapper[5136]: E0320 07:16:04.790939 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovs-vswitchd" Mar 20 07:16:04 crc kubenswrapper[5136]: I0320 07:16:04.807215 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbdkf\" (UniqueName: \"kubernetes.io/projected/3a1410b1-69b7-42b6-85c9-967dbbc05b08-kube-api-access-bbdkf\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.048553 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.166896 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566516-2dnr7" event={"ID":"3a1410b1-69b7-42b6-85c9-967dbbc05b08","Type":"ContainerDied","Data":"cc1855cd77ceffda8a136d53ffe756e3172dcb6e0e61666af9010486d1f9e14d"} Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.166934 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc1855cd77ceffda8a136d53ffe756e3172dcb6e0e61666af9010486d1f9e14d" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.166942 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566516-2dnr7" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.168523 5136 generic.go:334] "Generic (PLEG): container finished" podID="5c52887a-70a8-4d00-a1f9-a5677fa48d1f" containerID="8fe888c22382b419f6b05cbe043d6eb25912198e1d48b45463c92862edd75153" exitCode=0 Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.168559 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6ff4f58fb9-7gtff" event={"ID":"5c52887a-70a8-4d00-a1f9-a5677fa48d1f","Type":"ContainerDied","Data":"8fe888c22382b419f6b05cbe043d6eb25912198e1d48b45463c92862edd75153"} Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.168583 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6ff4f58fb9-7gtff" event={"ID":"5c52887a-70a8-4d00-a1f9-a5677fa48d1f","Type":"ContainerDied","Data":"6635a1786b5854425a3f89e3fd4433884c9eeba6fdc2878722b9acdad452ee38"} Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.168600 5136 scope.go:117] "RemoveContainer" containerID="f9410d85b9f6582bf241e2b71e3f09bc7bd5e4aee399a12327173bcac6224685" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.168608 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.189993 5136 scope.go:117] "RemoveContainer" containerID="8fe888c22382b419f6b05cbe043d6eb25912198e1d48b45463c92862edd75153" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.211477 5136 scope.go:117] "RemoveContainer" containerID="f9410d85b9f6582bf241e2b71e3f09bc7bd5e4aee399a12327173bcac6224685" Mar 20 07:16:05 crc kubenswrapper[5136]: E0320 07:16:05.211883 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9410d85b9f6582bf241e2b71e3f09bc7bd5e4aee399a12327173bcac6224685\": container with ID starting with f9410d85b9f6582bf241e2b71e3f09bc7bd5e4aee399a12327173bcac6224685 not found: ID does not exist" containerID="f9410d85b9f6582bf241e2b71e3f09bc7bd5e4aee399a12327173bcac6224685" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.211943 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9410d85b9f6582bf241e2b71e3f09bc7bd5e4aee399a12327173bcac6224685"} err="failed to get container status \"f9410d85b9f6582bf241e2b71e3f09bc7bd5e4aee399a12327173bcac6224685\": rpc error: code = NotFound desc = could not find container \"f9410d85b9f6582bf241e2b71e3f09bc7bd5e4aee399a12327173bcac6224685\": container with ID starting with f9410d85b9f6582bf241e2b71e3f09bc7bd5e4aee399a12327173bcac6224685 not found: ID does not exist" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.211965 5136 scope.go:117] "RemoveContainer" containerID="8fe888c22382b419f6b05cbe043d6eb25912198e1d48b45463c92862edd75153" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.213074 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-combined-ca-bundle\") pod \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\" (UID: 
\"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.213122 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj27q\" (UniqueName: \"kubernetes.io/projected/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-kube-api-access-mj27q\") pod \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " Mar 20 07:16:05 crc kubenswrapper[5136]: E0320 07:16:05.213084 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fe888c22382b419f6b05cbe043d6eb25912198e1d48b45463c92862edd75153\": container with ID starting with 8fe888c22382b419f6b05cbe043d6eb25912198e1d48b45463c92862edd75153 not found: ID does not exist" containerID="8fe888c22382b419f6b05cbe043d6eb25912198e1d48b45463c92862edd75153" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.213176 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fe888c22382b419f6b05cbe043d6eb25912198e1d48b45463c92862edd75153"} err="failed to get container status \"8fe888c22382b419f6b05cbe043d6eb25912198e1d48b45463c92862edd75153\": rpc error: code = NotFound desc = could not find container \"8fe888c22382b419f6b05cbe043d6eb25912198e1d48b45463c92862edd75153\": container with ID starting with 8fe888c22382b419f6b05cbe043d6eb25912198e1d48b45463c92862edd75153 not found: ID does not exist" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.213189 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-httpd-config\") pod \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.213263 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-public-tls-certs\") pod \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.213292 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-ovndb-tls-certs\") pod \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.213343 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-config\") pod \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.213366 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-internal-tls-certs\") pod \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.217839 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "5c52887a-70a8-4d00-a1f9-a5677fa48d1f" (UID: "5c52887a-70a8-4d00-a1f9-a5677fa48d1f"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.218028 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-kube-api-access-mj27q" (OuterVolumeSpecName: "kube-api-access-mj27q") pod "5c52887a-70a8-4d00-a1f9-a5677fa48d1f" (UID: "5c52887a-70a8-4d00-a1f9-a5677fa48d1f"). InnerVolumeSpecName "kube-api-access-mj27q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.251183 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5c52887a-70a8-4d00-a1f9-a5677fa48d1f" (UID: "5c52887a-70a8-4d00-a1f9-a5677fa48d1f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.251955 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-config" (OuterVolumeSpecName: "config") pod "5c52887a-70a8-4d00-a1f9-a5677fa48d1f" (UID: "5c52887a-70a8-4d00-a1f9-a5677fa48d1f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.252000 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c52887a-70a8-4d00-a1f9-a5677fa48d1f" (UID: "5c52887a-70a8-4d00-a1f9-a5677fa48d1f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.252618 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5c52887a-70a8-4d00-a1f9-a5677fa48d1f" (UID: "5c52887a-70a8-4d00-a1f9-a5677fa48d1f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.275037 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5c52887a-70a8-4d00-a1f9-a5677fa48d1f" (UID: "5c52887a-70a8-4d00-a1f9-a5677fa48d1f"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.314942 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.314991 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.315003 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.315012 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj27q\" (UniqueName: \"kubernetes.io/projected/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-kube-api-access-mj27q\") on node \"crc\" 
DevicePath \"\"" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.315023 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.315031 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.315039 5136 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.564110 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6ff4f58fb9-7gtff"] Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.579240 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6ff4f58fb9-7gtff"] Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.607582 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566510-bn9cf"] Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.612757 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566510-bn9cf"] Mar 20 07:16:06 crc kubenswrapper[5136]: I0320 07:16:06.406660 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17242c2e-8526-49cf-89dd-e35bd97c6626" path="/var/lib/kubelet/pods/17242c2e-8526-49cf-89dd-e35bd97c6626/volumes" Mar 20 07:16:06 crc kubenswrapper[5136]: I0320 07:16:06.407773 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c52887a-70a8-4d00-a1f9-a5677fa48d1f" path="/var/lib/kubelet/pods/5c52887a-70a8-4d00-a1f9-a5677fa48d1f/volumes" Mar 20 07:16:09 
crc kubenswrapper[5136]: E0320 07:16:09.778699 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:16:09 crc kubenswrapper[5136]: E0320 07:16:09.780027 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:16:09 crc kubenswrapper[5136]: E0320 07:16:09.780447 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:16:09 crc kubenswrapper[5136]: E0320 07:16:09.780535 5136 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovsdb-server" Mar 20 07:16:09 crc kubenswrapper[5136]: E0320 07:16:09.781852 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code 
= Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:16:09 crc kubenswrapper[5136]: E0320 07:16:09.783797 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:16:09 crc kubenswrapper[5136]: E0320 07:16:09.785837 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:16:09 crc kubenswrapper[5136]: E0320 07:16:09.785875 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovs-vswitchd" Mar 20 07:16:14 crc kubenswrapper[5136]: E0320 07:16:14.778872 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:16:14 crc kubenswrapper[5136]: E0320 07:16:14.779603 5136 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:16:14 crc kubenswrapper[5136]: E0320 07:16:14.779848 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:16:14 crc kubenswrapper[5136]: E0320 07:16:14.779872 5136 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovsdb-server" Mar 20 07:16:14 crc kubenswrapper[5136]: E0320 07:16:14.780250 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:16:14 crc kubenswrapper[5136]: E0320 07:16:14.781502 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:16:14 crc kubenswrapper[5136]: E0320 07:16:14.782724 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:16:14 crc kubenswrapper[5136]: E0320 07:16:14.782793 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovs-vswitchd" Mar 20 07:16:18 crc kubenswrapper[5136]: I0320 07:16:18.927930 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 07:16:18 crc kubenswrapper[5136]: I0320 07:16:18.928207 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.008222 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ldp4w_5f48f721-42c9-4f2b-a461-2ad47a1dea3d/ovs-vswitchd/0.log" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.008972 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.025600 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/dd944fb6-1517-4f5b-b579-79d8f1f3da19-cache\") pod \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.025660 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-run\") pod \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.025703 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/31adef78-59fe-4327-9586-0c12177c7bb7-etc-machine-id\") pod \"31adef78-59fe-4327-9586-0c12177c7bb7\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.025736 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-482rz\" (UniqueName: \"kubernetes.io/projected/31adef78-59fe-4327-9586-0c12177c7bb7-kube-api-access-482rz\") pod \"31adef78-59fe-4327-9586-0c12177c7bb7\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.025755 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77dwk\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-kube-api-access-77dwk\") pod \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.025998 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-config-data-custom\") pod \"31adef78-59fe-4327-9586-0c12177c7bb7\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.026026 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-config-data\") pod \"31adef78-59fe-4327-9586-0c12177c7bb7\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.026187 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/dd944fb6-1517-4f5b-b579-79d8f1f3da19-lock\") pod \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.026227 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-scripts\") pod \"31adef78-59fe-4327-9586-0c12177c7bb7\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.026243 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-combined-ca-bundle\") pod \"31adef78-59fe-4327-9586-0c12177c7bb7\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.026284 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqmjf\" (UniqueName: \"kubernetes.io/projected/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-kube-api-access-rqmjf\") pod \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " Mar 20 07:16:19 crc kubenswrapper[5136]: 
I0320 07:16:19.026303 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd944fb6-1517-4f5b-b579-79d8f1f3da19-combined-ca-bundle\") pod \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.026326 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-log\") pod \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.026358 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-lib\") pod \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.026379 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift\") pod \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.026408 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.026437 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-etc-ovs\") pod \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " 
Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.026471 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-scripts\") pod \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.027886 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-run" (OuterVolumeSpecName: "var-run") pod "5f48f721-42c9-4f2b-a461-2ad47a1dea3d" (UID: "5f48f721-42c9-4f2b-a461-2ad47a1dea3d"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.028735 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd944fb6-1517-4f5b-b579-79d8f1f3da19-cache" (OuterVolumeSpecName: "cache") pod "dd944fb6-1517-4f5b-b579-79d8f1f3da19" (UID: "dd944fb6-1517-4f5b-b579-79d8f1f3da19"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.028773 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-lib" (OuterVolumeSpecName: "var-lib") pod "5f48f721-42c9-4f2b-a461-2ad47a1dea3d" (UID: "5f48f721-42c9-4f2b-a461-2ad47a1dea3d"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.028988 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31adef78-59fe-4327-9586-0c12177c7bb7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "31adef78-59fe-4327-9586-0c12177c7bb7" (UID: "31adef78-59fe-4327-9586-0c12177c7bb7"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.029312 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-scripts" (OuterVolumeSpecName: "scripts") pod "5f48f721-42c9-4f2b-a461-2ad47a1dea3d" (UID: "5f48f721-42c9-4f2b-a461-2ad47a1dea3d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.029996 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-log" (OuterVolumeSpecName: "var-log") pod "5f48f721-42c9-4f2b-a461-2ad47a1dea3d" (UID: "5f48f721-42c9-4f2b-a461-2ad47a1dea3d"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.030253 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "5f48f721-42c9-4f2b-a461-2ad47a1dea3d" (UID: "5f48f721-42c9-4f2b-a461-2ad47a1dea3d"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.030664 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd944fb6-1517-4f5b-b579-79d8f1f3da19-lock" (OuterVolumeSpecName: "lock") pod "dd944fb6-1517-4f5b-b579-79d8f1f3da19" (UID: "dd944fb6-1517-4f5b-b579-79d8f1f3da19"). InnerVolumeSpecName "lock". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.033865 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "dd944fb6-1517-4f5b-b579-79d8f1f3da19" (UID: "dd944fb6-1517-4f5b-b579-79d8f1f3da19"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.034323 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "31adef78-59fe-4327-9586-0c12177c7bb7" (UID: "31adef78-59fe-4327-9586-0c12177c7bb7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.034327 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-kube-api-access-rqmjf" (OuterVolumeSpecName: "kube-api-access-rqmjf") pod "5f48f721-42c9-4f2b-a461-2ad47a1dea3d" (UID: "5f48f721-42c9-4f2b-a461-2ad47a1dea3d"). InnerVolumeSpecName "kube-api-access-rqmjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.034345 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-kube-api-access-77dwk" (OuterVolumeSpecName: "kube-api-access-77dwk") pod "dd944fb6-1517-4f5b-b579-79d8f1f3da19" (UID: "dd944fb6-1517-4f5b-b579-79d8f1f3da19"). InnerVolumeSpecName "kube-api-access-77dwk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.034326 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "swift") pod "dd944fb6-1517-4f5b-b579-79d8f1f3da19" (UID: "dd944fb6-1517-4f5b-b579-79d8f1f3da19"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.034414 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31adef78-59fe-4327-9586-0c12177c7bb7-kube-api-access-482rz" (OuterVolumeSpecName: "kube-api-access-482rz") pod "31adef78-59fe-4327-9586-0c12177c7bb7" (UID: "31adef78-59fe-4327-9586-0c12177c7bb7"). InnerVolumeSpecName "kube-api-access-482rz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.034975 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-scripts" (OuterVolumeSpecName: "scripts") pod "31adef78-59fe-4327-9586-0c12177c7bb7" (UID: "31adef78-59fe-4327-9586-0c12177c7bb7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.068035 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31adef78-59fe-4327-9586-0c12177c7bb7" (UID: "31adef78-59fe-4327-9586-0c12177c7bb7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.105505 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-config-data" (OuterVolumeSpecName: "config-data") pod "31adef78-59fe-4327-9586-0c12177c7bb7" (UID: "31adef78-59fe-4327-9586-0c12177c7bb7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129206 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqmjf\" (UniqueName: \"kubernetes.io/projected/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-kube-api-access-rqmjf\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129235 5136 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-log\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129244 5136 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-lib\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129253 5136 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129280 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129290 5136 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-etc-ovs\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129298 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129308 5136 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/dd944fb6-1517-4f5b-b579-79d8f1f3da19-cache\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129316 5136 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-run\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129325 5136 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/31adef78-59fe-4327-9586-0c12177c7bb7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129334 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-482rz\" (UniqueName: \"kubernetes.io/projected/31adef78-59fe-4327-9586-0c12177c7bb7-kube-api-access-482rz\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129343 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77dwk\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-kube-api-access-77dwk\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129352 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc 
kubenswrapper[5136]: I0320 07:16:19.129360 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129368 5136 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/dd944fb6-1517-4f5b-b579-79d8f1f3da19-lock\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129377 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129384 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.142010 5136 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.230971 5136 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.291397 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd944fb6-1517-4f5b-b579-79d8f1f3da19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd944fb6-1517-4f5b-b579-79d8f1f3da19" (UID: "dd944fb6-1517-4f5b-b579-79d8f1f3da19"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.295690 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ldp4w_5f48f721-42c9-4f2b-a461-2ad47a1dea3d/ovs-vswitchd/0.log" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.296482 5136 generic.go:334] "Generic (PLEG): container finished" podID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" exitCode=137 Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.296533 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.296629 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ldp4w" event={"ID":"5f48f721-42c9-4f2b-a461-2ad47a1dea3d","Type":"ContainerDied","Data":"f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c"} Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.296670 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ldp4w" event={"ID":"5f48f721-42c9-4f2b-a461-2ad47a1dea3d","Type":"ContainerDied","Data":"ecef44b4bd97cd40f7c1c2de9472cdb09460ec1aa1b9eb32b1b7e366da3578d0"} Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.296689 5136 scope.go:117] "RemoveContainer" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.300661 5136 generic.go:334] "Generic (PLEG): container finished" podID="31adef78-59fe-4327-9586-0c12177c7bb7" containerID="70798d2130ec02fe1e87b09e3e6f0c61b8831e3d853bce3d884365eb07f74f1c" exitCode=137 Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.300711 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"31adef78-59fe-4327-9586-0c12177c7bb7","Type":"ContainerDied","Data":"70798d2130ec02fe1e87b09e3e6f0c61b8831e3d853bce3d884365eb07f74f1c"} Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.300738 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"31adef78-59fe-4327-9586-0c12177c7bb7","Type":"ContainerDied","Data":"68a0376e88b4b3da7cb1aed58c92f9e17081913aac9827be120b7a59b01a2ab0"} Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.300878 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.310574 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="f70c7805bed8025d369139af1fcef9a0696163fd95d836d75eb51737f5c49f2a" exitCode=137 Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.310608 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"f70c7805bed8025d369139af1fcef9a0696163fd95d836d75eb51737f5c49f2a"} Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.310634 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"d8cd982e91f64705da20c6a48fa3020dac8ffb0c31aec91bfc9c77ff27912742"} Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.310728 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.327528 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-ldp4w"] Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.332297 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd944fb6-1517-4f5b-b579-79d8f1f3da19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.332993 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-ldp4w"] Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.338128 5136 scope.go:117] "RemoveContainer" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.358037 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.372339 5136 scope.go:117] "RemoveContainer" containerID="251425a84f071e1549cd0fc562a37265667b19ac3157615c6641ae85a12bec56" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.377760 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.383262 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.388875 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.406120 5136 scope.go:117] "RemoveContainer" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.406926 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c\": container with ID starting with f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c not found: ID does not exist" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.406967 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c"} err="failed to get container status \"f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c\": rpc error: code = NotFound desc = could not find container \"f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c\": container with ID starting with f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.407000 5136 scope.go:117] "RemoveContainer" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.408017 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58\": container with ID starting with 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 not found: ID does not exist" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.408044 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58"} err="failed to get container status \"5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58\": rpc error: code = NotFound desc = could not find container \"5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58\": container with ID 
starting with 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.408058 5136 scope.go:117] "RemoveContainer" containerID="251425a84f071e1549cd0fc562a37265667b19ac3157615c6641ae85a12bec56" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.408361 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"251425a84f071e1549cd0fc562a37265667b19ac3157615c6641ae85a12bec56\": container with ID starting with 251425a84f071e1549cd0fc562a37265667b19ac3157615c6641ae85a12bec56 not found: ID does not exist" containerID="251425a84f071e1549cd0fc562a37265667b19ac3157615c6641ae85a12bec56" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.408381 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"251425a84f071e1549cd0fc562a37265667b19ac3157615c6641ae85a12bec56"} err="failed to get container status \"251425a84f071e1549cd0fc562a37265667b19ac3157615c6641ae85a12bec56\": rpc error: code = NotFound desc = could not find container \"251425a84f071e1549cd0fc562a37265667b19ac3157615c6641ae85a12bec56\": container with ID starting with 251425a84f071e1549cd0fc562a37265667b19ac3157615c6641ae85a12bec56 not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.408396 5136 scope.go:117] "RemoveContainer" containerID="46238ed837d0eef475008679380016865ea47dbe19adbb4fa48f11493fc145bb" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.429797 5136 scope.go:117] "RemoveContainer" containerID="70798d2130ec02fe1e87b09e3e6f0c61b8831e3d853bce3d884365eb07f74f1c" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.447097 5136 scope.go:117] "RemoveContainer" containerID="46238ed837d0eef475008679380016865ea47dbe19adbb4fa48f11493fc145bb" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.447646 5136 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"46238ed837d0eef475008679380016865ea47dbe19adbb4fa48f11493fc145bb\": container with ID starting with 46238ed837d0eef475008679380016865ea47dbe19adbb4fa48f11493fc145bb not found: ID does not exist" containerID="46238ed837d0eef475008679380016865ea47dbe19adbb4fa48f11493fc145bb" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.447695 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46238ed837d0eef475008679380016865ea47dbe19adbb4fa48f11493fc145bb"} err="failed to get container status \"46238ed837d0eef475008679380016865ea47dbe19adbb4fa48f11493fc145bb\": rpc error: code = NotFound desc = could not find container \"46238ed837d0eef475008679380016865ea47dbe19adbb4fa48f11493fc145bb\": container with ID starting with 46238ed837d0eef475008679380016865ea47dbe19adbb4fa48f11493fc145bb not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.447723 5136 scope.go:117] "RemoveContainer" containerID="70798d2130ec02fe1e87b09e3e6f0c61b8831e3d853bce3d884365eb07f74f1c" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.448074 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70798d2130ec02fe1e87b09e3e6f0c61b8831e3d853bce3d884365eb07f74f1c\": container with ID starting with 70798d2130ec02fe1e87b09e3e6f0c61b8831e3d853bce3d884365eb07f74f1c not found: ID does not exist" containerID="70798d2130ec02fe1e87b09e3e6f0c61b8831e3d853bce3d884365eb07f74f1c" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.448099 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70798d2130ec02fe1e87b09e3e6f0c61b8831e3d853bce3d884365eb07f74f1c"} err="failed to get container status \"70798d2130ec02fe1e87b09e3e6f0c61b8831e3d853bce3d884365eb07f74f1c\": rpc error: code = NotFound desc = could not find container 
\"70798d2130ec02fe1e87b09e3e6f0c61b8831e3d853bce3d884365eb07f74f1c\": container with ID starting with 70798d2130ec02fe1e87b09e3e6f0c61b8831e3d853bce3d884365eb07f74f1c not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.448114 5136 scope.go:117] "RemoveContainer" containerID="f70c7805bed8025d369139af1fcef9a0696163fd95d836d75eb51737f5c49f2a" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.466466 5136 scope.go:117] "RemoveContainer" containerID="32404a19502351fbc3ac4a39615a6d3615a32ac961dda04a10694ca4139bf949" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.481546 5136 scope.go:117] "RemoveContainer" containerID="34db5f0f26eb4ddebb1606a5b4d86660e0c87c646601cd77837816ebb46b09a0" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.503971 5136 scope.go:117] "RemoveContainer" containerID="81b02283172458a2bdc8e165bbbbeb7ea96a03cbbe39b13a588b28c223890c2c" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.585576 5136 scope.go:117] "RemoveContainer" containerID="1b16b20dbd8d4539f67032ae3d013ca240799b991d0700b6f5acd362ea9ea532" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.609451 5136 scope.go:117] "RemoveContainer" containerID="9683c79e385aab29220bb43ba7f0e910b9a5950fe8b31997f17e20626e70a786" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.629680 5136 scope.go:117] "RemoveContainer" containerID="09326f1cbbdb488bd8aa1fff37c4c2948117647badc40e244ff6f4905e759d43" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.645552 5136 scope.go:117] "RemoveContainer" containerID="e8ad73e1a0dc1afcb0ef9f187b10091e6f56e3fa6f0fda3383c1a9da5f7aff67" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.661432 5136 scope.go:117] "RemoveContainer" containerID="cb5954574bc442ce46a1c3ab46d37e5df7b891264fc62e27a9808855f5912da6" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.676307 5136 scope.go:117] "RemoveContainer" containerID="8dbe0818cd9fc5f247918e143232a8d1b8d59f91b26bb44407c84cb8783ff2a3" Mar 20 
07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.692939 5136 scope.go:117] "RemoveContainer" containerID="2781e040da54a91c2e8f03b5afbcb3150c3df4ebd12d977152902fa44468e5f3" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.706974 5136 scope.go:117] "RemoveContainer" containerID="c9f8c6f25f2067d893987b366765d078076a7f16dd40a56558cdc7ca6436f346" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.725195 5136 scope.go:117] "RemoveContainer" containerID="ece61fe1418a6ebf3f8719105d88f8a8d234c50f91addf3cf7652758e2238539" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.740554 5136 scope.go:117] "RemoveContainer" containerID="2a0075a45d95589caf98b737d491a438c7cb43e5b9c36c7d07ea1aebe37bd4aa" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.756183 5136 scope.go:117] "RemoveContainer" containerID="83949dae8602e569fb150aeab8e9d2eb9613a5621a13c04058efa4f9dd9f300d" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.776877 5136 scope.go:117] "RemoveContainer" containerID="f70c7805bed8025d369139af1fcef9a0696163fd95d836d75eb51737f5c49f2a" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.777368 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f70c7805bed8025d369139af1fcef9a0696163fd95d836d75eb51737f5c49f2a\": container with ID starting with f70c7805bed8025d369139af1fcef9a0696163fd95d836d75eb51737f5c49f2a not found: ID does not exist" containerID="f70c7805bed8025d369139af1fcef9a0696163fd95d836d75eb51737f5c49f2a" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.777412 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f70c7805bed8025d369139af1fcef9a0696163fd95d836d75eb51737f5c49f2a"} err="failed to get container status \"f70c7805bed8025d369139af1fcef9a0696163fd95d836d75eb51737f5c49f2a\": rpc error: code = NotFound desc = could not find container \"f70c7805bed8025d369139af1fcef9a0696163fd95d836d75eb51737f5c49f2a\": 
container with ID starting with f70c7805bed8025d369139af1fcef9a0696163fd95d836d75eb51737f5c49f2a not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.777446 5136 scope.go:117] "RemoveContainer" containerID="32404a19502351fbc3ac4a39615a6d3615a32ac961dda04a10694ca4139bf949" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.777867 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32404a19502351fbc3ac4a39615a6d3615a32ac961dda04a10694ca4139bf949\": container with ID starting with 32404a19502351fbc3ac4a39615a6d3615a32ac961dda04a10694ca4139bf949 not found: ID does not exist" containerID="32404a19502351fbc3ac4a39615a6d3615a32ac961dda04a10694ca4139bf949" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.777899 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32404a19502351fbc3ac4a39615a6d3615a32ac961dda04a10694ca4139bf949"} err="failed to get container status \"32404a19502351fbc3ac4a39615a6d3615a32ac961dda04a10694ca4139bf949\": rpc error: code = NotFound desc = could not find container \"32404a19502351fbc3ac4a39615a6d3615a32ac961dda04a10694ca4139bf949\": container with ID starting with 32404a19502351fbc3ac4a39615a6d3615a32ac961dda04a10694ca4139bf949 not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.777921 5136 scope.go:117] "RemoveContainer" containerID="34db5f0f26eb4ddebb1606a5b4d86660e0c87c646601cd77837816ebb46b09a0" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.778189 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34db5f0f26eb4ddebb1606a5b4d86660e0c87c646601cd77837816ebb46b09a0\": container with ID starting with 34db5f0f26eb4ddebb1606a5b4d86660e0c87c646601cd77837816ebb46b09a0 not found: ID does not exist" 
containerID="34db5f0f26eb4ddebb1606a5b4d86660e0c87c646601cd77837816ebb46b09a0" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.778219 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34db5f0f26eb4ddebb1606a5b4d86660e0c87c646601cd77837816ebb46b09a0"} err="failed to get container status \"34db5f0f26eb4ddebb1606a5b4d86660e0c87c646601cd77837816ebb46b09a0\": rpc error: code = NotFound desc = could not find container \"34db5f0f26eb4ddebb1606a5b4d86660e0c87c646601cd77837816ebb46b09a0\": container with ID starting with 34db5f0f26eb4ddebb1606a5b4d86660e0c87c646601cd77837816ebb46b09a0 not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.778241 5136 scope.go:117] "RemoveContainer" containerID="81b02283172458a2bdc8e165bbbbeb7ea96a03cbbe39b13a588b28c223890c2c" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.778547 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81b02283172458a2bdc8e165bbbbeb7ea96a03cbbe39b13a588b28c223890c2c\": container with ID starting with 81b02283172458a2bdc8e165bbbbeb7ea96a03cbbe39b13a588b28c223890c2c not found: ID does not exist" containerID="81b02283172458a2bdc8e165bbbbeb7ea96a03cbbe39b13a588b28c223890c2c" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.778573 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81b02283172458a2bdc8e165bbbbeb7ea96a03cbbe39b13a588b28c223890c2c"} err="failed to get container status \"81b02283172458a2bdc8e165bbbbeb7ea96a03cbbe39b13a588b28c223890c2c\": rpc error: code = NotFound desc = could not find container \"81b02283172458a2bdc8e165bbbbeb7ea96a03cbbe39b13a588b28c223890c2c\": container with ID starting with 81b02283172458a2bdc8e165bbbbeb7ea96a03cbbe39b13a588b28c223890c2c not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.778588 5136 scope.go:117] 
"RemoveContainer" containerID="1b16b20dbd8d4539f67032ae3d013ca240799b991d0700b6f5acd362ea9ea532" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.778828 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b16b20dbd8d4539f67032ae3d013ca240799b991d0700b6f5acd362ea9ea532\": container with ID starting with 1b16b20dbd8d4539f67032ae3d013ca240799b991d0700b6f5acd362ea9ea532 not found: ID does not exist" containerID="1b16b20dbd8d4539f67032ae3d013ca240799b991d0700b6f5acd362ea9ea532" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.778871 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b16b20dbd8d4539f67032ae3d013ca240799b991d0700b6f5acd362ea9ea532"} err="failed to get container status \"1b16b20dbd8d4539f67032ae3d013ca240799b991d0700b6f5acd362ea9ea532\": rpc error: code = NotFound desc = could not find container \"1b16b20dbd8d4539f67032ae3d013ca240799b991d0700b6f5acd362ea9ea532\": container with ID starting with 1b16b20dbd8d4539f67032ae3d013ca240799b991d0700b6f5acd362ea9ea532 not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.778885 5136 scope.go:117] "RemoveContainer" containerID="9683c79e385aab29220bb43ba7f0e910b9a5950fe8b31997f17e20626e70a786" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.779258 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9683c79e385aab29220bb43ba7f0e910b9a5950fe8b31997f17e20626e70a786\": container with ID starting with 9683c79e385aab29220bb43ba7f0e910b9a5950fe8b31997f17e20626e70a786 not found: ID does not exist" containerID="9683c79e385aab29220bb43ba7f0e910b9a5950fe8b31997f17e20626e70a786" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.779280 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9683c79e385aab29220bb43ba7f0e910b9a5950fe8b31997f17e20626e70a786"} err="failed to get container status \"9683c79e385aab29220bb43ba7f0e910b9a5950fe8b31997f17e20626e70a786\": rpc error: code = NotFound desc = could not find container \"9683c79e385aab29220bb43ba7f0e910b9a5950fe8b31997f17e20626e70a786\": container with ID starting with 9683c79e385aab29220bb43ba7f0e910b9a5950fe8b31997f17e20626e70a786 not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.779295 5136 scope.go:117] "RemoveContainer" containerID="09326f1cbbdb488bd8aa1fff37c4c2948117647badc40e244ff6f4905e759d43" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.779540 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09326f1cbbdb488bd8aa1fff37c4c2948117647badc40e244ff6f4905e759d43\": container with ID starting with 09326f1cbbdb488bd8aa1fff37c4c2948117647badc40e244ff6f4905e759d43 not found: ID does not exist" containerID="09326f1cbbdb488bd8aa1fff37c4c2948117647badc40e244ff6f4905e759d43" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.779564 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09326f1cbbdb488bd8aa1fff37c4c2948117647badc40e244ff6f4905e759d43"} err="failed to get container status \"09326f1cbbdb488bd8aa1fff37c4c2948117647badc40e244ff6f4905e759d43\": rpc error: code = NotFound desc = could not find container \"09326f1cbbdb488bd8aa1fff37c4c2948117647badc40e244ff6f4905e759d43\": container with ID starting with 09326f1cbbdb488bd8aa1fff37c4c2948117647badc40e244ff6f4905e759d43 not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.779576 5136 scope.go:117] "RemoveContainer" containerID="e8ad73e1a0dc1afcb0ef9f187b10091e6f56e3fa6f0fda3383c1a9da5f7aff67" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.779787 5136 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e8ad73e1a0dc1afcb0ef9f187b10091e6f56e3fa6f0fda3383c1a9da5f7aff67\": container with ID starting with e8ad73e1a0dc1afcb0ef9f187b10091e6f56e3fa6f0fda3383c1a9da5f7aff67 not found: ID does not exist" containerID="e8ad73e1a0dc1afcb0ef9f187b10091e6f56e3fa6f0fda3383c1a9da5f7aff67" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.779835 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8ad73e1a0dc1afcb0ef9f187b10091e6f56e3fa6f0fda3383c1a9da5f7aff67"} err="failed to get container status \"e8ad73e1a0dc1afcb0ef9f187b10091e6f56e3fa6f0fda3383c1a9da5f7aff67\": rpc error: code = NotFound desc = could not find container \"e8ad73e1a0dc1afcb0ef9f187b10091e6f56e3fa6f0fda3383c1a9da5f7aff67\": container with ID starting with e8ad73e1a0dc1afcb0ef9f187b10091e6f56e3fa6f0fda3383c1a9da5f7aff67 not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.779855 5136 scope.go:117] "RemoveContainer" containerID="cb5954574bc442ce46a1c3ab46d37e5df7b891264fc62e27a9808855f5912da6" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.780164 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb5954574bc442ce46a1c3ab46d37e5df7b891264fc62e27a9808855f5912da6\": container with ID starting with cb5954574bc442ce46a1c3ab46d37e5df7b891264fc62e27a9808855f5912da6 not found: ID does not exist" containerID="cb5954574bc442ce46a1c3ab46d37e5df7b891264fc62e27a9808855f5912da6" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.780190 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb5954574bc442ce46a1c3ab46d37e5df7b891264fc62e27a9808855f5912da6"} err="failed to get container status \"cb5954574bc442ce46a1c3ab46d37e5df7b891264fc62e27a9808855f5912da6\": rpc error: code = NotFound desc = could not find container 
\"cb5954574bc442ce46a1c3ab46d37e5df7b891264fc62e27a9808855f5912da6\": container with ID starting with cb5954574bc442ce46a1c3ab46d37e5df7b891264fc62e27a9808855f5912da6 not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.780205 5136 scope.go:117] "RemoveContainer" containerID="8dbe0818cd9fc5f247918e143232a8d1b8d59f91b26bb44407c84cb8783ff2a3" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.780568 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dbe0818cd9fc5f247918e143232a8d1b8d59f91b26bb44407c84cb8783ff2a3\": container with ID starting with 8dbe0818cd9fc5f247918e143232a8d1b8d59f91b26bb44407c84cb8783ff2a3 not found: ID does not exist" containerID="8dbe0818cd9fc5f247918e143232a8d1b8d59f91b26bb44407c84cb8783ff2a3" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.780607 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dbe0818cd9fc5f247918e143232a8d1b8d59f91b26bb44407c84cb8783ff2a3"} err="failed to get container status \"8dbe0818cd9fc5f247918e143232a8d1b8d59f91b26bb44407c84cb8783ff2a3\": rpc error: code = NotFound desc = could not find container \"8dbe0818cd9fc5f247918e143232a8d1b8d59f91b26bb44407c84cb8783ff2a3\": container with ID starting with 8dbe0818cd9fc5f247918e143232a8d1b8d59f91b26bb44407c84cb8783ff2a3 not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.780647 5136 scope.go:117] "RemoveContainer" containerID="2781e040da54a91c2e8f03b5afbcb3150c3df4ebd12d977152902fa44468e5f3" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.780995 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2781e040da54a91c2e8f03b5afbcb3150c3df4ebd12d977152902fa44468e5f3\": container with ID starting with 2781e040da54a91c2e8f03b5afbcb3150c3df4ebd12d977152902fa44468e5f3 not found: ID does not exist" 
containerID="2781e040da54a91c2e8f03b5afbcb3150c3df4ebd12d977152902fa44468e5f3" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.781014 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2781e040da54a91c2e8f03b5afbcb3150c3df4ebd12d977152902fa44468e5f3"} err="failed to get container status \"2781e040da54a91c2e8f03b5afbcb3150c3df4ebd12d977152902fa44468e5f3\": rpc error: code = NotFound desc = could not find container \"2781e040da54a91c2e8f03b5afbcb3150c3df4ebd12d977152902fa44468e5f3\": container with ID starting with 2781e040da54a91c2e8f03b5afbcb3150c3df4ebd12d977152902fa44468e5f3 not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.781028 5136 scope.go:117] "RemoveContainer" containerID="c9f8c6f25f2067d893987b366765d078076a7f16dd40a56558cdc7ca6436f346" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.781263 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9f8c6f25f2067d893987b366765d078076a7f16dd40a56558cdc7ca6436f346\": container with ID starting with c9f8c6f25f2067d893987b366765d078076a7f16dd40a56558cdc7ca6436f346 not found: ID does not exist" containerID="c9f8c6f25f2067d893987b366765d078076a7f16dd40a56558cdc7ca6436f346" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.781292 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9f8c6f25f2067d893987b366765d078076a7f16dd40a56558cdc7ca6436f346"} err="failed to get container status \"c9f8c6f25f2067d893987b366765d078076a7f16dd40a56558cdc7ca6436f346\": rpc error: code = NotFound desc = could not find container \"c9f8c6f25f2067d893987b366765d078076a7f16dd40a56558cdc7ca6436f346\": container with ID starting with c9f8c6f25f2067d893987b366765d078076a7f16dd40a56558cdc7ca6436f346 not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.781311 5136 scope.go:117] 
"RemoveContainer" containerID="ece61fe1418a6ebf3f8719105d88f8a8d234c50f91addf3cf7652758e2238539" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.781643 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ece61fe1418a6ebf3f8719105d88f8a8d234c50f91addf3cf7652758e2238539\": container with ID starting with ece61fe1418a6ebf3f8719105d88f8a8d234c50f91addf3cf7652758e2238539 not found: ID does not exist" containerID="ece61fe1418a6ebf3f8719105d88f8a8d234c50f91addf3cf7652758e2238539" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.781663 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ece61fe1418a6ebf3f8719105d88f8a8d234c50f91addf3cf7652758e2238539"} err="failed to get container status \"ece61fe1418a6ebf3f8719105d88f8a8d234c50f91addf3cf7652758e2238539\": rpc error: code = NotFound desc = could not find container \"ece61fe1418a6ebf3f8719105d88f8a8d234c50f91addf3cf7652758e2238539\": container with ID starting with ece61fe1418a6ebf3f8719105d88f8a8d234c50f91addf3cf7652758e2238539 not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.781677 5136 scope.go:117] "RemoveContainer" containerID="2a0075a45d95589caf98b737d491a438c7cb43e5b9c36c7d07ea1aebe37bd4aa" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.781903 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a0075a45d95589caf98b737d491a438c7cb43e5b9c36c7d07ea1aebe37bd4aa\": container with ID starting with 2a0075a45d95589caf98b737d491a438c7cb43e5b9c36c7d07ea1aebe37bd4aa not found: ID does not exist" containerID="2a0075a45d95589caf98b737d491a438c7cb43e5b9c36c7d07ea1aebe37bd4aa" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.781922 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2a0075a45d95589caf98b737d491a438c7cb43e5b9c36c7d07ea1aebe37bd4aa"} err="failed to get container status \"2a0075a45d95589caf98b737d491a438c7cb43e5b9c36c7d07ea1aebe37bd4aa\": rpc error: code = NotFound desc = could not find container \"2a0075a45d95589caf98b737d491a438c7cb43e5b9c36c7d07ea1aebe37bd4aa\": container with ID starting with 2a0075a45d95589caf98b737d491a438c7cb43e5b9c36c7d07ea1aebe37bd4aa not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.781936 5136 scope.go:117] "RemoveContainer" containerID="83949dae8602e569fb150aeab8e9d2eb9613a5621a13c04058efa4f9dd9f300d" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.782164 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83949dae8602e569fb150aeab8e9d2eb9613a5621a13c04058efa4f9dd9f300d\": container with ID starting with 83949dae8602e569fb150aeab8e9d2eb9613a5621a13c04058efa4f9dd9f300d not found: ID does not exist" containerID="83949dae8602e569fb150aeab8e9d2eb9613a5621a13c04058efa4f9dd9f300d" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.782196 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83949dae8602e569fb150aeab8e9d2eb9613a5621a13c04058efa4f9dd9f300d"} err="failed to get container status \"83949dae8602e569fb150aeab8e9d2eb9613a5621a13c04058efa4f9dd9f300d\": rpc error: code = NotFound desc = could not find container \"83949dae8602e569fb150aeab8e9d2eb9613a5621a13c04058efa4f9dd9f300d\": container with ID starting with 83949dae8602e569fb150aeab8e9d2eb9613a5621a13c04058efa4f9dd9f300d not found: ID does not exist" Mar 20 07:16:20 crc kubenswrapper[5136]: I0320 07:16:20.404353 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31adef78-59fe-4327-9586-0c12177c7bb7" path="/var/lib/kubelet/pods/31adef78-59fe-4327-9586-0c12177c7bb7/volumes" Mar 20 07:16:20 crc kubenswrapper[5136]: I0320 
07:16:20.405874 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" path="/var/lib/kubelet/pods/5f48f721-42c9-4f2b-a461-2ad47a1dea3d/volumes" Mar 20 07:16:20 crc kubenswrapper[5136]: I0320 07:16:20.406552 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" path="/var/lib/kubelet/pods/dd944fb6-1517-4f5b-b579-79d8f1f3da19/volumes" Mar 20 07:16:20 crc kubenswrapper[5136]: I0320 07:16:20.541994 5136 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod8b1461d1-f963-40b0-8cad-a5b2735eedcc"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod8b1461d1-f963-40b0-8cad-a5b2735eedcc] : Timed out while waiting for systemd to remove kubepods-besteffort-pod8b1461d1_f963_40b0_8cad_a5b2735eedcc.slice" Mar 20 07:16:20 crc kubenswrapper[5136]: E0320 07:16:20.542046 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod8b1461d1-f963-40b0-8cad-a5b2735eedcc] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod8b1461d1-f963-40b0-8cad-a5b2735eedcc] : Timed out while waiting for systemd to remove kubepods-besteffort-pod8b1461d1_f963_40b0_8cad_a5b2735eedcc.slice" pod="openstack/ovsdbserver-nb-0" podUID="8b1461d1-f963-40b0-8cad-a5b2735eedcc" Mar 20 07:16:21 crc kubenswrapper[5136]: I0320 07:16:21.347395 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 20 07:16:21 crc kubenswrapper[5136]: I0320 07:16:21.377280 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 20 07:16:21 crc kubenswrapper[5136]: I0320 07:16:21.388666 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 20 07:16:22 crc kubenswrapper[5136]: I0320 07:16:22.404461 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b1461d1-f963-40b0-8cad-a5b2735eedcc" path="/var/lib/kubelet/pods/8b1461d1-f963-40b0-8cad-a5b2735eedcc/volumes" Mar 20 07:16:30 crc kubenswrapper[5136]: I0320 07:16:30.433345 5136 scope.go:117] "RemoveContainer" containerID="a922963e448f67de5c7ef7e39ae9a8fe1051c4a0abe704c7b54dc25c09d90caa" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.323153 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n67v6"] Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.323987 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31adef78-59fe-4327-9586-0c12177c7bb7" containerName="cinder-scheduler" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324001 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="31adef78-59fe-4327-9586-0c12177c7bb7" containerName="cinder-scheduler" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324012 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-auditor" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324018 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-auditor" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324029 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovsdb-server" Mar 20 07:16:39 crc 
kubenswrapper[5136]: I0320 07:16:39.324035 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovsdb-server" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324043 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovsdb-server-init" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324048 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovsdb-server-init" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324058 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-server" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324063 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-server" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324073 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-reaper" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324079 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-reaper" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324088 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-auditor" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324093 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-auditor" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324128 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovs-vswitchd" Mar 20 07:16:39 crc kubenswrapper[5136]: 
I0320 07:16:39.324134 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovs-vswitchd" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324145 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-replicator" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324151 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-replicator" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324161 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c52887a-70a8-4d00-a1f9-a5677fa48d1f" containerName="neutron-httpd" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324167 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c52887a-70a8-4d00-a1f9-a5677fa48d1f" containerName="neutron-httpd" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324180 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-updater" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324185 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-updater" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324193 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c52887a-70a8-4d00-a1f9-a5677fa48d1f" containerName="neutron-api" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324198 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c52887a-70a8-4d00-a1f9-a5677fa48d1f" containerName="neutron-api" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324208 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-replicator" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 
07:16:39.324213 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-replicator" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324222 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="swift-recon-cron" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324229 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="swift-recon-cron" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324238 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-server" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324243 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-server" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324254 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-auditor" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324260 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-auditor" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324275 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="rsync" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324281 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="rsync" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324291 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31adef78-59fe-4327-9586-0c12177c7bb7" containerName="probe" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324296 5136 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="31adef78-59fe-4327-9586-0c12177c7bb7" containerName="probe" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324307 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-replicator" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324313 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-replicator" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324322 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a1410b1-69b7-42b6-85c9-967dbbc05b08" containerName="oc" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324328 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a1410b1-69b7-42b6-85c9-967dbbc05b08" containerName="oc" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324336 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-server" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324343 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-server" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324354 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d2085e7-db7e-4655-965c-027d03e474e0" containerName="mariadb-account-create-update" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324363 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d2085e7-db7e-4655-965c-027d03e474e0" containerName="mariadb-account-create-update" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324374 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-updater" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324381 5136 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-updater" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324389 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-expirer" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324395 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-expirer" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324538 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-auditor" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324553 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c52887a-70a8-4d00-a1f9-a5677fa48d1f" containerName="neutron-api" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324565 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovs-vswitchd" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324574 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-replicator" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324587 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a1410b1-69b7-42b6-85c9-967dbbc05b08" containerName="oc" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324599 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="swift-recon-cron" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324611 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-replicator" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324622 5136 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-server" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324632 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c52887a-70a8-4d00-a1f9-a5677fa48d1f" containerName="neutron-httpd" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324647 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-updater" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324659 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="31adef78-59fe-4327-9586-0c12177c7bb7" containerName="probe" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324674 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="31adef78-59fe-4327-9586-0c12177c7bb7" containerName="cinder-scheduler" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324685 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovsdb-server" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324696 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-auditor" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324705 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-auditor" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324717 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-reaper" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324728 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-updater" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324739 5136 
memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-server" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324752 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-replicator" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324763 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="rsync" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324775 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d2085e7-db7e-4655-965c-027d03e474e0" containerName="mariadb-account-create-update" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324784 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-expirer" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324791 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-server" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.329728 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.337728 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n67v6"] Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.416367 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ede5c080-8aac-453b-8d12-89d54e561a16-utilities\") pod \"redhat-marketplace-n67v6\" (UID: \"ede5c080-8aac-453b-8d12-89d54e561a16\") " pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.416538 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ede5c080-8aac-453b-8d12-89d54e561a16-catalog-content\") pod \"redhat-marketplace-n67v6\" (UID: \"ede5c080-8aac-453b-8d12-89d54e561a16\") " pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.416600 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h2tt\" (UniqueName: \"kubernetes.io/projected/ede5c080-8aac-453b-8d12-89d54e561a16-kube-api-access-7h2tt\") pod \"redhat-marketplace-n67v6\" (UID: \"ede5c080-8aac-453b-8d12-89d54e561a16\") " pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.517605 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ede5c080-8aac-453b-8d12-89d54e561a16-utilities\") pod \"redhat-marketplace-n67v6\" (UID: \"ede5c080-8aac-453b-8d12-89d54e561a16\") " pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.517675 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ede5c080-8aac-453b-8d12-89d54e561a16-catalog-content\") pod \"redhat-marketplace-n67v6\" (UID: \"ede5c080-8aac-453b-8d12-89d54e561a16\") " pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.517706 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h2tt\" (UniqueName: \"kubernetes.io/projected/ede5c080-8aac-453b-8d12-89d54e561a16-kube-api-access-7h2tt\") pod \"redhat-marketplace-n67v6\" (UID: \"ede5c080-8aac-453b-8d12-89d54e561a16\") " pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.518135 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ede5c080-8aac-453b-8d12-89d54e561a16-utilities\") pod \"redhat-marketplace-n67v6\" (UID: \"ede5c080-8aac-453b-8d12-89d54e561a16\") " pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.518230 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ede5c080-8aac-453b-8d12-89d54e561a16-catalog-content\") pod \"redhat-marketplace-n67v6\" (UID: \"ede5c080-8aac-453b-8d12-89d54e561a16\") " pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.541801 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h2tt\" (UniqueName: \"kubernetes.io/projected/ede5c080-8aac-453b-8d12-89d54e561a16-kube-api-access-7h2tt\") pod \"redhat-marketplace-n67v6\" (UID: \"ede5c080-8aac-453b-8d12-89d54e561a16\") " pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.650187 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:40 crc kubenswrapper[5136]: I0320 07:16:40.078542 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n67v6"] Mar 20 07:16:40 crc kubenswrapper[5136]: I0320 07:16:40.494257 5136 generic.go:334] "Generic (PLEG): container finished" podID="ede5c080-8aac-453b-8d12-89d54e561a16" containerID="e2c2898dab0189b60a22fc7d710d4ba3b907b46feed5284fadb24d702a7ee2ee" exitCode=0 Mar 20 07:16:40 crc kubenswrapper[5136]: I0320 07:16:40.494309 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n67v6" event={"ID":"ede5c080-8aac-453b-8d12-89d54e561a16","Type":"ContainerDied","Data":"e2c2898dab0189b60a22fc7d710d4ba3b907b46feed5284fadb24d702a7ee2ee"} Mar 20 07:16:40 crc kubenswrapper[5136]: I0320 07:16:40.494559 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n67v6" event={"ID":"ede5c080-8aac-453b-8d12-89d54e561a16","Type":"ContainerStarted","Data":"c663be635c978fe0eadab9caed934264165c0da1036ee4ab855b93eaa6e26937"} Mar 20 07:16:41 crc kubenswrapper[5136]: I0320 07:16:41.503298 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n67v6" event={"ID":"ede5c080-8aac-453b-8d12-89d54e561a16","Type":"ContainerStarted","Data":"7a7e931ed6013843cdaeac113a4bf6321aa7e61ae145015e56f5d6da9866a721"} Mar 20 07:16:42 crc kubenswrapper[5136]: I0320 07:16:42.513352 5136 generic.go:334] "Generic (PLEG): container finished" podID="ede5c080-8aac-453b-8d12-89d54e561a16" containerID="7a7e931ed6013843cdaeac113a4bf6321aa7e61ae145015e56f5d6da9866a721" exitCode=0 Mar 20 07:16:42 crc kubenswrapper[5136]: I0320 07:16:42.513433 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n67v6" 
event={"ID":"ede5c080-8aac-453b-8d12-89d54e561a16","Type":"ContainerDied","Data":"7a7e931ed6013843cdaeac113a4bf6321aa7e61ae145015e56f5d6da9866a721"} Mar 20 07:16:43 crc kubenswrapper[5136]: I0320 07:16:43.524477 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n67v6" event={"ID":"ede5c080-8aac-453b-8d12-89d54e561a16","Type":"ContainerStarted","Data":"eb4c575001fb46f632a9605a78c81dcdb6bfcefd27dc4b84e6fe9f4bfa3e1b4c"} Mar 20 07:16:43 crc kubenswrapper[5136]: I0320 07:16:43.550840 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n67v6" podStartSLOduration=1.920277851 podStartE2EDuration="4.550800825s" podCreationTimestamp="2026-03-20 07:16:39 +0000 UTC" firstStartedPulling="2026-03-20 07:16:40.495679256 +0000 UTC m=+1632.754990407" lastFinishedPulling="2026-03-20 07:16:43.12620223 +0000 UTC m=+1635.385513381" observedRunningTime="2026-03-20 07:16:43.544965863 +0000 UTC m=+1635.804277044" watchObservedRunningTime="2026-03-20 07:16:43.550800825 +0000 UTC m=+1635.810111976" Mar 20 07:16:49 crc kubenswrapper[5136]: I0320 07:16:49.650729 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:49 crc kubenswrapper[5136]: I0320 07:16:49.651314 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:49 crc kubenswrapper[5136]: I0320 07:16:49.690765 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:50 crc kubenswrapper[5136]: I0320 07:16:50.622156 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:50 crc kubenswrapper[5136]: I0320 07:16:50.670175 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-n67v6"] Mar 20 07:16:52 crc kubenswrapper[5136]: I0320 07:16:52.607923 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n67v6" podUID="ede5c080-8aac-453b-8d12-89d54e561a16" containerName="registry-server" containerID="cri-o://eb4c575001fb46f632a9605a78c81dcdb6bfcefd27dc4b84e6fe9f4bfa3e1b4c" gracePeriod=2 Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.052418 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.204668 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ede5c080-8aac-453b-8d12-89d54e561a16-utilities\") pod \"ede5c080-8aac-453b-8d12-89d54e561a16\" (UID: \"ede5c080-8aac-453b-8d12-89d54e561a16\") " Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.204727 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h2tt\" (UniqueName: \"kubernetes.io/projected/ede5c080-8aac-453b-8d12-89d54e561a16-kube-api-access-7h2tt\") pod \"ede5c080-8aac-453b-8d12-89d54e561a16\" (UID: \"ede5c080-8aac-453b-8d12-89d54e561a16\") " Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.204768 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ede5c080-8aac-453b-8d12-89d54e561a16-catalog-content\") pod \"ede5c080-8aac-453b-8d12-89d54e561a16\" (UID: \"ede5c080-8aac-453b-8d12-89d54e561a16\") " Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.206463 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ede5c080-8aac-453b-8d12-89d54e561a16-utilities" (OuterVolumeSpecName: "utilities") pod "ede5c080-8aac-453b-8d12-89d54e561a16" (UID: 
"ede5c080-8aac-453b-8d12-89d54e561a16"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.210465 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ede5c080-8aac-453b-8d12-89d54e561a16-kube-api-access-7h2tt" (OuterVolumeSpecName: "kube-api-access-7h2tt") pod "ede5c080-8aac-453b-8d12-89d54e561a16" (UID: "ede5c080-8aac-453b-8d12-89d54e561a16"). InnerVolumeSpecName "kube-api-access-7h2tt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.266016 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ede5c080-8aac-453b-8d12-89d54e561a16-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ede5c080-8aac-453b-8d12-89d54e561a16" (UID: "ede5c080-8aac-453b-8d12-89d54e561a16"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.305844 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ede5c080-8aac-453b-8d12-89d54e561a16-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.305885 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h2tt\" (UniqueName: \"kubernetes.io/projected/ede5c080-8aac-453b-8d12-89d54e561a16-kube-api-access-7h2tt\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.305896 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ede5c080-8aac-453b-8d12-89d54e561a16-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.619793 5136 generic.go:334] "Generic (PLEG): container finished" 
podID="ede5c080-8aac-453b-8d12-89d54e561a16" containerID="eb4c575001fb46f632a9605a78c81dcdb6bfcefd27dc4b84e6fe9f4bfa3e1b4c" exitCode=0 Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.619902 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n67v6" event={"ID":"ede5c080-8aac-453b-8d12-89d54e561a16","Type":"ContainerDied","Data":"eb4c575001fb46f632a9605a78c81dcdb6bfcefd27dc4b84e6fe9f4bfa3e1b4c"} Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.619940 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n67v6" event={"ID":"ede5c080-8aac-453b-8d12-89d54e561a16","Type":"ContainerDied","Data":"c663be635c978fe0eadab9caed934264165c0da1036ee4ab855b93eaa6e26937"} Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.619967 5136 scope.go:117] "RemoveContainer" containerID="eb4c575001fb46f632a9605a78c81dcdb6bfcefd27dc4b84e6fe9f4bfa3e1b4c" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.620097 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.658660 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n67v6"] Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.664960 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n67v6"] Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.675400 5136 scope.go:117] "RemoveContainer" containerID="7a7e931ed6013843cdaeac113a4bf6321aa7e61ae145015e56f5d6da9866a721" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.704305 5136 scope.go:117] "RemoveContainer" containerID="e2c2898dab0189b60a22fc7d710d4ba3b907b46feed5284fadb24d702a7ee2ee" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.723954 5136 scope.go:117] "RemoveContainer" containerID="eb4c575001fb46f632a9605a78c81dcdb6bfcefd27dc4b84e6fe9f4bfa3e1b4c" Mar 20 07:16:53 crc kubenswrapper[5136]: E0320 07:16:53.724490 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb4c575001fb46f632a9605a78c81dcdb6bfcefd27dc4b84e6fe9f4bfa3e1b4c\": container with ID starting with eb4c575001fb46f632a9605a78c81dcdb6bfcefd27dc4b84e6fe9f4bfa3e1b4c not found: ID does not exist" containerID="eb4c575001fb46f632a9605a78c81dcdb6bfcefd27dc4b84e6fe9f4bfa3e1b4c" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.724539 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb4c575001fb46f632a9605a78c81dcdb6bfcefd27dc4b84e6fe9f4bfa3e1b4c"} err="failed to get container status \"eb4c575001fb46f632a9605a78c81dcdb6bfcefd27dc4b84e6fe9f4bfa3e1b4c\": rpc error: code = NotFound desc = could not find container \"eb4c575001fb46f632a9605a78c81dcdb6bfcefd27dc4b84e6fe9f4bfa3e1b4c\": container with ID starting with eb4c575001fb46f632a9605a78c81dcdb6bfcefd27dc4b84e6fe9f4bfa3e1b4c not found: 
ID does not exist" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.724570 5136 scope.go:117] "RemoveContainer" containerID="7a7e931ed6013843cdaeac113a4bf6321aa7e61ae145015e56f5d6da9866a721" Mar 20 07:16:53 crc kubenswrapper[5136]: E0320 07:16:53.724976 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a7e931ed6013843cdaeac113a4bf6321aa7e61ae145015e56f5d6da9866a721\": container with ID starting with 7a7e931ed6013843cdaeac113a4bf6321aa7e61ae145015e56f5d6da9866a721 not found: ID does not exist" containerID="7a7e931ed6013843cdaeac113a4bf6321aa7e61ae145015e56f5d6da9866a721" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.725012 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a7e931ed6013843cdaeac113a4bf6321aa7e61ae145015e56f5d6da9866a721"} err="failed to get container status \"7a7e931ed6013843cdaeac113a4bf6321aa7e61ae145015e56f5d6da9866a721\": rpc error: code = NotFound desc = could not find container \"7a7e931ed6013843cdaeac113a4bf6321aa7e61ae145015e56f5d6da9866a721\": container with ID starting with 7a7e931ed6013843cdaeac113a4bf6321aa7e61ae145015e56f5d6da9866a721 not found: ID does not exist" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.725041 5136 scope.go:117] "RemoveContainer" containerID="e2c2898dab0189b60a22fc7d710d4ba3b907b46feed5284fadb24d702a7ee2ee" Mar 20 07:16:53 crc kubenswrapper[5136]: E0320 07:16:53.725399 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2c2898dab0189b60a22fc7d710d4ba3b907b46feed5284fadb24d702a7ee2ee\": container with ID starting with e2c2898dab0189b60a22fc7d710d4ba3b907b46feed5284fadb24d702a7ee2ee not found: ID does not exist" containerID="e2c2898dab0189b60a22fc7d710d4ba3b907b46feed5284fadb24d702a7ee2ee" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.725466 5136 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2c2898dab0189b60a22fc7d710d4ba3b907b46feed5284fadb24d702a7ee2ee"} err="failed to get container status \"e2c2898dab0189b60a22fc7d710d4ba3b907b46feed5284fadb24d702a7ee2ee\": rpc error: code = NotFound desc = could not find container \"e2c2898dab0189b60a22fc7d710d4ba3b907b46feed5284fadb24d702a7ee2ee\": container with ID starting with e2c2898dab0189b60a22fc7d710d4ba3b907b46feed5284fadb24d702a7ee2ee not found: ID does not exist" Mar 20 07:16:54 crc kubenswrapper[5136]: I0320 07:16:54.414099 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ede5c080-8aac-453b-8d12-89d54e561a16" path="/var/lib/kubelet/pods/ede5c080-8aac-453b-8d12-89d54e561a16/volumes" Mar 20 07:17:30 crc kubenswrapper[5136]: I0320 07:17:30.996840 5136 scope.go:117] "RemoveContainer" containerID="7f81f78f97fc5d48f48b6354b794c050f707e5b35fc6d46c7df2de9e4878960b" Mar 20 07:17:31 crc kubenswrapper[5136]: I0320 07:17:31.028451 5136 scope.go:117] "RemoveContainer" containerID="34ee2cccbe30631969d3aa93a1b8264849d8d5334e0c97572f21e0a6e95e8e26" Mar 20 07:17:31 crc kubenswrapper[5136]: I0320 07:17:31.059030 5136 scope.go:117] "RemoveContainer" containerID="933fdd395d96426dd2696ed053dd4cefada8c95df3be0a52f3cc68ad68f9aebb" Mar 20 07:17:31 crc kubenswrapper[5136]: I0320 07:17:31.084000 5136 scope.go:117] "RemoveContainer" containerID="bd48417c8a8842903b86c0b0297625af601775c012346c6e4a42ced3c9d81a5c" Mar 20 07:17:31 crc kubenswrapper[5136]: I0320 07:17:31.107793 5136 scope.go:117] "RemoveContainer" containerID="c83952221ac9ae15d237b01aa417d2a8651bd6786c0034250cebe0e17be31690" Mar 20 07:17:31 crc kubenswrapper[5136]: I0320 07:17:31.129202 5136 scope.go:117] "RemoveContainer" containerID="2e4cee4a85209760afcb1fc4e1920e495e69a4a4c4fbdedacaa3ff6869eb619f" Mar 20 07:17:31 crc kubenswrapper[5136]: I0320 07:17:31.152942 5136 scope.go:117] "RemoveContainer" containerID="f8f2b333bca19081fee1627c5e046485a6793b7781e892f02c6a8b08ca392e57" 
Mar 20 07:17:31 crc kubenswrapper[5136]: I0320 07:17:31.176327 5136 scope.go:117] "RemoveContainer" containerID="dc6f042f4a1f3f8ba50fa65cef930cd8040f1e880b0843b1b3beecf9065681fb" Mar 20 07:17:31 crc kubenswrapper[5136]: I0320 07:17:31.205905 5136 scope.go:117] "RemoveContainer" containerID="ca4d6aff6fa4147c69ade98576093b5726d3ffc5a53c4a7f48a1261885cf9eaf" Mar 20 07:17:31 crc kubenswrapper[5136]: I0320 07:17:31.226143 5136 scope.go:117] "RemoveContainer" containerID="df74ea59bf43247509097578b0b44714fcb954b2204d1d34decc8550e92f3f6e" Mar 20 07:17:31 crc kubenswrapper[5136]: I0320 07:17:31.245564 5136 scope.go:117] "RemoveContainer" containerID="55e70c80be714d08791bbb875a2885eb056808546361bafa1ce59b4a2b4afd94" Mar 20 07:17:31 crc kubenswrapper[5136]: I0320 07:17:31.261654 5136 scope.go:117] "RemoveContainer" containerID="7056c10c02d573c52be9cb6646cfd2016f281214c76d5613dade95a4d450b824" Mar 20 07:17:31 crc kubenswrapper[5136]: I0320 07:17:31.290218 5136 scope.go:117] "RemoveContainer" containerID="61abc8440208cd19caa61d866cd42cc249d0d527cfebb488be887ccce4bdea72" Mar 20 07:17:31 crc kubenswrapper[5136]: I0320 07:17:31.306572 5136 scope.go:117] "RemoveContainer" containerID="0a42176f6839fd2b1fa46f8a90c2d73b4c4eaa11385cb9c81bf9e24e01ecf323" Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.150869 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566518-mjsfh"] Mar 20 07:18:00 crc kubenswrapper[5136]: E0320 07:18:00.151921 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede5c080-8aac-453b-8d12-89d54e561a16" containerName="extract-utilities" Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.151950 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede5c080-8aac-453b-8d12-89d54e561a16" containerName="extract-utilities" Mar 20 07:18:00 crc kubenswrapper[5136]: E0320 07:18:00.151998 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede5c080-8aac-453b-8d12-89d54e561a16" 
containerName="extract-content" Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.152017 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede5c080-8aac-453b-8d12-89d54e561a16" containerName="extract-content" Mar 20 07:18:00 crc kubenswrapper[5136]: E0320 07:18:00.152070 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede5c080-8aac-453b-8d12-89d54e561a16" containerName="registry-server" Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.152087 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede5c080-8aac-453b-8d12-89d54e561a16" containerName="registry-server" Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.152346 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ede5c080-8aac-453b-8d12-89d54e561a16" containerName="registry-server" Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.153109 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566518-mjsfh" Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.156508 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.156693 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.156716 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.169484 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566518-mjsfh"] Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.346865 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvcfv\" (UniqueName: 
\"kubernetes.io/projected/6e858127-6d5f-4dcd-828c-a6f7b892c4dc-kube-api-access-rvcfv\") pod \"auto-csr-approver-29566518-mjsfh\" (UID: \"6e858127-6d5f-4dcd-828c-a6f7b892c4dc\") " pod="openshift-infra/auto-csr-approver-29566518-mjsfh" Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.448070 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvcfv\" (UniqueName: \"kubernetes.io/projected/6e858127-6d5f-4dcd-828c-a6f7b892c4dc-kube-api-access-rvcfv\") pod \"auto-csr-approver-29566518-mjsfh\" (UID: \"6e858127-6d5f-4dcd-828c-a6f7b892c4dc\") " pod="openshift-infra/auto-csr-approver-29566518-mjsfh" Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.465303 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvcfv\" (UniqueName: \"kubernetes.io/projected/6e858127-6d5f-4dcd-828c-a6f7b892c4dc-kube-api-access-rvcfv\") pod \"auto-csr-approver-29566518-mjsfh\" (UID: \"6e858127-6d5f-4dcd-828c-a6f7b892c4dc\") " pod="openshift-infra/auto-csr-approver-29566518-mjsfh" Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.477892 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566518-mjsfh" Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.881085 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566518-mjsfh"] Mar 20 07:18:01 crc kubenswrapper[5136]: I0320 07:18:01.224306 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566518-mjsfh" event={"ID":"6e858127-6d5f-4dcd-828c-a6f7b892c4dc","Type":"ContainerStarted","Data":"65b618d7fd02f04a380bdd119d1b6cb7996987df0d62f6e11968a52998757989"} Mar 20 07:18:03 crc kubenswrapper[5136]: I0320 07:18:03.241457 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566518-mjsfh" event={"ID":"6e858127-6d5f-4dcd-828c-a6f7b892c4dc","Type":"ContainerStarted","Data":"7af0b7f0c5b3f60705910e6fc269402a40ca078da17abc7ff26594b5a890f02e"} Mar 20 07:18:03 crc kubenswrapper[5136]: I0320 07:18:03.264048 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566518-mjsfh" podStartSLOduration=1.237537108 podStartE2EDuration="3.26402975s" podCreationTimestamp="2026-03-20 07:18:00 +0000 UTC" firstStartedPulling="2026-03-20 07:18:00.893473126 +0000 UTC m=+1713.152784287" lastFinishedPulling="2026-03-20 07:18:02.919965758 +0000 UTC m=+1715.179276929" observedRunningTime="2026-03-20 07:18:03.260991055 +0000 UTC m=+1715.520302196" watchObservedRunningTime="2026-03-20 07:18:03.26402975 +0000 UTC m=+1715.523340901" Mar 20 07:18:04 crc kubenswrapper[5136]: I0320 07:18:04.252732 5136 generic.go:334] "Generic (PLEG): container finished" podID="6e858127-6d5f-4dcd-828c-a6f7b892c4dc" containerID="7af0b7f0c5b3f60705910e6fc269402a40ca078da17abc7ff26594b5a890f02e" exitCode=0 Mar 20 07:18:04 crc kubenswrapper[5136]: I0320 07:18:04.252884 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566518-mjsfh" 
event={"ID":"6e858127-6d5f-4dcd-828c-a6f7b892c4dc","Type":"ContainerDied","Data":"7af0b7f0c5b3f60705910e6fc269402a40ca078da17abc7ff26594b5a890f02e"} Mar 20 07:18:05 crc kubenswrapper[5136]: I0320 07:18:05.591306 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566518-mjsfh" Mar 20 07:18:05 crc kubenswrapper[5136]: I0320 07:18:05.757301 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvcfv\" (UniqueName: \"kubernetes.io/projected/6e858127-6d5f-4dcd-828c-a6f7b892c4dc-kube-api-access-rvcfv\") pod \"6e858127-6d5f-4dcd-828c-a6f7b892c4dc\" (UID: \"6e858127-6d5f-4dcd-828c-a6f7b892c4dc\") " Mar 20 07:18:05 crc kubenswrapper[5136]: I0320 07:18:05.764985 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e858127-6d5f-4dcd-828c-a6f7b892c4dc-kube-api-access-rvcfv" (OuterVolumeSpecName: "kube-api-access-rvcfv") pod "6e858127-6d5f-4dcd-828c-a6f7b892c4dc" (UID: "6e858127-6d5f-4dcd-828c-a6f7b892c4dc"). InnerVolumeSpecName "kube-api-access-rvcfv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:18:05 crc kubenswrapper[5136]: I0320 07:18:05.859571 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvcfv\" (UniqueName: \"kubernetes.io/projected/6e858127-6d5f-4dcd-828c-a6f7b892c4dc-kube-api-access-rvcfv\") on node \"crc\" DevicePath \"\"" Mar 20 07:18:06 crc kubenswrapper[5136]: I0320 07:18:06.276618 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566518-mjsfh" event={"ID":"6e858127-6d5f-4dcd-828c-a6f7b892c4dc","Type":"ContainerDied","Data":"65b618d7fd02f04a380bdd119d1b6cb7996987df0d62f6e11968a52998757989"} Mar 20 07:18:06 crc kubenswrapper[5136]: I0320 07:18:06.276694 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65b618d7fd02f04a380bdd119d1b6cb7996987df0d62f6e11968a52998757989" Mar 20 07:18:06 crc kubenswrapper[5136]: I0320 07:18:06.276799 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566518-mjsfh" Mar 20 07:18:06 crc kubenswrapper[5136]: I0320 07:18:06.341737 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566512-lrvjf"] Mar 20 07:18:06 crc kubenswrapper[5136]: I0320 07:18:06.348747 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566512-lrvjf"] Mar 20 07:18:06 crc kubenswrapper[5136]: I0320 07:18:06.405238 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ace6934-986e-463e-8e10-ea2d38d8657b" path="/var/lib/kubelet/pods/4ace6934-986e-463e-8e10-ea2d38d8657b/volumes" Mar 20 07:18:15 crc kubenswrapper[5136]: I0320 07:18:15.822247 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 20 07:18:15 crc kubenswrapper[5136]: I0320 07:18:15.822948 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.525528 5136 scope.go:117] "RemoveContainer" containerID="031d15e3c6d48fb60bf7992b603ae52f0ad57d2692789532d8ff3e43150b8a62" Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.574361 5136 scope.go:117] "RemoveContainer" containerID="0ed02eb432d6f42e0d9bf84365b12025d2b0ecfccb688b075f04ab7b6e93a89d" Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.597990 5136 scope.go:117] "RemoveContainer" containerID="27458c6d0396483e2bf32a7b77f963fd7b5299335805aa8a1978233da54516f3" Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.644788 5136 scope.go:117] "RemoveContainer" containerID="624feab47793180e2f843804104146a4e3de4528636c1ebc9f47f7172993b072" Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.690781 5136 scope.go:117] "RemoveContainer" containerID="52c9595f9d03cfa1e4df7232d34e2bf01954bbb2d3d7f55b6c4baddaa2f4853a" Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.728315 5136 scope.go:117] "RemoveContainer" containerID="47ae9136918142f0659195583b1d45f1b8d098ff54fd4db577e632c9d504d4ec" Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.756151 5136 scope.go:117] "RemoveContainer" containerID="80afd4ebec7d57a2a5f4e5804fe0cafa6290530e8266af5fe943abb82f8b0a3e" Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.781358 5136 scope.go:117] "RemoveContainer" containerID="152cfd50a682e083fe5fcb83f9e826724106ecbcb51dbd391cff3907a957fa98" Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.803466 5136 scope.go:117] "RemoveContainer" 
containerID="be9df9297d087d9b583ba3c8a236fca6fd4fd729e25496c50522e980d7021c09" Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.836973 5136 scope.go:117] "RemoveContainer" containerID="5e947f339491ac05ba12abc9cb95630dcf48840148917141c549dbda5ca4a25f" Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.852941 5136 scope.go:117] "RemoveContainer" containerID="22bd79d8d32272633a42a92ee1e9e96d3d3259073a33ca0ea587ca787429e836" Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.884499 5136 scope.go:117] "RemoveContainer" containerID="dd50ea3e8d708d6e3b7b256a2ea07c9211cc4921a494661346061b12daf9a3f3" Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.913228 5136 scope.go:117] "RemoveContainer" containerID="ca34610c300fb63b0b8b7fa75b8b5e36ec0f7e9d15dbda229381348a1e3e55be" Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.938306 5136 scope.go:117] "RemoveContainer" containerID="685537caeff80758998e736f40d87da6358ae395ce8425cb44887ce77751a0c9" Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.954054 5136 scope.go:117] "RemoveContainer" containerID="32a4b8b42d71b772e9ef90a830d8bb2691b008e79e6ac5eedc1a261ab6fb23b2" Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.968078 5136 scope.go:117] "RemoveContainer" containerID="8f39b26d5a3a98eb4a0bd3688d06d7a44225eee0ee50099fc60fd5816beb4256" Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.986953 5136 scope.go:117] "RemoveContainer" containerID="517443469d4fcf677c53f3f830f5a94c22bc034822199c2c81fb70956a791274" Mar 20 07:18:45 crc kubenswrapper[5136]: I0320 07:18:45.822104 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:18:45 crc kubenswrapper[5136]: I0320 07:18:45.822663 5136 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:19:15 crc kubenswrapper[5136]: I0320 07:19:15.822586 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:19:15 crc kubenswrapper[5136]: I0320 07:19:15.823282 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:19:15 crc kubenswrapper[5136]: I0320 07:19:15.823390 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 07:19:15 crc kubenswrapper[5136]: I0320 07:19:15.824312 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 07:19:15 crc kubenswrapper[5136]: I0320 07:19:15.824421 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" 
containerID="cri-o://4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" gracePeriod=600 Mar 20 07:19:15 crc kubenswrapper[5136]: E0320 07:19:15.952436 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:19:16 crc kubenswrapper[5136]: I0320 07:19:16.933090 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" exitCode=0 Mar 20 07:19:16 crc kubenswrapper[5136]: I0320 07:19:16.933149 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d"} Mar 20 07:19:16 crc kubenswrapper[5136]: I0320 07:19:16.933484 5136 scope.go:117] "RemoveContainer" containerID="dd4323cb06cbe9a996dc58d915178240fb92871ebdc9b015588397e6f7268db6" Mar 20 07:19:16 crc kubenswrapper[5136]: I0320 07:19:16.934132 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:19:16 crc kubenswrapper[5136]: E0320 07:19:16.934583 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" 
podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:19:30 crc kubenswrapper[5136]: I0320 07:19:30.396712 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:19:30 crc kubenswrapper[5136]: E0320 07:19:30.397597 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:19:32 crc kubenswrapper[5136]: I0320 07:19:32.215576 5136 scope.go:117] "RemoveContainer" containerID="6963baa6fe7d9db38870a70531888cfee8f7d44c3eff1597da33cf867ee591c8" Mar 20 07:19:32 crc kubenswrapper[5136]: I0320 07:19:32.246454 5136 scope.go:117] "RemoveContainer" containerID="89325ee63cd0d5963c16a3cd15b18e01966cac4c73b616f8222bb05ec0a94fbe" Mar 20 07:19:32 crc kubenswrapper[5136]: I0320 07:19:32.297682 5136 scope.go:117] "RemoveContainer" containerID="14f94b6d1dd07b874e83aed25b1716c42ede7203afe8fc38064921b976f5c65d" Mar 20 07:19:32 crc kubenswrapper[5136]: I0320 07:19:32.334739 5136 scope.go:117] "RemoveContainer" containerID="01aa356b57e965220f79e7a24da86937ea014054be6bb673baf18c8bb2471582" Mar 20 07:19:32 crc kubenswrapper[5136]: I0320 07:19:32.351908 5136 scope.go:117] "RemoveContainer" containerID="3e0d0bab07ba893f2ec5b9f186f6e1ac58691443de33a6064347527effa3dc1f" Mar 20 07:19:32 crc kubenswrapper[5136]: I0320 07:19:32.368685 5136 scope.go:117] "RemoveContainer" containerID="9cff72e5160a41a8305e76e0221624a76437830286641f5e18a9ed4e7ae3e23a" Mar 20 07:19:32 crc kubenswrapper[5136]: I0320 07:19:32.438315 5136 scope.go:117] "RemoveContainer" containerID="76ecc1efaf0109117b2021b8dc8f89423ec738c34b9f16b5ea8ada8e167cdf99" Mar 20 07:19:32 crc 
kubenswrapper[5136]: I0320 07:19:32.465124 5136 scope.go:117] "RemoveContainer" containerID="22e326ee5e74b9ee2e3ad6076eac75a725689f97883ae8bb80d1de284edb7a74" Mar 20 07:19:32 crc kubenswrapper[5136]: I0320 07:19:32.485351 5136 scope.go:117] "RemoveContainer" containerID="0d231656eec1735b1a5bc9e9719bbfb1dc2f5b357bdf072349808ab12f944278" Mar 20 07:19:32 crc kubenswrapper[5136]: I0320 07:19:32.503346 5136 scope.go:117] "RemoveContainer" containerID="605e2f1b6fdab04852864ae8ba9a1933cc6fbe478b172080fd10f5d23b52f0fe" Mar 20 07:19:32 crc kubenswrapper[5136]: I0320 07:19:32.543272 5136 scope.go:117] "RemoveContainer" containerID="edc5e28eb62af197edd849dc06e38cdd2bebac736971174a120fd4afd95e52b2" Mar 20 07:19:43 crc kubenswrapper[5136]: I0320 07:19:43.396508 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:19:43 crc kubenswrapper[5136]: E0320 07:19:43.397188 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:19:58 crc kubenswrapper[5136]: I0320 07:19:58.406052 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:19:58 crc kubenswrapper[5136]: E0320 07:19:58.406900 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" 
podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:20:00 crc kubenswrapper[5136]: I0320 07:20:00.137615 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566520-gp87b"] Mar 20 07:20:00 crc kubenswrapper[5136]: E0320 07:20:00.137961 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e858127-6d5f-4dcd-828c-a6f7b892c4dc" containerName="oc" Mar 20 07:20:00 crc kubenswrapper[5136]: I0320 07:20:00.137978 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e858127-6d5f-4dcd-828c-a6f7b892c4dc" containerName="oc" Mar 20 07:20:00 crc kubenswrapper[5136]: I0320 07:20:00.138150 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e858127-6d5f-4dcd-828c-a6f7b892c4dc" containerName="oc" Mar 20 07:20:00 crc kubenswrapper[5136]: I0320 07:20:00.138677 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566520-gp87b" Mar 20 07:20:00 crc kubenswrapper[5136]: I0320 07:20:00.141598 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:20:00 crc kubenswrapper[5136]: I0320 07:20:00.141680 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:20:00 crc kubenswrapper[5136]: I0320 07:20:00.141912 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:20:00 crc kubenswrapper[5136]: I0320 07:20:00.144866 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566520-gp87b"] Mar 20 07:20:00 crc kubenswrapper[5136]: I0320 07:20:00.327237 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjsmr\" (UniqueName: \"kubernetes.io/projected/1114e255-4c25-4a30-88fb-4393c90a6d27-kube-api-access-zjsmr\") pod 
\"auto-csr-approver-29566520-gp87b\" (UID: \"1114e255-4c25-4a30-88fb-4393c90a6d27\") " pod="openshift-infra/auto-csr-approver-29566520-gp87b" Mar 20 07:20:00 crc kubenswrapper[5136]: I0320 07:20:00.428951 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjsmr\" (UniqueName: \"kubernetes.io/projected/1114e255-4c25-4a30-88fb-4393c90a6d27-kube-api-access-zjsmr\") pod \"auto-csr-approver-29566520-gp87b\" (UID: \"1114e255-4c25-4a30-88fb-4393c90a6d27\") " pod="openshift-infra/auto-csr-approver-29566520-gp87b" Mar 20 07:20:00 crc kubenswrapper[5136]: I0320 07:20:00.461858 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjsmr\" (UniqueName: \"kubernetes.io/projected/1114e255-4c25-4a30-88fb-4393c90a6d27-kube-api-access-zjsmr\") pod \"auto-csr-approver-29566520-gp87b\" (UID: \"1114e255-4c25-4a30-88fb-4393c90a6d27\") " pod="openshift-infra/auto-csr-approver-29566520-gp87b" Mar 20 07:20:00 crc kubenswrapper[5136]: I0320 07:20:00.479168 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566520-gp87b" Mar 20 07:20:00 crc kubenswrapper[5136]: I0320 07:20:00.718119 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566520-gp87b"] Mar 20 07:20:01 crc kubenswrapper[5136]: I0320 07:20:01.311186 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566520-gp87b" event={"ID":"1114e255-4c25-4a30-88fb-4393c90a6d27","Type":"ContainerStarted","Data":"271f92a3a1e2dadc93f97495f414e36c343b5e5ecbb24481158f24020bc36405"} Mar 20 07:20:02 crc kubenswrapper[5136]: I0320 07:20:02.319757 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566520-gp87b" event={"ID":"1114e255-4c25-4a30-88fb-4393c90a6d27","Type":"ContainerStarted","Data":"cbc3d1a89274343d759e3c647c542017f95e292a6f7b2eb7b7c31cedebd75f6f"} Mar 20 07:20:02 crc kubenswrapper[5136]: I0320 07:20:02.341900 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566520-gp87b" podStartSLOduration=1.159958189 podStartE2EDuration="2.341873929s" podCreationTimestamp="2026-03-20 07:20:00 +0000 UTC" firstStartedPulling="2026-03-20 07:20:00.72711392 +0000 UTC m=+1832.986425081" lastFinishedPulling="2026-03-20 07:20:01.90902966 +0000 UTC m=+1834.168340821" observedRunningTime="2026-03-20 07:20:02.332500548 +0000 UTC m=+1834.591811739" watchObservedRunningTime="2026-03-20 07:20:02.341873929 +0000 UTC m=+1834.601185090" Mar 20 07:20:03 crc kubenswrapper[5136]: I0320 07:20:03.334039 5136 generic.go:334] "Generic (PLEG): container finished" podID="1114e255-4c25-4a30-88fb-4393c90a6d27" containerID="cbc3d1a89274343d759e3c647c542017f95e292a6f7b2eb7b7c31cedebd75f6f" exitCode=0 Mar 20 07:20:03 crc kubenswrapper[5136]: I0320 07:20:03.334082 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566520-gp87b" 
event={"ID":"1114e255-4c25-4a30-88fb-4393c90a6d27","Type":"ContainerDied","Data":"cbc3d1a89274343d759e3c647c542017f95e292a6f7b2eb7b7c31cedebd75f6f"} Mar 20 07:20:04 crc kubenswrapper[5136]: I0320 07:20:04.652197 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566520-gp87b" Mar 20 07:20:04 crc kubenswrapper[5136]: I0320 07:20:04.795838 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjsmr\" (UniqueName: \"kubernetes.io/projected/1114e255-4c25-4a30-88fb-4393c90a6d27-kube-api-access-zjsmr\") pod \"1114e255-4c25-4a30-88fb-4393c90a6d27\" (UID: \"1114e255-4c25-4a30-88fb-4393c90a6d27\") " Mar 20 07:20:04 crc kubenswrapper[5136]: I0320 07:20:04.803300 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1114e255-4c25-4a30-88fb-4393c90a6d27-kube-api-access-zjsmr" (OuterVolumeSpecName: "kube-api-access-zjsmr") pod "1114e255-4c25-4a30-88fb-4393c90a6d27" (UID: "1114e255-4c25-4a30-88fb-4393c90a6d27"). InnerVolumeSpecName "kube-api-access-zjsmr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:20:04 crc kubenswrapper[5136]: I0320 07:20:04.897998 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjsmr\" (UniqueName: \"kubernetes.io/projected/1114e255-4c25-4a30-88fb-4393c90a6d27-kube-api-access-zjsmr\") on node \"crc\" DevicePath \"\"" Mar 20 07:20:05 crc kubenswrapper[5136]: I0320 07:20:05.354588 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566520-gp87b" event={"ID":"1114e255-4c25-4a30-88fb-4393c90a6d27","Type":"ContainerDied","Data":"271f92a3a1e2dadc93f97495f414e36c343b5e5ecbb24481158f24020bc36405"} Mar 20 07:20:05 crc kubenswrapper[5136]: I0320 07:20:05.354949 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="271f92a3a1e2dadc93f97495f414e36c343b5e5ecbb24481158f24020bc36405" Mar 20 07:20:05 crc kubenswrapper[5136]: I0320 07:20:05.354730 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566520-gp87b" Mar 20 07:20:05 crc kubenswrapper[5136]: I0320 07:20:05.393279 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566514-w8pvt"] Mar 20 07:20:05 crc kubenswrapper[5136]: I0320 07:20:05.398351 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566514-w8pvt"] Mar 20 07:20:06 crc kubenswrapper[5136]: I0320 07:20:06.405286 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f034b011-ac81-4ef1-aa8b-39164a6c98ee" path="/var/lib/kubelet/pods/f034b011-ac81-4ef1-aa8b-39164a6c98ee/volumes" Mar 20 07:20:10 crc kubenswrapper[5136]: I0320 07:20:10.396562 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:20:10 crc kubenswrapper[5136]: E0320 07:20:10.397304 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:20:22 crc kubenswrapper[5136]: I0320 07:20:22.397120 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:20:22 crc kubenswrapper[5136]: E0320 07:20:22.399343 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:20:32 crc kubenswrapper[5136]: I0320 07:20:32.666554 5136 scope.go:117] "RemoveContainer" containerID="27c023c35669b2ba848d9e65c7d0898f10c49020818f9d3f19dccfc3afaa8e46" Mar 20 07:20:32 crc kubenswrapper[5136]: I0320 07:20:32.698196 5136 scope.go:117] "RemoveContainer" containerID="0b0d8b9ccdc0ce4c2fbd89f7f74e2b08044fdede201e9fe4c0352ac82e9375a6" Mar 20 07:20:32 crc kubenswrapper[5136]: I0320 07:20:32.732151 5136 scope.go:117] "RemoveContainer" containerID="2b776c776ff30149306adf0b0b9812edd3bba700f5823572b3aaed091e1cf528" Mar 20 07:20:32 crc kubenswrapper[5136]: I0320 07:20:32.746957 5136 scope.go:117] "RemoveContainer" containerID="57f37d9f2fcaf7ecb1593abeb0dac4f77898bf18e2e7d2992aaecaca2cb60ac9" Mar 20 07:20:32 crc kubenswrapper[5136]: I0320 07:20:32.769782 5136 scope.go:117] "RemoveContainer" containerID="5b986819d9a90e1b85c9700743fca946f3bc072f13b32ee80b0ccb0986a0c382" Mar 20 07:20:32 crc kubenswrapper[5136]: I0320 07:20:32.783661 5136 
scope.go:117] "RemoveContainer" containerID="d40caae4293d07d1be57a3b0fe6a0c2358da7d3ca34831236dc80ae177c4c105" Mar 20 07:20:32 crc kubenswrapper[5136]: I0320 07:20:32.818755 5136 scope.go:117] "RemoveContainer" containerID="f7fabe81352183708a864623d8602cc2d8ea0c1282e771482eaebc4b633a5e02" Mar 20 07:20:32 crc kubenswrapper[5136]: I0320 07:20:32.835092 5136 scope.go:117] "RemoveContainer" containerID="a6ef812d133f600bf2d930bb51a86bd9525704f16b2a680bf46ad2719737b44f" Mar 20 07:20:32 crc kubenswrapper[5136]: I0320 07:20:32.871235 5136 scope.go:117] "RemoveContainer" containerID="4af2afde3b60e503cf744acf4fb08477b7ec46cb1b30cfb589608690a2df8849" Mar 20 07:20:32 crc kubenswrapper[5136]: I0320 07:20:32.886606 5136 scope.go:117] "RemoveContainer" containerID="d1609ae90ac31423489405692434f7f762e8aa11262621b19e053461b1226222" Mar 20 07:20:35 crc kubenswrapper[5136]: I0320 07:20:35.396554 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:20:35 crc kubenswrapper[5136]: E0320 07:20:35.397282 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:20:46 crc kubenswrapper[5136]: I0320 07:20:46.397126 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:20:46 crc kubenswrapper[5136]: E0320 07:20:46.397856 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:21:00 crc kubenswrapper[5136]: I0320 07:21:00.397288 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:21:00 crc kubenswrapper[5136]: E0320 07:21:00.398123 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:21:11 crc kubenswrapper[5136]: I0320 07:21:11.397120 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:21:11 crc kubenswrapper[5136]: E0320 07:21:11.398027 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:21:23 crc kubenswrapper[5136]: I0320 07:21:23.396541 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:21:23 crc kubenswrapper[5136]: E0320 07:21:23.397256 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:21:33 crc kubenswrapper[5136]: I0320 07:21:33.025960 5136 scope.go:117] "RemoveContainer" containerID="cc5a54a6935dd6e523205b586479d84179624ba24df417c663b90589e6d2673f" Mar 20 07:21:33 crc kubenswrapper[5136]: I0320 07:21:33.058848 5136 scope.go:117] "RemoveContainer" containerID="a35e3106c44fc687668b1f5ba46d5a5060fdc5acc5e49f69f4dc88d5ef142f17" Mar 20 07:21:33 crc kubenswrapper[5136]: I0320 07:21:33.127194 5136 scope.go:117] "RemoveContainer" containerID="ccb4f9c0c6dc989c486c61d0a17af6a9e3438c25ae843380545c453141823051" Mar 20 07:21:37 crc kubenswrapper[5136]: I0320 07:21:37.397480 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:21:37 crc kubenswrapper[5136]: E0320 07:21:37.398410 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:21:41 crc kubenswrapper[5136]: I0320 07:21:41.837395 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7sjkx"] Mar 20 07:21:41 crc kubenswrapper[5136]: E0320 07:21:41.838635 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1114e255-4c25-4a30-88fb-4393c90a6d27" containerName="oc" Mar 20 07:21:41 crc kubenswrapper[5136]: I0320 07:21:41.838650 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="1114e255-4c25-4a30-88fb-4393c90a6d27" 
containerName="oc" Mar 20 07:21:41 crc kubenswrapper[5136]: I0320 07:21:41.840722 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="1114e255-4c25-4a30-88fb-4393c90a6d27" containerName="oc" Mar 20 07:21:41 crc kubenswrapper[5136]: I0320 07:21:41.842414 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:41 crc kubenswrapper[5136]: I0320 07:21:41.859353 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7sjkx"] Mar 20 07:21:41 crc kubenswrapper[5136]: I0320 07:21:41.931913 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz7pp\" (UniqueName: \"kubernetes.io/projected/f1234585-e4eb-4797-ae7f-037d1124570e-kube-api-access-qz7pp\") pod \"community-operators-7sjkx\" (UID: \"f1234585-e4eb-4797-ae7f-037d1124570e\") " pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:41 crc kubenswrapper[5136]: I0320 07:21:41.931969 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1234585-e4eb-4797-ae7f-037d1124570e-catalog-content\") pod \"community-operators-7sjkx\" (UID: \"f1234585-e4eb-4797-ae7f-037d1124570e\") " pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:41 crc kubenswrapper[5136]: I0320 07:21:41.932018 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1234585-e4eb-4797-ae7f-037d1124570e-utilities\") pod \"community-operators-7sjkx\" (UID: \"f1234585-e4eb-4797-ae7f-037d1124570e\") " pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:42 crc kubenswrapper[5136]: I0320 07:21:42.033753 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz7pp\" 
(UniqueName: \"kubernetes.io/projected/f1234585-e4eb-4797-ae7f-037d1124570e-kube-api-access-qz7pp\") pod \"community-operators-7sjkx\" (UID: \"f1234585-e4eb-4797-ae7f-037d1124570e\") " pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:42 crc kubenswrapper[5136]: I0320 07:21:42.033838 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1234585-e4eb-4797-ae7f-037d1124570e-catalog-content\") pod \"community-operators-7sjkx\" (UID: \"f1234585-e4eb-4797-ae7f-037d1124570e\") " pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:42 crc kubenswrapper[5136]: I0320 07:21:42.033901 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1234585-e4eb-4797-ae7f-037d1124570e-utilities\") pod \"community-operators-7sjkx\" (UID: \"f1234585-e4eb-4797-ae7f-037d1124570e\") " pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:42 crc kubenswrapper[5136]: I0320 07:21:42.034445 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1234585-e4eb-4797-ae7f-037d1124570e-utilities\") pod \"community-operators-7sjkx\" (UID: \"f1234585-e4eb-4797-ae7f-037d1124570e\") " pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:42 crc kubenswrapper[5136]: I0320 07:21:42.034679 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1234585-e4eb-4797-ae7f-037d1124570e-catalog-content\") pod \"community-operators-7sjkx\" (UID: \"f1234585-e4eb-4797-ae7f-037d1124570e\") " pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:42 crc kubenswrapper[5136]: I0320 07:21:42.066623 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz7pp\" (UniqueName: 
\"kubernetes.io/projected/f1234585-e4eb-4797-ae7f-037d1124570e-kube-api-access-qz7pp\") pod \"community-operators-7sjkx\" (UID: \"f1234585-e4eb-4797-ae7f-037d1124570e\") " pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:42 crc kubenswrapper[5136]: I0320 07:21:42.180891 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:42 crc kubenswrapper[5136]: I0320 07:21:42.652402 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7sjkx"] Mar 20 07:21:43 crc kubenswrapper[5136]: I0320 07:21:43.214669 5136 generic.go:334] "Generic (PLEG): container finished" podID="f1234585-e4eb-4797-ae7f-037d1124570e" containerID="d7efa6515f8370e3cf4c97f0147eabb814aa835c61658d3cc3ca1b7cf8c8a3a4" exitCode=0 Mar 20 07:21:43 crc kubenswrapper[5136]: I0320 07:21:43.214732 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sjkx" event={"ID":"f1234585-e4eb-4797-ae7f-037d1124570e","Type":"ContainerDied","Data":"d7efa6515f8370e3cf4c97f0147eabb814aa835c61658d3cc3ca1b7cf8c8a3a4"} Mar 20 07:21:43 crc kubenswrapper[5136]: I0320 07:21:43.215007 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sjkx" event={"ID":"f1234585-e4eb-4797-ae7f-037d1124570e","Type":"ContainerStarted","Data":"bd8112d3153d61bdcad61a83c70d59f9a21ae4fb6be5303b783fda8fe212c04b"} Mar 20 07:21:43 crc kubenswrapper[5136]: I0320 07:21:43.216881 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 07:21:44 crc kubenswrapper[5136]: I0320 07:21:44.242325 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fsxr9"] Mar 20 07:21:44 crc kubenswrapper[5136]: I0320 07:21:44.244730 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:44 crc kubenswrapper[5136]: I0320 07:21:44.274585 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fsxr9"] Mar 20 07:21:44 crc kubenswrapper[5136]: I0320 07:21:44.365915 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-catalog-content\") pod \"certified-operators-fsxr9\" (UID: \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\") " pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:44 crc kubenswrapper[5136]: I0320 07:21:44.366395 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgpz8\" (UniqueName: \"kubernetes.io/projected/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-kube-api-access-rgpz8\") pod \"certified-operators-fsxr9\" (UID: \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\") " pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:44 crc kubenswrapper[5136]: I0320 07:21:44.366545 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-utilities\") pod \"certified-operators-fsxr9\" (UID: \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\") " pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:44 crc kubenswrapper[5136]: I0320 07:21:44.467401 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgpz8\" (UniqueName: \"kubernetes.io/projected/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-kube-api-access-rgpz8\") pod \"certified-operators-fsxr9\" (UID: \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\") " pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:44 crc kubenswrapper[5136]: I0320 07:21:44.467768 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-utilities\") pod \"certified-operators-fsxr9\" (UID: \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\") " pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:44 crc kubenswrapper[5136]: I0320 07:21:44.467795 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-catalog-content\") pod \"certified-operators-fsxr9\" (UID: \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\") " pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:44 crc kubenswrapper[5136]: I0320 07:21:44.468306 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-utilities\") pod \"certified-operators-fsxr9\" (UID: \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\") " pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:44 crc kubenswrapper[5136]: I0320 07:21:44.468611 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-catalog-content\") pod \"certified-operators-fsxr9\" (UID: \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\") " pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:44 crc kubenswrapper[5136]: I0320 07:21:44.501495 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgpz8\" (UniqueName: \"kubernetes.io/projected/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-kube-api-access-rgpz8\") pod \"certified-operators-fsxr9\" (UID: \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\") " pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:44 crc kubenswrapper[5136]: I0320 07:21:44.581863 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:45 crc kubenswrapper[5136]: I0320 07:21:45.071512 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fsxr9"] Mar 20 07:21:45 crc kubenswrapper[5136]: W0320 07:21:45.073210 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode64dddc6_3a07_405d_89ab_3c1a65fc7e40.slice/crio-50cee77aae038fb327618c5294d6b7d171fa7f846ace17f4fa9e1299940e9931 WatchSource:0}: Error finding container 50cee77aae038fb327618c5294d6b7d171fa7f846ace17f4fa9e1299940e9931: Status 404 returned error can't find the container with id 50cee77aae038fb327618c5294d6b7d171fa7f846ace17f4fa9e1299940e9931 Mar 20 07:21:45 crc kubenswrapper[5136]: I0320 07:21:45.263910 5136 generic.go:334] "Generic (PLEG): container finished" podID="e64dddc6-3a07-405d-89ab-3c1a65fc7e40" containerID="3e345b34d281dbfe3622a535cab7d7c5028e02bb480ce509ffed097c3a59c771" exitCode=0 Mar 20 07:21:45 crc kubenswrapper[5136]: I0320 07:21:45.263978 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsxr9" event={"ID":"e64dddc6-3a07-405d-89ab-3c1a65fc7e40","Type":"ContainerDied","Data":"3e345b34d281dbfe3622a535cab7d7c5028e02bb480ce509ffed097c3a59c771"} Mar 20 07:21:45 crc kubenswrapper[5136]: I0320 07:21:45.266153 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsxr9" event={"ID":"e64dddc6-3a07-405d-89ab-3c1a65fc7e40","Type":"ContainerStarted","Data":"50cee77aae038fb327618c5294d6b7d171fa7f846ace17f4fa9e1299940e9931"} Mar 20 07:21:45 crc kubenswrapper[5136]: I0320 07:21:45.268890 5136 generic.go:334] "Generic (PLEG): container finished" podID="f1234585-e4eb-4797-ae7f-037d1124570e" containerID="4683096035afa8f71db2df230789794b0de21e60b2b26b10014a83395863afac" exitCode=0 Mar 20 07:21:45 crc kubenswrapper[5136]: I0320 
07:21:45.268928 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sjkx" event={"ID":"f1234585-e4eb-4797-ae7f-037d1124570e","Type":"ContainerDied","Data":"4683096035afa8f71db2df230789794b0de21e60b2b26b10014a83395863afac"} Mar 20 07:21:46 crc kubenswrapper[5136]: I0320 07:21:46.279837 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sjkx" event={"ID":"f1234585-e4eb-4797-ae7f-037d1124570e","Type":"ContainerStarted","Data":"a4e4f7d5c91a6e8e883dd68acd4deedf154928dda8858a0564c49b6cb38a4c9d"} Mar 20 07:21:46 crc kubenswrapper[5136]: I0320 07:21:46.309966 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7sjkx" podStartSLOduration=2.829420921 podStartE2EDuration="5.309937959s" podCreationTimestamp="2026-03-20 07:21:41 +0000 UTC" firstStartedPulling="2026-03-20 07:21:43.216600625 +0000 UTC m=+1935.475911776" lastFinishedPulling="2026-03-20 07:21:45.697117653 +0000 UTC m=+1937.956428814" observedRunningTime="2026-03-20 07:21:46.300659492 +0000 UTC m=+1938.559970653" watchObservedRunningTime="2026-03-20 07:21:46.309937959 +0000 UTC m=+1938.569249150" Mar 20 07:21:47 crc kubenswrapper[5136]: I0320 07:21:47.288849 5136 generic.go:334] "Generic (PLEG): container finished" podID="e64dddc6-3a07-405d-89ab-3c1a65fc7e40" containerID="56520cd11d487f50ae1e5c860ce68a62ea017993b75f4b5262cbab53ccff5084" exitCode=0 Mar 20 07:21:47 crc kubenswrapper[5136]: I0320 07:21:47.288894 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsxr9" event={"ID":"e64dddc6-3a07-405d-89ab-3c1a65fc7e40","Type":"ContainerDied","Data":"56520cd11d487f50ae1e5c860ce68a62ea017993b75f4b5262cbab53ccff5084"} Mar 20 07:21:49 crc kubenswrapper[5136]: I0320 07:21:49.304068 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsxr9" 
event={"ID":"e64dddc6-3a07-405d-89ab-3c1a65fc7e40","Type":"ContainerStarted","Data":"97e7bd2460bf98be7c10edf76286e78ab6cf1f6cce3a04c9f4a991f2b217464f"} Mar 20 07:21:49 crc kubenswrapper[5136]: I0320 07:21:49.323242 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fsxr9" podStartSLOduration=1.5444437 podStartE2EDuration="5.323223137s" podCreationTimestamp="2026-03-20 07:21:44 +0000 UTC" firstStartedPulling="2026-03-20 07:21:45.265719149 +0000 UTC m=+1937.525030300" lastFinishedPulling="2026-03-20 07:21:49.044498586 +0000 UTC m=+1941.303809737" observedRunningTime="2026-03-20 07:21:49.320664348 +0000 UTC m=+1941.579975499" watchObservedRunningTime="2026-03-20 07:21:49.323223137 +0000 UTC m=+1941.582534298" Mar 20 07:21:51 crc kubenswrapper[5136]: I0320 07:21:51.397690 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:21:51 crc kubenswrapper[5136]: E0320 07:21:51.398291 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:21:52 crc kubenswrapper[5136]: I0320 07:21:52.182424 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:52 crc kubenswrapper[5136]: I0320 07:21:52.183089 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:52 crc kubenswrapper[5136]: I0320 07:21:52.238593 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:52 crc kubenswrapper[5136]: I0320 07:21:52.378392 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:54 crc kubenswrapper[5136]: I0320 07:21:54.583050 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:54 crc kubenswrapper[5136]: I0320 07:21:54.583146 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:54 crc kubenswrapper[5136]: I0320 07:21:54.634078 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7sjkx"] Mar 20 07:21:54 crc kubenswrapper[5136]: I0320 07:21:54.634477 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7sjkx" podUID="f1234585-e4eb-4797-ae7f-037d1124570e" containerName="registry-server" containerID="cri-o://a4e4f7d5c91a6e8e883dd68acd4deedf154928dda8858a0564c49b6cb38a4c9d" gracePeriod=2 Mar 20 07:21:54 crc kubenswrapper[5136]: I0320 07:21:54.639437 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.079345 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.228379 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz7pp\" (UniqueName: \"kubernetes.io/projected/f1234585-e4eb-4797-ae7f-037d1124570e-kube-api-access-qz7pp\") pod \"f1234585-e4eb-4797-ae7f-037d1124570e\" (UID: \"f1234585-e4eb-4797-ae7f-037d1124570e\") " Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.228508 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1234585-e4eb-4797-ae7f-037d1124570e-utilities\") pod \"f1234585-e4eb-4797-ae7f-037d1124570e\" (UID: \"f1234585-e4eb-4797-ae7f-037d1124570e\") " Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.228555 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1234585-e4eb-4797-ae7f-037d1124570e-catalog-content\") pod \"f1234585-e4eb-4797-ae7f-037d1124570e\" (UID: \"f1234585-e4eb-4797-ae7f-037d1124570e\") " Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.229801 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1234585-e4eb-4797-ae7f-037d1124570e-utilities" (OuterVolumeSpecName: "utilities") pod "f1234585-e4eb-4797-ae7f-037d1124570e" (UID: "f1234585-e4eb-4797-ae7f-037d1124570e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.236463 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1234585-e4eb-4797-ae7f-037d1124570e-kube-api-access-qz7pp" (OuterVolumeSpecName: "kube-api-access-qz7pp") pod "f1234585-e4eb-4797-ae7f-037d1124570e" (UID: "f1234585-e4eb-4797-ae7f-037d1124570e"). InnerVolumeSpecName "kube-api-access-qz7pp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.330751 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz7pp\" (UniqueName: \"kubernetes.io/projected/f1234585-e4eb-4797-ae7f-037d1124570e-kube-api-access-qz7pp\") on node \"crc\" DevicePath \"\"" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.330793 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1234585-e4eb-4797-ae7f-037d1124570e-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.355792 5136 generic.go:334] "Generic (PLEG): container finished" podID="f1234585-e4eb-4797-ae7f-037d1124570e" containerID="a4e4f7d5c91a6e8e883dd68acd4deedf154928dda8858a0564c49b6cb38a4c9d" exitCode=0 Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.355846 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sjkx" event={"ID":"f1234585-e4eb-4797-ae7f-037d1124570e","Type":"ContainerDied","Data":"a4e4f7d5c91a6e8e883dd68acd4deedf154928dda8858a0564c49b6cb38a4c9d"} Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.355890 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.355901 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sjkx" event={"ID":"f1234585-e4eb-4797-ae7f-037d1124570e","Type":"ContainerDied","Data":"bd8112d3153d61bdcad61a83c70d59f9a21ae4fb6be5303b783fda8fe212c04b"} Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.355923 5136 scope.go:117] "RemoveContainer" containerID="a4e4f7d5c91a6e8e883dd68acd4deedf154928dda8858a0564c49b6cb38a4c9d" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.390371 5136 scope.go:117] "RemoveContainer" containerID="4683096035afa8f71db2df230789794b0de21e60b2b26b10014a83395863afac" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.395259 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.418137 5136 scope.go:117] "RemoveContainer" containerID="d7efa6515f8370e3cf4c97f0147eabb814aa835c61658d3cc3ca1b7cf8c8a3a4" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.453865 5136 scope.go:117] "RemoveContainer" containerID="a4e4f7d5c91a6e8e883dd68acd4deedf154928dda8858a0564c49b6cb38a4c9d" Mar 20 07:21:55 crc kubenswrapper[5136]: E0320 07:21:55.454519 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4e4f7d5c91a6e8e883dd68acd4deedf154928dda8858a0564c49b6cb38a4c9d\": container with ID starting with a4e4f7d5c91a6e8e883dd68acd4deedf154928dda8858a0564c49b6cb38a4c9d not found: ID does not exist" containerID="a4e4f7d5c91a6e8e883dd68acd4deedf154928dda8858a0564c49b6cb38a4c9d" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.454606 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a4e4f7d5c91a6e8e883dd68acd4deedf154928dda8858a0564c49b6cb38a4c9d"} err="failed to get container status \"a4e4f7d5c91a6e8e883dd68acd4deedf154928dda8858a0564c49b6cb38a4c9d\": rpc error: code = NotFound desc = could not find container \"a4e4f7d5c91a6e8e883dd68acd4deedf154928dda8858a0564c49b6cb38a4c9d\": container with ID starting with a4e4f7d5c91a6e8e883dd68acd4deedf154928dda8858a0564c49b6cb38a4c9d not found: ID does not exist" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.454636 5136 scope.go:117] "RemoveContainer" containerID="4683096035afa8f71db2df230789794b0de21e60b2b26b10014a83395863afac" Mar 20 07:21:55 crc kubenswrapper[5136]: E0320 07:21:55.455007 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4683096035afa8f71db2df230789794b0de21e60b2b26b10014a83395863afac\": container with ID starting with 4683096035afa8f71db2df230789794b0de21e60b2b26b10014a83395863afac not found: ID does not exist" containerID="4683096035afa8f71db2df230789794b0de21e60b2b26b10014a83395863afac" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.455051 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4683096035afa8f71db2df230789794b0de21e60b2b26b10014a83395863afac"} err="failed to get container status \"4683096035afa8f71db2df230789794b0de21e60b2b26b10014a83395863afac\": rpc error: code = NotFound desc = could not find container \"4683096035afa8f71db2df230789794b0de21e60b2b26b10014a83395863afac\": container with ID starting with 4683096035afa8f71db2df230789794b0de21e60b2b26b10014a83395863afac not found: ID does not exist" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.455076 5136 scope.go:117] "RemoveContainer" containerID="d7efa6515f8370e3cf4c97f0147eabb814aa835c61658d3cc3ca1b7cf8c8a3a4" Mar 20 07:21:55 crc kubenswrapper[5136]: E0320 07:21:55.455411 5136 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d7efa6515f8370e3cf4c97f0147eabb814aa835c61658d3cc3ca1b7cf8c8a3a4\": container with ID starting with d7efa6515f8370e3cf4c97f0147eabb814aa835c61658d3cc3ca1b7cf8c8a3a4 not found: ID does not exist" containerID="d7efa6515f8370e3cf4c97f0147eabb814aa835c61658d3cc3ca1b7cf8c8a3a4" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.455448 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7efa6515f8370e3cf4c97f0147eabb814aa835c61658d3cc3ca1b7cf8c8a3a4"} err="failed to get container status \"d7efa6515f8370e3cf4c97f0147eabb814aa835c61658d3cc3ca1b7cf8c8a3a4\": rpc error: code = NotFound desc = could not find container \"d7efa6515f8370e3cf4c97f0147eabb814aa835c61658d3cc3ca1b7cf8c8a3a4\": container with ID starting with d7efa6515f8370e3cf4c97f0147eabb814aa835c61658d3cc3ca1b7cf8c8a3a4 not found: ID does not exist" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.550801 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1234585-e4eb-4797-ae7f-037d1124570e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1234585-e4eb-4797-ae7f-037d1124570e" (UID: "f1234585-e4eb-4797-ae7f-037d1124570e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.639218 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1234585-e4eb-4797-ae7f-037d1124570e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.692225 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7sjkx"] Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.700082 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7sjkx"] Mar 20 07:21:56 crc kubenswrapper[5136]: I0320 07:21:56.411723 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1234585-e4eb-4797-ae7f-037d1124570e" path="/var/lib/kubelet/pods/f1234585-e4eb-4797-ae7f-037d1124570e/volumes" Mar 20 07:21:57 crc kubenswrapper[5136]: I0320 07:21:57.226444 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fsxr9"] Mar 20 07:21:57 crc kubenswrapper[5136]: I0320 07:21:57.382301 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fsxr9" podUID="e64dddc6-3a07-405d-89ab-3c1a65fc7e40" containerName="registry-server" containerID="cri-o://97e7bd2460bf98be7c10edf76286e78ab6cf1f6cce3a04c9f4a991f2b217464f" gracePeriod=2 Mar 20 07:21:57 crc kubenswrapper[5136]: I0320 07:21:57.789588 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:57 crc kubenswrapper[5136]: I0320 07:21:57.877443 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-utilities\") pod \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\" (UID: \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\") " Mar 20 07:21:57 crc kubenswrapper[5136]: I0320 07:21:57.877511 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-catalog-content\") pod \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\" (UID: \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\") " Mar 20 07:21:57 crc kubenswrapper[5136]: I0320 07:21:57.877551 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgpz8\" (UniqueName: \"kubernetes.io/projected/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-kube-api-access-rgpz8\") pod \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\" (UID: \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\") " Mar 20 07:21:57 crc kubenswrapper[5136]: I0320 07:21:57.879738 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-utilities" (OuterVolumeSpecName: "utilities") pod "e64dddc6-3a07-405d-89ab-3c1a65fc7e40" (UID: "e64dddc6-3a07-405d-89ab-3c1a65fc7e40"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:21:57 crc kubenswrapper[5136]: I0320 07:21:57.883876 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-kube-api-access-rgpz8" (OuterVolumeSpecName: "kube-api-access-rgpz8") pod "e64dddc6-3a07-405d-89ab-3c1a65fc7e40" (UID: "e64dddc6-3a07-405d-89ab-3c1a65fc7e40"). InnerVolumeSpecName "kube-api-access-rgpz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:21:57 crc kubenswrapper[5136]: I0320 07:21:57.978590 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:21:57 crc kubenswrapper[5136]: I0320 07:21:57.978624 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgpz8\" (UniqueName: \"kubernetes.io/projected/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-kube-api-access-rgpz8\") on node \"crc\" DevicePath \"\"" Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.206566 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e64dddc6-3a07-405d-89ab-3c1a65fc7e40" (UID: "e64dddc6-3a07-405d-89ab-3c1a65fc7e40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.282382 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.393146 5136 generic.go:334] "Generic (PLEG): container finished" podID="e64dddc6-3a07-405d-89ab-3c1a65fc7e40" containerID="97e7bd2460bf98be7c10edf76286e78ab6cf1f6cce3a04c9f4a991f2b217464f" exitCode=0 Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.393218 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsxr9" event={"ID":"e64dddc6-3a07-405d-89ab-3c1a65fc7e40","Type":"ContainerDied","Data":"97e7bd2460bf98be7c10edf76286e78ab6cf1f6cce3a04c9f4a991f2b217464f"} Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.393258 5136 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-fsxr9" event={"ID":"e64dddc6-3a07-405d-89ab-3c1a65fc7e40","Type":"ContainerDied","Data":"50cee77aae038fb327618c5294d6b7d171fa7f846ace17f4fa9e1299940e9931"} Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.393288 5136 scope.go:117] "RemoveContainer" containerID="97e7bd2460bf98be7c10edf76286e78ab6cf1f6cce3a04c9f4a991f2b217464f" Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.393311 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.426672 5136 scope.go:117] "RemoveContainer" containerID="56520cd11d487f50ae1e5c860ce68a62ea017993b75f4b5262cbab53ccff5084" Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.456482 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fsxr9"] Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.469065 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fsxr9"] Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.473114 5136 scope.go:117] "RemoveContainer" containerID="3e345b34d281dbfe3622a535cab7d7c5028e02bb480ce509ffed097c3a59c771" Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.498617 5136 scope.go:117] "RemoveContainer" containerID="97e7bd2460bf98be7c10edf76286e78ab6cf1f6cce3a04c9f4a991f2b217464f" Mar 20 07:21:58 crc kubenswrapper[5136]: E0320 07:21:58.499268 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97e7bd2460bf98be7c10edf76286e78ab6cf1f6cce3a04c9f4a991f2b217464f\": container with ID starting with 97e7bd2460bf98be7c10edf76286e78ab6cf1f6cce3a04c9f4a991f2b217464f not found: ID does not exist" containerID="97e7bd2460bf98be7c10edf76286e78ab6cf1f6cce3a04c9f4a991f2b217464f" Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 
07:21:58.499335 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97e7bd2460bf98be7c10edf76286e78ab6cf1f6cce3a04c9f4a991f2b217464f"} err="failed to get container status \"97e7bd2460bf98be7c10edf76286e78ab6cf1f6cce3a04c9f4a991f2b217464f\": rpc error: code = NotFound desc = could not find container \"97e7bd2460bf98be7c10edf76286e78ab6cf1f6cce3a04c9f4a991f2b217464f\": container with ID starting with 97e7bd2460bf98be7c10edf76286e78ab6cf1f6cce3a04c9f4a991f2b217464f not found: ID does not exist" Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.499378 5136 scope.go:117] "RemoveContainer" containerID="56520cd11d487f50ae1e5c860ce68a62ea017993b75f4b5262cbab53ccff5084" Mar 20 07:21:58 crc kubenswrapper[5136]: E0320 07:21:58.499763 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56520cd11d487f50ae1e5c860ce68a62ea017993b75f4b5262cbab53ccff5084\": container with ID starting with 56520cd11d487f50ae1e5c860ce68a62ea017993b75f4b5262cbab53ccff5084 not found: ID does not exist" containerID="56520cd11d487f50ae1e5c860ce68a62ea017993b75f4b5262cbab53ccff5084" Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.499802 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56520cd11d487f50ae1e5c860ce68a62ea017993b75f4b5262cbab53ccff5084"} err="failed to get container status \"56520cd11d487f50ae1e5c860ce68a62ea017993b75f4b5262cbab53ccff5084\": rpc error: code = NotFound desc = could not find container \"56520cd11d487f50ae1e5c860ce68a62ea017993b75f4b5262cbab53ccff5084\": container with ID starting with 56520cd11d487f50ae1e5c860ce68a62ea017993b75f4b5262cbab53ccff5084 not found: ID does not exist" Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.499849 5136 scope.go:117] "RemoveContainer" containerID="3e345b34d281dbfe3622a535cab7d7c5028e02bb480ce509ffed097c3a59c771" Mar 20 07:21:58 crc 
kubenswrapper[5136]: E0320 07:21:58.500454 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e345b34d281dbfe3622a535cab7d7c5028e02bb480ce509ffed097c3a59c771\": container with ID starting with 3e345b34d281dbfe3622a535cab7d7c5028e02bb480ce509ffed097c3a59c771 not found: ID does not exist" containerID="3e345b34d281dbfe3622a535cab7d7c5028e02bb480ce509ffed097c3a59c771" Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.500478 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e345b34d281dbfe3622a535cab7d7c5028e02bb480ce509ffed097c3a59c771"} err="failed to get container status \"3e345b34d281dbfe3622a535cab7d7c5028e02bb480ce509ffed097c3a59c771\": rpc error: code = NotFound desc = could not find container \"3e345b34d281dbfe3622a535cab7d7c5028e02bb480ce509ffed097c3a59c771\": container with ID starting with 3e345b34d281dbfe3622a535cab7d7c5028e02bb480ce509ffed097c3a59c771 not found: ID does not exist" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.148056 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566522-6qksf"] Mar 20 07:22:00 crc kubenswrapper[5136]: E0320 07:22:00.148597 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1234585-e4eb-4797-ae7f-037d1124570e" containerName="extract-utilities" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.148609 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1234585-e4eb-4797-ae7f-037d1124570e" containerName="extract-utilities" Mar 20 07:22:00 crc kubenswrapper[5136]: E0320 07:22:00.148619 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e64dddc6-3a07-405d-89ab-3c1a65fc7e40" containerName="extract-utilities" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.148625 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e64dddc6-3a07-405d-89ab-3c1a65fc7e40" 
containerName="extract-utilities" Mar 20 07:22:00 crc kubenswrapper[5136]: E0320 07:22:00.148636 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1234585-e4eb-4797-ae7f-037d1124570e" containerName="registry-server" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.148642 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1234585-e4eb-4797-ae7f-037d1124570e" containerName="registry-server" Mar 20 07:22:00 crc kubenswrapper[5136]: E0320 07:22:00.148658 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e64dddc6-3a07-405d-89ab-3c1a65fc7e40" containerName="registry-server" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.148664 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e64dddc6-3a07-405d-89ab-3c1a65fc7e40" containerName="registry-server" Mar 20 07:22:00 crc kubenswrapper[5136]: E0320 07:22:00.148673 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e64dddc6-3a07-405d-89ab-3c1a65fc7e40" containerName="extract-content" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.148680 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e64dddc6-3a07-405d-89ab-3c1a65fc7e40" containerName="extract-content" Mar 20 07:22:00 crc kubenswrapper[5136]: E0320 07:22:00.148694 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1234585-e4eb-4797-ae7f-037d1124570e" containerName="extract-content" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.148700 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1234585-e4eb-4797-ae7f-037d1124570e" containerName="extract-content" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.148849 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1234585-e4eb-4797-ae7f-037d1124570e" containerName="registry-server" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.148868 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e64dddc6-3a07-405d-89ab-3c1a65fc7e40" 
containerName="registry-server" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.149269 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566522-6qksf" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.154024 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.155148 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.155190 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.164455 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566522-6qksf"] Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.214653 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45tp5\" (UniqueName: \"kubernetes.io/projected/89e4d1fb-8e51-468f-877b-49847c583d53-kube-api-access-45tp5\") pod \"auto-csr-approver-29566522-6qksf\" (UID: \"89e4d1fb-8e51-468f-877b-49847c583d53\") " pod="openshift-infra/auto-csr-approver-29566522-6qksf" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.316338 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45tp5\" (UniqueName: \"kubernetes.io/projected/89e4d1fb-8e51-468f-877b-49847c583d53-kube-api-access-45tp5\") pod \"auto-csr-approver-29566522-6qksf\" (UID: \"89e4d1fb-8e51-468f-877b-49847c583d53\") " pod="openshift-infra/auto-csr-approver-29566522-6qksf" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.337447 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45tp5\" (UniqueName: 
\"kubernetes.io/projected/89e4d1fb-8e51-468f-877b-49847c583d53-kube-api-access-45tp5\") pod \"auto-csr-approver-29566522-6qksf\" (UID: \"89e4d1fb-8e51-468f-877b-49847c583d53\") " pod="openshift-infra/auto-csr-approver-29566522-6qksf" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.404663 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e64dddc6-3a07-405d-89ab-3c1a65fc7e40" path="/var/lib/kubelet/pods/e64dddc6-3a07-405d-89ab-3c1a65fc7e40/volumes" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.498447 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566522-6qksf" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.894534 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566522-6qksf"] Mar 20 07:22:01 crc kubenswrapper[5136]: I0320 07:22:01.416632 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566522-6qksf" event={"ID":"89e4d1fb-8e51-468f-877b-49847c583d53","Type":"ContainerStarted","Data":"afd17f0acb5bffbad866324cbf61d969f311e4fa1136dc552f5b8b946c820641"} Mar 20 07:22:02 crc kubenswrapper[5136]: I0320 07:22:02.396595 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:22:02 crc kubenswrapper[5136]: E0320 07:22:02.397086 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:22:02 crc kubenswrapper[5136]: I0320 07:22:02.424427 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29566522-6qksf" event={"ID":"89e4d1fb-8e51-468f-877b-49847c583d53","Type":"ContainerStarted","Data":"18ba546b64b0c89c0f6afc33df4ae25c09a3d7098c15cfd5a2ea63fa1ebde19e"} Mar 20 07:22:02 crc kubenswrapper[5136]: I0320 07:22:02.443684 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566522-6qksf" podStartSLOduration=1.239097502 podStartE2EDuration="2.443665742s" podCreationTimestamp="2026-03-20 07:22:00 +0000 UTC" firstStartedPulling="2026-03-20 07:22:00.904648757 +0000 UTC m=+1953.163959908" lastFinishedPulling="2026-03-20 07:22:02.109216997 +0000 UTC m=+1954.368528148" observedRunningTime="2026-03-20 07:22:02.4384291 +0000 UTC m=+1954.697740261" watchObservedRunningTime="2026-03-20 07:22:02.443665742 +0000 UTC m=+1954.702976913" Mar 20 07:22:03 crc kubenswrapper[5136]: I0320 07:22:03.433604 5136 generic.go:334] "Generic (PLEG): container finished" podID="89e4d1fb-8e51-468f-877b-49847c583d53" containerID="18ba546b64b0c89c0f6afc33df4ae25c09a3d7098c15cfd5a2ea63fa1ebde19e" exitCode=0 Mar 20 07:22:03 crc kubenswrapper[5136]: I0320 07:22:03.433684 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566522-6qksf" event={"ID":"89e4d1fb-8e51-468f-877b-49847c583d53","Type":"ContainerDied","Data":"18ba546b64b0c89c0f6afc33df4ae25c09a3d7098c15cfd5a2ea63fa1ebde19e"} Mar 20 07:22:04 crc kubenswrapper[5136]: I0320 07:22:04.727956 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566522-6qksf" Mar 20 07:22:04 crc kubenswrapper[5136]: I0320 07:22:04.879800 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45tp5\" (UniqueName: \"kubernetes.io/projected/89e4d1fb-8e51-468f-877b-49847c583d53-kube-api-access-45tp5\") pod \"89e4d1fb-8e51-468f-877b-49847c583d53\" (UID: \"89e4d1fb-8e51-468f-877b-49847c583d53\") " Mar 20 07:22:04 crc kubenswrapper[5136]: I0320 07:22:04.889129 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89e4d1fb-8e51-468f-877b-49847c583d53-kube-api-access-45tp5" (OuterVolumeSpecName: "kube-api-access-45tp5") pod "89e4d1fb-8e51-468f-877b-49847c583d53" (UID: "89e4d1fb-8e51-468f-877b-49847c583d53"). InnerVolumeSpecName "kube-api-access-45tp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:22:04 crc kubenswrapper[5136]: I0320 07:22:04.982121 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45tp5\" (UniqueName: \"kubernetes.io/projected/89e4d1fb-8e51-468f-877b-49847c583d53-kube-api-access-45tp5\") on node \"crc\" DevicePath \"\"" Mar 20 07:22:05 crc kubenswrapper[5136]: I0320 07:22:05.449364 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566522-6qksf" event={"ID":"89e4d1fb-8e51-468f-877b-49847c583d53","Type":"ContainerDied","Data":"afd17f0acb5bffbad866324cbf61d969f311e4fa1136dc552f5b8b946c820641"} Mar 20 07:22:05 crc kubenswrapper[5136]: I0320 07:22:05.449408 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afd17f0acb5bffbad866324cbf61d969f311e4fa1136dc552f5b8b946c820641" Mar 20 07:22:05 crc kubenswrapper[5136]: I0320 07:22:05.449477 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566522-6qksf" Mar 20 07:22:05 crc kubenswrapper[5136]: I0320 07:22:05.499698 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566516-2dnr7"] Mar 20 07:22:05 crc kubenswrapper[5136]: I0320 07:22:05.505175 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566516-2dnr7"] Mar 20 07:22:06 crc kubenswrapper[5136]: I0320 07:22:06.408761 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a1410b1-69b7-42b6-85c9-967dbbc05b08" path="/var/lib/kubelet/pods/3a1410b1-69b7-42b6-85c9-967dbbc05b08/volumes" Mar 20 07:22:13 crc kubenswrapper[5136]: I0320 07:22:13.397394 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:22:13 crc kubenswrapper[5136]: E0320 07:22:13.398315 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:22:27 crc kubenswrapper[5136]: I0320 07:22:27.397019 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:22:27 crc kubenswrapper[5136]: E0320 07:22:27.398108 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" 
podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:22:33 crc kubenswrapper[5136]: I0320 07:22:33.185028 5136 scope.go:117] "RemoveContainer" containerID="71a9b19bcf4bcf8c4a69410e7ffac0d108a4db9d76a7cd352479549f5c15e6f8" Mar 20 07:22:40 crc kubenswrapper[5136]: I0320 07:22:40.397286 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:22:40 crc kubenswrapper[5136]: E0320 07:22:40.398255 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:22:55 crc kubenswrapper[5136]: I0320 07:22:55.397320 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:22:55 crc kubenswrapper[5136]: E0320 07:22:55.398229 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:23:09 crc kubenswrapper[5136]: I0320 07:23:09.396526 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:23:09 crc kubenswrapper[5136]: E0320 07:23:09.397550 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:23:24 crc kubenswrapper[5136]: I0320 07:23:24.397021 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:23:24 crc kubenswrapper[5136]: E0320 07:23:24.397840 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:23:39 crc kubenswrapper[5136]: I0320 07:23:39.396540 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:23:39 crc kubenswrapper[5136]: E0320 07:23:39.398308 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:23:52 crc kubenswrapper[5136]: I0320 07:23:52.396726 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:23:52 crc kubenswrapper[5136]: E0320 07:23:52.398670 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:24:00 crc kubenswrapper[5136]: I0320 07:24:00.166232 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566524-rfjd7"] Mar 20 07:24:00 crc kubenswrapper[5136]: E0320 07:24:00.169135 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89e4d1fb-8e51-468f-877b-49847c583d53" containerName="oc" Mar 20 07:24:00 crc kubenswrapper[5136]: I0320 07:24:00.169327 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="89e4d1fb-8e51-468f-877b-49847c583d53" containerName="oc" Mar 20 07:24:00 crc kubenswrapper[5136]: I0320 07:24:00.169977 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="89e4d1fb-8e51-468f-877b-49847c583d53" containerName="oc" Mar 20 07:24:00 crc kubenswrapper[5136]: I0320 07:24:00.171102 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566524-rfjd7" Mar 20 07:24:00 crc kubenswrapper[5136]: I0320 07:24:00.175162 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:24:00 crc kubenswrapper[5136]: I0320 07:24:00.177486 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:24:00 crc kubenswrapper[5136]: I0320 07:24:00.178383 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:24:00 crc kubenswrapper[5136]: I0320 07:24:00.186972 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566524-rfjd7"] Mar 20 07:24:00 crc kubenswrapper[5136]: I0320 07:24:00.218611 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrzbz\" (UniqueName: \"kubernetes.io/projected/ef36bf3c-a18a-4fe4-829e-818ee309667e-kube-api-access-mrzbz\") pod \"auto-csr-approver-29566524-rfjd7\" (UID: \"ef36bf3c-a18a-4fe4-829e-818ee309667e\") " pod="openshift-infra/auto-csr-approver-29566524-rfjd7" Mar 20 07:24:00 crc kubenswrapper[5136]: I0320 07:24:00.320056 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrzbz\" (UniqueName: \"kubernetes.io/projected/ef36bf3c-a18a-4fe4-829e-818ee309667e-kube-api-access-mrzbz\") pod \"auto-csr-approver-29566524-rfjd7\" (UID: \"ef36bf3c-a18a-4fe4-829e-818ee309667e\") " pod="openshift-infra/auto-csr-approver-29566524-rfjd7" Mar 20 07:24:00 crc kubenswrapper[5136]: I0320 07:24:00.340938 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrzbz\" (UniqueName: \"kubernetes.io/projected/ef36bf3c-a18a-4fe4-829e-818ee309667e-kube-api-access-mrzbz\") pod \"auto-csr-approver-29566524-rfjd7\" (UID: \"ef36bf3c-a18a-4fe4-829e-818ee309667e\") " 
pod="openshift-infra/auto-csr-approver-29566524-rfjd7" Mar 20 07:24:00 crc kubenswrapper[5136]: I0320 07:24:00.501483 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566524-rfjd7" Mar 20 07:24:00 crc kubenswrapper[5136]: I0320 07:24:00.969538 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566524-rfjd7"] Mar 20 07:24:01 crc kubenswrapper[5136]: I0320 07:24:01.394311 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566524-rfjd7" event={"ID":"ef36bf3c-a18a-4fe4-829e-818ee309667e","Type":"ContainerStarted","Data":"3ebd5abb20f2da2e439c8d5e0c0df6c7761fbc80623e5e312bdfd00850a1290e"} Mar 20 07:24:02 crc kubenswrapper[5136]: I0320 07:24:02.404586 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566524-rfjd7" event={"ID":"ef36bf3c-a18a-4fe4-829e-818ee309667e","Type":"ContainerStarted","Data":"b0a37f409a618d7e4e5c53b904c0bfbe0cfe72b34fedd22d7f0f23af1ad97d50"} Mar 20 07:24:02 crc kubenswrapper[5136]: I0320 07:24:02.426988 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566524-rfjd7" podStartSLOduration=1.327750457 podStartE2EDuration="2.426961698s" podCreationTimestamp="2026-03-20 07:24:00 +0000 UTC" firstStartedPulling="2026-03-20 07:24:00.985652115 +0000 UTC m=+2073.244963276" lastFinishedPulling="2026-03-20 07:24:02.084863336 +0000 UTC m=+2074.344174517" observedRunningTime="2026-03-20 07:24:02.415116431 +0000 UTC m=+2074.674427582" watchObservedRunningTime="2026-03-20 07:24:02.426961698 +0000 UTC m=+2074.686272879" Mar 20 07:24:03 crc kubenswrapper[5136]: I0320 07:24:03.413740 5136 generic.go:334] "Generic (PLEG): container finished" podID="ef36bf3c-a18a-4fe4-829e-818ee309667e" containerID="b0a37f409a618d7e4e5c53b904c0bfbe0cfe72b34fedd22d7f0f23af1ad97d50" exitCode=0 Mar 20 07:24:03 crc 
kubenswrapper[5136]: I0320 07:24:03.413801 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566524-rfjd7" event={"ID":"ef36bf3c-a18a-4fe4-829e-818ee309667e","Type":"ContainerDied","Data":"b0a37f409a618d7e4e5c53b904c0bfbe0cfe72b34fedd22d7f0f23af1ad97d50"} Mar 20 07:24:04 crc kubenswrapper[5136]: I0320 07:24:04.743423 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566524-rfjd7" Mar 20 07:24:04 crc kubenswrapper[5136]: I0320 07:24:04.888445 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrzbz\" (UniqueName: \"kubernetes.io/projected/ef36bf3c-a18a-4fe4-829e-818ee309667e-kube-api-access-mrzbz\") pod \"ef36bf3c-a18a-4fe4-829e-818ee309667e\" (UID: \"ef36bf3c-a18a-4fe4-829e-818ee309667e\") " Mar 20 07:24:04 crc kubenswrapper[5136]: I0320 07:24:04.894551 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef36bf3c-a18a-4fe4-829e-818ee309667e-kube-api-access-mrzbz" (OuterVolumeSpecName: "kube-api-access-mrzbz") pod "ef36bf3c-a18a-4fe4-829e-818ee309667e" (UID: "ef36bf3c-a18a-4fe4-829e-818ee309667e"). InnerVolumeSpecName "kube-api-access-mrzbz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:24:04 crc kubenswrapper[5136]: I0320 07:24:04.990249 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrzbz\" (UniqueName: \"kubernetes.io/projected/ef36bf3c-a18a-4fe4-829e-818ee309667e-kube-api-access-mrzbz\") on node \"crc\" DevicePath \"\"" Mar 20 07:24:05 crc kubenswrapper[5136]: I0320 07:24:05.396389 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:24:05 crc kubenswrapper[5136]: E0320 07:24:05.397365 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:24:05 crc kubenswrapper[5136]: I0320 07:24:05.435542 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566524-rfjd7" event={"ID":"ef36bf3c-a18a-4fe4-829e-818ee309667e","Type":"ContainerDied","Data":"3ebd5abb20f2da2e439c8d5e0c0df6c7761fbc80623e5e312bdfd00850a1290e"} Mar 20 07:24:05 crc kubenswrapper[5136]: I0320 07:24:05.435798 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ebd5abb20f2da2e439c8d5e0c0df6c7761fbc80623e5e312bdfd00850a1290e" Mar 20 07:24:05 crc kubenswrapper[5136]: I0320 07:24:05.435642 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566524-rfjd7" Mar 20 07:24:05 crc kubenswrapper[5136]: I0320 07:24:05.498141 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566518-mjsfh"] Mar 20 07:24:05 crc kubenswrapper[5136]: I0320 07:24:05.503943 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566518-mjsfh"] Mar 20 07:24:06 crc kubenswrapper[5136]: I0320 07:24:06.405729 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e858127-6d5f-4dcd-828c-a6f7b892c4dc" path="/var/lib/kubelet/pods/6e858127-6d5f-4dcd-828c-a6f7b892c4dc/volumes" Mar 20 07:24:16 crc kubenswrapper[5136]: I0320 07:24:16.396647 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:24:17 crc kubenswrapper[5136]: I0320 07:24:17.540500 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"f5574ea7c501f0833443d13c0482979038eb9dd402ea30f057e61e453a0be9c6"} Mar 20 07:24:33 crc kubenswrapper[5136]: I0320 07:24:33.296272 5136 scope.go:117] "RemoveContainer" containerID="7af0b7f0c5b3f60705910e6fc269402a40ca078da17abc7ff26594b5a890f02e" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.057519 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-98d26"] Mar 20 07:25:00 crc kubenswrapper[5136]: E0320 07:25:00.058316 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef36bf3c-a18a-4fe4-829e-818ee309667e" containerName="oc" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.058328 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef36bf3c-a18a-4fe4-829e-818ee309667e" containerName="oc" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.058472 5136 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ef36bf3c-a18a-4fe4-829e-818ee309667e" containerName="oc" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.059604 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.074798 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-98d26"] Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.118900 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzvk9\" (UniqueName: \"kubernetes.io/projected/533b717e-2ea8-4f17-85b0-7520f8318f19-kube-api-access-dzvk9\") pod \"redhat-operators-98d26\" (UID: \"533b717e-2ea8-4f17-85b0-7520f8318f19\") " pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.118979 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/533b717e-2ea8-4f17-85b0-7520f8318f19-catalog-content\") pod \"redhat-operators-98d26\" (UID: \"533b717e-2ea8-4f17-85b0-7520f8318f19\") " pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.119052 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/533b717e-2ea8-4f17-85b0-7520f8318f19-utilities\") pod \"redhat-operators-98d26\" (UID: \"533b717e-2ea8-4f17-85b0-7520f8318f19\") " pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.220296 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzvk9\" (UniqueName: \"kubernetes.io/projected/533b717e-2ea8-4f17-85b0-7520f8318f19-kube-api-access-dzvk9\") pod 
\"redhat-operators-98d26\" (UID: \"533b717e-2ea8-4f17-85b0-7520f8318f19\") " pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.220363 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/533b717e-2ea8-4f17-85b0-7520f8318f19-catalog-content\") pod \"redhat-operators-98d26\" (UID: \"533b717e-2ea8-4f17-85b0-7520f8318f19\") " pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.220400 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/533b717e-2ea8-4f17-85b0-7520f8318f19-utilities\") pod \"redhat-operators-98d26\" (UID: \"533b717e-2ea8-4f17-85b0-7520f8318f19\") " pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.220987 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/533b717e-2ea8-4f17-85b0-7520f8318f19-catalog-content\") pod \"redhat-operators-98d26\" (UID: \"533b717e-2ea8-4f17-85b0-7520f8318f19\") " pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.221011 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/533b717e-2ea8-4f17-85b0-7520f8318f19-utilities\") pod \"redhat-operators-98d26\" (UID: \"533b717e-2ea8-4f17-85b0-7520f8318f19\") " pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.238442 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzvk9\" (UniqueName: \"kubernetes.io/projected/533b717e-2ea8-4f17-85b0-7520f8318f19-kube-api-access-dzvk9\") pod \"redhat-operators-98d26\" (UID: \"533b717e-2ea8-4f17-85b0-7520f8318f19\") " 
pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.400015 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.852457 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-98d26"] Mar 20 07:25:00 crc kubenswrapper[5136]: W0320 07:25:00.858708 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod533b717e_2ea8_4f17_85b0_7520f8318f19.slice/crio-f7f9bbe52eae67fc1c5b09090364785f468eb6750d5bb39f5585fb1093ff82be WatchSource:0}: Error finding container f7f9bbe52eae67fc1c5b09090364785f468eb6750d5bb39f5585fb1093ff82be: Status 404 returned error can't find the container with id f7f9bbe52eae67fc1c5b09090364785f468eb6750d5bb39f5585fb1093ff82be Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.924359 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98d26" event={"ID":"533b717e-2ea8-4f17-85b0-7520f8318f19","Type":"ContainerStarted","Data":"f7f9bbe52eae67fc1c5b09090364785f468eb6750d5bb39f5585fb1093ff82be"} Mar 20 07:25:01 crc kubenswrapper[5136]: I0320 07:25:01.934110 5136 generic.go:334] "Generic (PLEG): container finished" podID="533b717e-2ea8-4f17-85b0-7520f8318f19" containerID="28013d12a4cc7598286b7a03c9f0480cbf3bdf02af84482575e90e1638cf2db6" exitCode=0 Mar 20 07:25:01 crc kubenswrapper[5136]: I0320 07:25:01.934147 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98d26" event={"ID":"533b717e-2ea8-4f17-85b0-7520f8318f19","Type":"ContainerDied","Data":"28013d12a4cc7598286b7a03c9f0480cbf3bdf02af84482575e90e1638cf2db6"} Mar 20 07:25:03 crc kubenswrapper[5136]: I0320 07:25:03.948116 5136 generic.go:334] "Generic (PLEG): container finished" 
podID="533b717e-2ea8-4f17-85b0-7520f8318f19" containerID="2cf60110a6562b022e4b9b78ad3d0e636367340468cdff79445f7ad5ba8521a8" exitCode=0 Mar 20 07:25:03 crc kubenswrapper[5136]: I0320 07:25:03.948188 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98d26" event={"ID":"533b717e-2ea8-4f17-85b0-7520f8318f19","Type":"ContainerDied","Data":"2cf60110a6562b022e4b9b78ad3d0e636367340468cdff79445f7ad5ba8521a8"} Mar 20 07:25:04 crc kubenswrapper[5136]: I0320 07:25:04.957674 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98d26" event={"ID":"533b717e-2ea8-4f17-85b0-7520f8318f19","Type":"ContainerStarted","Data":"e4febc8a166705d764420ff0d52bb5f0a96e9daf9fc3b88b00c8f0ba42fb27f5"} Mar 20 07:25:04 crc kubenswrapper[5136]: I0320 07:25:04.979785 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-98d26" podStartSLOduration=2.533306111 podStartE2EDuration="4.979768406s" podCreationTimestamp="2026-03-20 07:25:00 +0000 UTC" firstStartedPulling="2026-03-20 07:25:01.937399628 +0000 UTC m=+2134.196710769" lastFinishedPulling="2026-03-20 07:25:04.383861883 +0000 UTC m=+2136.643173064" observedRunningTime="2026-03-20 07:25:04.972163091 +0000 UTC m=+2137.231474232" watchObservedRunningTime="2026-03-20 07:25:04.979768406 +0000 UTC m=+2137.239079557" Mar 20 07:25:10 crc kubenswrapper[5136]: I0320 07:25:10.405847 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:10 crc kubenswrapper[5136]: I0320 07:25:10.406377 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:11 crc kubenswrapper[5136]: I0320 07:25:11.462725 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-98d26" podUID="533b717e-2ea8-4f17-85b0-7520f8318f19" 
containerName="registry-server" probeResult="failure" output=< Mar 20 07:25:11 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 07:25:11 crc kubenswrapper[5136]: > Mar 20 07:25:20 crc kubenswrapper[5136]: I0320 07:25:20.447187 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:20 crc kubenswrapper[5136]: I0320 07:25:20.496997 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:20 crc kubenswrapper[5136]: I0320 07:25:20.695907 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-98d26"] Mar 20 07:25:22 crc kubenswrapper[5136]: I0320 07:25:22.084501 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-98d26" podUID="533b717e-2ea8-4f17-85b0-7520f8318f19" containerName="registry-server" containerID="cri-o://e4febc8a166705d764420ff0d52bb5f0a96e9daf9fc3b88b00c8f0ba42fb27f5" gracePeriod=2 Mar 20 07:25:22 crc kubenswrapper[5136]: I0320 07:25:22.523619 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:22 crc kubenswrapper[5136]: I0320 07:25:22.658673 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/533b717e-2ea8-4f17-85b0-7520f8318f19-utilities\") pod \"533b717e-2ea8-4f17-85b0-7520f8318f19\" (UID: \"533b717e-2ea8-4f17-85b0-7520f8318f19\") " Mar 20 07:25:22 crc kubenswrapper[5136]: I0320 07:25:22.658784 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzvk9\" (UniqueName: \"kubernetes.io/projected/533b717e-2ea8-4f17-85b0-7520f8318f19-kube-api-access-dzvk9\") pod \"533b717e-2ea8-4f17-85b0-7520f8318f19\" (UID: \"533b717e-2ea8-4f17-85b0-7520f8318f19\") " Mar 20 07:25:22 crc kubenswrapper[5136]: I0320 07:25:22.658837 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/533b717e-2ea8-4f17-85b0-7520f8318f19-catalog-content\") pod \"533b717e-2ea8-4f17-85b0-7520f8318f19\" (UID: \"533b717e-2ea8-4f17-85b0-7520f8318f19\") " Mar 20 07:25:22 crc kubenswrapper[5136]: I0320 07:25:22.659984 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/533b717e-2ea8-4f17-85b0-7520f8318f19-utilities" (OuterVolumeSpecName: "utilities") pod "533b717e-2ea8-4f17-85b0-7520f8318f19" (UID: "533b717e-2ea8-4f17-85b0-7520f8318f19"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:25:22 crc kubenswrapper[5136]: I0320 07:25:22.673116 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/533b717e-2ea8-4f17-85b0-7520f8318f19-kube-api-access-dzvk9" (OuterVolumeSpecName: "kube-api-access-dzvk9") pod "533b717e-2ea8-4f17-85b0-7520f8318f19" (UID: "533b717e-2ea8-4f17-85b0-7520f8318f19"). InnerVolumeSpecName "kube-api-access-dzvk9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:25:22 crc kubenswrapper[5136]: I0320 07:25:22.760858 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzvk9\" (UniqueName: \"kubernetes.io/projected/533b717e-2ea8-4f17-85b0-7520f8318f19-kube-api-access-dzvk9\") on node \"crc\" DevicePath \"\"" Mar 20 07:25:22 crc kubenswrapper[5136]: I0320 07:25:22.760885 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/533b717e-2ea8-4f17-85b0-7520f8318f19-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:25:22 crc kubenswrapper[5136]: I0320 07:25:22.803128 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/533b717e-2ea8-4f17-85b0-7520f8318f19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "533b717e-2ea8-4f17-85b0-7520f8318f19" (UID: "533b717e-2ea8-4f17-85b0-7520f8318f19"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:25:22 crc kubenswrapper[5136]: I0320 07:25:22.861642 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/533b717e-2ea8-4f17-85b0-7520f8318f19-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.093041 5136 generic.go:334] "Generic (PLEG): container finished" podID="533b717e-2ea8-4f17-85b0-7520f8318f19" containerID="e4febc8a166705d764420ff0d52bb5f0a96e9daf9fc3b88b00c8f0ba42fb27f5" exitCode=0 Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.093081 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.093086 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98d26" event={"ID":"533b717e-2ea8-4f17-85b0-7520f8318f19","Type":"ContainerDied","Data":"e4febc8a166705d764420ff0d52bb5f0a96e9daf9fc3b88b00c8f0ba42fb27f5"} Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.093137 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98d26" event={"ID":"533b717e-2ea8-4f17-85b0-7520f8318f19","Type":"ContainerDied","Data":"f7f9bbe52eae67fc1c5b09090364785f468eb6750d5bb39f5585fb1093ff82be"} Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.093153 5136 scope.go:117] "RemoveContainer" containerID="e4febc8a166705d764420ff0d52bb5f0a96e9daf9fc3b88b00c8f0ba42fb27f5" Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.129612 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-98d26"] Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.131969 5136 scope.go:117] "RemoveContainer" containerID="2cf60110a6562b022e4b9b78ad3d0e636367340468cdff79445f7ad5ba8521a8" Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.147662 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-98d26"] Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.163493 5136 scope.go:117] "RemoveContainer" containerID="28013d12a4cc7598286b7a03c9f0480cbf3bdf02af84482575e90e1638cf2db6" Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.193130 5136 scope.go:117] "RemoveContainer" containerID="e4febc8a166705d764420ff0d52bb5f0a96e9daf9fc3b88b00c8f0ba42fb27f5" Mar 20 07:25:23 crc kubenswrapper[5136]: E0320 07:25:23.193835 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e4febc8a166705d764420ff0d52bb5f0a96e9daf9fc3b88b00c8f0ba42fb27f5\": container with ID starting with e4febc8a166705d764420ff0d52bb5f0a96e9daf9fc3b88b00c8f0ba42fb27f5 not found: ID does not exist" containerID="e4febc8a166705d764420ff0d52bb5f0a96e9daf9fc3b88b00c8f0ba42fb27f5" Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.193888 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4febc8a166705d764420ff0d52bb5f0a96e9daf9fc3b88b00c8f0ba42fb27f5"} err="failed to get container status \"e4febc8a166705d764420ff0d52bb5f0a96e9daf9fc3b88b00c8f0ba42fb27f5\": rpc error: code = NotFound desc = could not find container \"e4febc8a166705d764420ff0d52bb5f0a96e9daf9fc3b88b00c8f0ba42fb27f5\": container with ID starting with e4febc8a166705d764420ff0d52bb5f0a96e9daf9fc3b88b00c8f0ba42fb27f5 not found: ID does not exist" Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.193922 5136 scope.go:117] "RemoveContainer" containerID="2cf60110a6562b022e4b9b78ad3d0e636367340468cdff79445f7ad5ba8521a8" Mar 20 07:25:23 crc kubenswrapper[5136]: E0320 07:25:23.194435 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cf60110a6562b022e4b9b78ad3d0e636367340468cdff79445f7ad5ba8521a8\": container with ID starting with 2cf60110a6562b022e4b9b78ad3d0e636367340468cdff79445f7ad5ba8521a8 not found: ID does not exist" containerID="2cf60110a6562b022e4b9b78ad3d0e636367340468cdff79445f7ad5ba8521a8" Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.194474 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cf60110a6562b022e4b9b78ad3d0e636367340468cdff79445f7ad5ba8521a8"} err="failed to get container status \"2cf60110a6562b022e4b9b78ad3d0e636367340468cdff79445f7ad5ba8521a8\": rpc error: code = NotFound desc = could not find container \"2cf60110a6562b022e4b9b78ad3d0e636367340468cdff79445f7ad5ba8521a8\": container with ID 
starting with 2cf60110a6562b022e4b9b78ad3d0e636367340468cdff79445f7ad5ba8521a8 not found: ID does not exist" Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.194500 5136 scope.go:117] "RemoveContainer" containerID="28013d12a4cc7598286b7a03c9f0480cbf3bdf02af84482575e90e1638cf2db6" Mar 20 07:25:23 crc kubenswrapper[5136]: E0320 07:25:23.195038 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28013d12a4cc7598286b7a03c9f0480cbf3bdf02af84482575e90e1638cf2db6\": container with ID starting with 28013d12a4cc7598286b7a03c9f0480cbf3bdf02af84482575e90e1638cf2db6 not found: ID does not exist" containerID="28013d12a4cc7598286b7a03c9f0480cbf3bdf02af84482575e90e1638cf2db6" Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.195112 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28013d12a4cc7598286b7a03c9f0480cbf3bdf02af84482575e90e1638cf2db6"} err="failed to get container status \"28013d12a4cc7598286b7a03c9f0480cbf3bdf02af84482575e90e1638cf2db6\": rpc error: code = NotFound desc = could not find container \"28013d12a4cc7598286b7a03c9f0480cbf3bdf02af84482575e90e1638cf2db6\": container with ID starting with 28013d12a4cc7598286b7a03c9f0480cbf3bdf02af84482575e90e1638cf2db6 not found: ID does not exist" Mar 20 07:25:24 crc kubenswrapper[5136]: I0320 07:25:24.406201 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="533b717e-2ea8-4f17-85b0-7520f8318f19" path="/var/lib/kubelet/pods/533b717e-2ea8-4f17-85b0-7520f8318f19/volumes" Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.177114 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566526-qp2cz"] Mar 20 07:26:00 crc kubenswrapper[5136]: E0320 07:26:00.178456 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533b717e-2ea8-4f17-85b0-7520f8318f19" containerName="extract-content" Mar 20 07:26:00 crc 
kubenswrapper[5136]: I0320 07:26:00.178477 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="533b717e-2ea8-4f17-85b0-7520f8318f19" containerName="extract-content" Mar 20 07:26:00 crc kubenswrapper[5136]: E0320 07:26:00.178497 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533b717e-2ea8-4f17-85b0-7520f8318f19" containerName="extract-utilities" Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.178510 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="533b717e-2ea8-4f17-85b0-7520f8318f19" containerName="extract-utilities" Mar 20 07:26:00 crc kubenswrapper[5136]: E0320 07:26:00.178543 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533b717e-2ea8-4f17-85b0-7520f8318f19" containerName="registry-server" Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.178557 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="533b717e-2ea8-4f17-85b0-7520f8318f19" containerName="registry-server" Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.178774 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="533b717e-2ea8-4f17-85b0-7520f8318f19" containerName="registry-server" Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.179460 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566526-qp2cz" Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.183520 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.183578 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.183887 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.188774 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566526-qp2cz"] Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.327349 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxjlx\" (UniqueName: \"kubernetes.io/projected/198ab1b0-b88b-4a70-aae0-650c78826519-kube-api-access-fxjlx\") pod \"auto-csr-approver-29566526-qp2cz\" (UID: \"198ab1b0-b88b-4a70-aae0-650c78826519\") " pod="openshift-infra/auto-csr-approver-29566526-qp2cz" Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.428771 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxjlx\" (UniqueName: \"kubernetes.io/projected/198ab1b0-b88b-4a70-aae0-650c78826519-kube-api-access-fxjlx\") pod \"auto-csr-approver-29566526-qp2cz\" (UID: \"198ab1b0-b88b-4a70-aae0-650c78826519\") " pod="openshift-infra/auto-csr-approver-29566526-qp2cz" Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.461734 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxjlx\" (UniqueName: \"kubernetes.io/projected/198ab1b0-b88b-4a70-aae0-650c78826519-kube-api-access-fxjlx\") pod \"auto-csr-approver-29566526-qp2cz\" (UID: \"198ab1b0-b88b-4a70-aae0-650c78826519\") " 
pod="openshift-infra/auto-csr-approver-29566526-qp2cz" Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.512855 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566526-qp2cz" Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.995980 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566526-qp2cz"] Mar 20 07:26:01 crc kubenswrapper[5136]: I0320 07:26:01.426222 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566526-qp2cz" event={"ID":"198ab1b0-b88b-4a70-aae0-650c78826519","Type":"ContainerStarted","Data":"194d752cbc87284ddf7365784184b26d6f665f22279e41cd0d0f5ea006c35f32"} Mar 20 07:26:03 crc kubenswrapper[5136]: I0320 07:26:03.456872 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566526-qp2cz" event={"ID":"198ab1b0-b88b-4a70-aae0-650c78826519","Type":"ContainerStarted","Data":"e9b25b5fd15a592a38a8ef02f8f0787d7aae0d22dff9f222f8354b7975e0efb4"} Mar 20 07:26:03 crc kubenswrapper[5136]: I0320 07:26:03.476923 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566526-qp2cz" podStartSLOduration=1.50593121 podStartE2EDuration="3.476902707s" podCreationTimestamp="2026-03-20 07:26:00 +0000 UTC" firstStartedPulling="2026-03-20 07:26:01.00707332 +0000 UTC m=+2193.266384511" lastFinishedPulling="2026-03-20 07:26:02.978044847 +0000 UTC m=+2195.237356008" observedRunningTime="2026-03-20 07:26:03.473541524 +0000 UTC m=+2195.732852675" watchObservedRunningTime="2026-03-20 07:26:03.476902707 +0000 UTC m=+2195.736213878" Mar 20 07:26:04 crc kubenswrapper[5136]: I0320 07:26:04.468012 5136 generic.go:334] "Generic (PLEG): container finished" podID="198ab1b0-b88b-4a70-aae0-650c78826519" containerID="e9b25b5fd15a592a38a8ef02f8f0787d7aae0d22dff9f222f8354b7975e0efb4" exitCode=0 Mar 20 07:26:04 crc 
kubenswrapper[5136]: I0320 07:26:04.468051 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566526-qp2cz" event={"ID":"198ab1b0-b88b-4a70-aae0-650c78826519","Type":"ContainerDied","Data":"e9b25b5fd15a592a38a8ef02f8f0787d7aae0d22dff9f222f8354b7975e0efb4"} Mar 20 07:26:05 crc kubenswrapper[5136]: I0320 07:26:05.833686 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566526-qp2cz" Mar 20 07:26:06 crc kubenswrapper[5136]: I0320 07:26:06.032904 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxjlx\" (UniqueName: \"kubernetes.io/projected/198ab1b0-b88b-4a70-aae0-650c78826519-kube-api-access-fxjlx\") pod \"198ab1b0-b88b-4a70-aae0-650c78826519\" (UID: \"198ab1b0-b88b-4a70-aae0-650c78826519\") " Mar 20 07:26:06 crc kubenswrapper[5136]: I0320 07:26:06.041923 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/198ab1b0-b88b-4a70-aae0-650c78826519-kube-api-access-fxjlx" (OuterVolumeSpecName: "kube-api-access-fxjlx") pod "198ab1b0-b88b-4a70-aae0-650c78826519" (UID: "198ab1b0-b88b-4a70-aae0-650c78826519"). InnerVolumeSpecName "kube-api-access-fxjlx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:26:06 crc kubenswrapper[5136]: I0320 07:26:06.134834 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxjlx\" (UniqueName: \"kubernetes.io/projected/198ab1b0-b88b-4a70-aae0-650c78826519-kube-api-access-fxjlx\") on node \"crc\" DevicePath \"\"" Mar 20 07:26:06 crc kubenswrapper[5136]: I0320 07:26:06.484708 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566526-qp2cz" event={"ID":"198ab1b0-b88b-4a70-aae0-650c78826519","Type":"ContainerDied","Data":"194d752cbc87284ddf7365784184b26d6f665f22279e41cd0d0f5ea006c35f32"} Mar 20 07:26:06 crc kubenswrapper[5136]: I0320 07:26:06.484752 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566526-qp2cz" Mar 20 07:26:06 crc kubenswrapper[5136]: I0320 07:26:06.484778 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="194d752cbc87284ddf7365784184b26d6f665f22279e41cd0d0f5ea006c35f32" Mar 20 07:26:06 crc kubenswrapper[5136]: I0320 07:26:06.538524 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566520-gp87b"] Mar 20 07:26:06 crc kubenswrapper[5136]: I0320 07:26:06.546673 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566520-gp87b"] Mar 20 07:26:08 crc kubenswrapper[5136]: I0320 07:26:08.406789 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1114e255-4c25-4a30-88fb-4393c90a6d27" path="/var/lib/kubelet/pods/1114e255-4c25-4a30-88fb-4393c90a6d27/volumes" Mar 20 07:26:33 crc kubenswrapper[5136]: I0320 07:26:33.397132 5136 scope.go:117] "RemoveContainer" containerID="cbc3d1a89274343d759e3c647c542017f95e292a6f7b2eb7b7c31cedebd75f6f" Mar 20 07:26:45 crc kubenswrapper[5136]: I0320 07:26:45.821881 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:26:45 crc kubenswrapper[5136]: I0320 07:26:45.822646 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:27:03 crc kubenswrapper[5136]: I0320 07:27:03.860923 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6qtd9"] Mar 20 07:27:03 crc kubenswrapper[5136]: E0320 07:27:03.862025 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="198ab1b0-b88b-4a70-aae0-650c78826519" containerName="oc" Mar 20 07:27:03 crc kubenswrapper[5136]: I0320 07:27:03.862051 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="198ab1b0-b88b-4a70-aae0-650c78826519" containerName="oc" Mar 20 07:27:03 crc kubenswrapper[5136]: I0320 07:27:03.862311 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="198ab1b0-b88b-4a70-aae0-650c78826519" containerName="oc" Mar 20 07:27:03 crc kubenswrapper[5136]: I0320 07:27:03.864139 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:03 crc kubenswrapper[5136]: I0320 07:27:03.879096 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qtd9"] Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.001531 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-utilities\") pod \"redhat-marketplace-6qtd9\" (UID: \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\") " pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.001611 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-catalog-content\") pod \"redhat-marketplace-6qtd9\" (UID: \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\") " pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.001684 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv8nv\" (UniqueName: \"kubernetes.io/projected/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-kube-api-access-jv8nv\") pod \"redhat-marketplace-6qtd9\" (UID: \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\") " pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.103213 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-utilities\") pod \"redhat-marketplace-6qtd9\" (UID: \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\") " pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.103260 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-catalog-content\") pod \"redhat-marketplace-6qtd9\" (UID: \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\") " pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.103292 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv8nv\" (UniqueName: \"kubernetes.io/projected/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-kube-api-access-jv8nv\") pod \"redhat-marketplace-6qtd9\" (UID: \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\") " pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.103732 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-utilities\") pod \"redhat-marketplace-6qtd9\" (UID: \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\") " pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.103878 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-catalog-content\") pod \"redhat-marketplace-6qtd9\" (UID: \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\") " pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.121624 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv8nv\" (UniqueName: \"kubernetes.io/projected/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-kube-api-access-jv8nv\") pod \"redhat-marketplace-6qtd9\" (UID: \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\") " pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.199079 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.648702 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qtd9"] Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.987713 5136 generic.go:334] "Generic (PLEG): container finished" podID="5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" containerID="2fdce7ecbb81964ed5aca79d62fe826d408a93678e905eb1a94ad0f8d9ffd41d" exitCode=0 Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.987751 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qtd9" event={"ID":"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7","Type":"ContainerDied","Data":"2fdce7ecbb81964ed5aca79d62fe826d408a93678e905eb1a94ad0f8d9ffd41d"} Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.987778 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qtd9" event={"ID":"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7","Type":"ContainerStarted","Data":"8c8a7c0d79ff4de02e1fafa971deb807b56f56fea37e588456a8b2dd66558e0d"} Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.989899 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 07:27:05 crc kubenswrapper[5136]: I0320 07:27:05.998195 5136 generic.go:334] "Generic (PLEG): container finished" podID="5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" containerID="3502a6b0861a74870f56ab3b29d981e599f698301fc14dec5980f4ae99d0b01f" exitCode=0 Mar 20 07:27:05 crc kubenswrapper[5136]: I0320 07:27:05.998478 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qtd9" event={"ID":"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7","Type":"ContainerDied","Data":"3502a6b0861a74870f56ab3b29d981e599f698301fc14dec5980f4ae99d0b01f"} Mar 20 07:27:07 crc kubenswrapper[5136]: I0320 07:27:07.006574 5136 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-6qtd9" event={"ID":"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7","Type":"ContainerStarted","Data":"c49e5adfaf783e06bad0415f74f43b3b6fe84504a805313f8f3100988eef7d9c"} Mar 20 07:27:07 crc kubenswrapper[5136]: I0320 07:27:07.026148 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6qtd9" podStartSLOduration=2.401815712 podStartE2EDuration="4.026123127s" podCreationTimestamp="2026-03-20 07:27:03 +0000 UTC" firstStartedPulling="2026-03-20 07:27:04.989692073 +0000 UTC m=+2257.249003224" lastFinishedPulling="2026-03-20 07:27:06.613999488 +0000 UTC m=+2258.873310639" observedRunningTime="2026-03-20 07:27:07.024715823 +0000 UTC m=+2259.284027054" watchObservedRunningTime="2026-03-20 07:27:07.026123127 +0000 UTC m=+2259.285434318" Mar 20 07:27:14 crc kubenswrapper[5136]: I0320 07:27:14.200206 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:14 crc kubenswrapper[5136]: I0320 07:27:14.200724 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:14 crc kubenswrapper[5136]: I0320 07:27:14.249565 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:15 crc kubenswrapper[5136]: I0320 07:27:15.106447 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:15 crc kubenswrapper[5136]: I0320 07:27:15.184042 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qtd9"] Mar 20 07:27:15 crc kubenswrapper[5136]: I0320 07:27:15.822267 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:27:15 crc kubenswrapper[5136]: I0320 07:27:15.822352 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:27:17 crc kubenswrapper[5136]: I0320 07:27:17.079789 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6qtd9" podUID="5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" containerName="registry-server" containerID="cri-o://c49e5adfaf783e06bad0415f74f43b3b6fe84504a805313f8f3100988eef7d9c" gracePeriod=2 Mar 20 07:27:17 crc kubenswrapper[5136]: I0320 07:27:17.615970 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:17 crc kubenswrapper[5136]: I0320 07:27:17.696244 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv8nv\" (UniqueName: \"kubernetes.io/projected/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-kube-api-access-jv8nv\") pod \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\" (UID: \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\") " Mar 20 07:27:17 crc kubenswrapper[5136]: I0320 07:27:17.696296 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-catalog-content\") pod \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\" (UID: \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\") " Mar 20 07:27:17 crc kubenswrapper[5136]: I0320 07:27:17.696372 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-utilities\") pod \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\" (UID: \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\") " Mar 20 07:27:17 crc kubenswrapper[5136]: I0320 07:27:17.697178 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-utilities" (OuterVolumeSpecName: "utilities") pod "5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" (UID: "5ecae231-ad48-4b41-ac45-5d2b0bbe46e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:27:17 crc kubenswrapper[5136]: I0320 07:27:17.701798 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-kube-api-access-jv8nv" (OuterVolumeSpecName: "kube-api-access-jv8nv") pod "5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" (UID: "5ecae231-ad48-4b41-ac45-5d2b0bbe46e7"). InnerVolumeSpecName "kube-api-access-jv8nv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:27:17 crc kubenswrapper[5136]: I0320 07:27:17.742871 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" (UID: "5ecae231-ad48-4b41-ac45-5d2b0bbe46e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:27:17 crc kubenswrapper[5136]: I0320 07:27:17.798356 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:27:17 crc kubenswrapper[5136]: I0320 07:27:17.798398 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv8nv\" (UniqueName: \"kubernetes.io/projected/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-kube-api-access-jv8nv\") on node \"crc\" DevicePath \"\"" Mar 20 07:27:17 crc kubenswrapper[5136]: I0320 07:27:17.798417 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.089385 5136 generic.go:334] "Generic (PLEG): container finished" podID="5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" containerID="c49e5adfaf783e06bad0415f74f43b3b6fe84504a805313f8f3100988eef7d9c" exitCode=0 Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.089456 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qtd9" event={"ID":"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7","Type":"ContainerDied","Data":"c49e5adfaf783e06bad0415f74f43b3b6fe84504a805313f8f3100988eef7d9c"} Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.089492 5136 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.089504 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qtd9" event={"ID":"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7","Type":"ContainerDied","Data":"8c8a7c0d79ff4de02e1fafa971deb807b56f56fea37e588456a8b2dd66558e0d"} Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.089539 5136 scope.go:117] "RemoveContainer" containerID="c49e5adfaf783e06bad0415f74f43b3b6fe84504a805313f8f3100988eef7d9c" Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.120069 5136 scope.go:117] "RemoveContainer" containerID="3502a6b0861a74870f56ab3b29d981e599f698301fc14dec5980f4ae99d0b01f" Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.135253 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qtd9"] Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.141487 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qtd9"] Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.149880 5136 scope.go:117] "RemoveContainer" containerID="2fdce7ecbb81964ed5aca79d62fe826d408a93678e905eb1a94ad0f8d9ffd41d" Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.181404 5136 scope.go:117] "RemoveContainer" containerID="c49e5adfaf783e06bad0415f74f43b3b6fe84504a805313f8f3100988eef7d9c" Mar 20 07:27:18 crc kubenswrapper[5136]: E0320 07:27:18.181967 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c49e5adfaf783e06bad0415f74f43b3b6fe84504a805313f8f3100988eef7d9c\": container with ID starting with c49e5adfaf783e06bad0415f74f43b3b6fe84504a805313f8f3100988eef7d9c not found: ID does not exist" containerID="c49e5adfaf783e06bad0415f74f43b3b6fe84504a805313f8f3100988eef7d9c" Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.182008 5136 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c49e5adfaf783e06bad0415f74f43b3b6fe84504a805313f8f3100988eef7d9c"} err="failed to get container status \"c49e5adfaf783e06bad0415f74f43b3b6fe84504a805313f8f3100988eef7d9c\": rpc error: code = NotFound desc = could not find container \"c49e5adfaf783e06bad0415f74f43b3b6fe84504a805313f8f3100988eef7d9c\": container with ID starting with c49e5adfaf783e06bad0415f74f43b3b6fe84504a805313f8f3100988eef7d9c not found: ID does not exist" Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.182050 5136 scope.go:117] "RemoveContainer" containerID="3502a6b0861a74870f56ab3b29d981e599f698301fc14dec5980f4ae99d0b01f" Mar 20 07:27:18 crc kubenswrapper[5136]: E0320 07:27:18.182314 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3502a6b0861a74870f56ab3b29d981e599f698301fc14dec5980f4ae99d0b01f\": container with ID starting with 3502a6b0861a74870f56ab3b29d981e599f698301fc14dec5980f4ae99d0b01f not found: ID does not exist" containerID="3502a6b0861a74870f56ab3b29d981e599f698301fc14dec5980f4ae99d0b01f" Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.182351 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3502a6b0861a74870f56ab3b29d981e599f698301fc14dec5980f4ae99d0b01f"} err="failed to get container status \"3502a6b0861a74870f56ab3b29d981e599f698301fc14dec5980f4ae99d0b01f\": rpc error: code = NotFound desc = could not find container \"3502a6b0861a74870f56ab3b29d981e599f698301fc14dec5980f4ae99d0b01f\": container with ID starting with 3502a6b0861a74870f56ab3b29d981e599f698301fc14dec5980f4ae99d0b01f not found: ID does not exist" Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.182377 5136 scope.go:117] "RemoveContainer" containerID="2fdce7ecbb81964ed5aca79d62fe826d408a93678e905eb1a94ad0f8d9ffd41d" Mar 20 07:27:18 crc kubenswrapper[5136]: E0320 
07:27:18.182725 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fdce7ecbb81964ed5aca79d62fe826d408a93678e905eb1a94ad0f8d9ffd41d\": container with ID starting with 2fdce7ecbb81964ed5aca79d62fe826d408a93678e905eb1a94ad0f8d9ffd41d not found: ID does not exist" containerID="2fdce7ecbb81964ed5aca79d62fe826d408a93678e905eb1a94ad0f8d9ffd41d" Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.182745 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fdce7ecbb81964ed5aca79d62fe826d408a93678e905eb1a94ad0f8d9ffd41d"} err="failed to get container status \"2fdce7ecbb81964ed5aca79d62fe826d408a93678e905eb1a94ad0f8d9ffd41d\": rpc error: code = NotFound desc = could not find container \"2fdce7ecbb81964ed5aca79d62fe826d408a93678e905eb1a94ad0f8d9ffd41d\": container with ID starting with 2fdce7ecbb81964ed5aca79d62fe826d408a93678e905eb1a94ad0f8d9ffd41d not found: ID does not exist" Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.407861 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" path="/var/lib/kubelet/pods/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7/volumes" Mar 20 07:27:45 crc kubenswrapper[5136]: I0320 07:27:45.822674 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:27:45 crc kubenswrapper[5136]: I0320 07:27:45.823473 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Mar 20 07:27:45 crc kubenswrapper[5136]: I0320 07:27:45.823549 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 07:27:45 crc kubenswrapper[5136]: I0320 07:27:45.824428 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f5574ea7c501f0833443d13c0482979038eb9dd402ea30f057e61e453a0be9c6"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 07:27:45 crc kubenswrapper[5136]: I0320 07:27:45.824512 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://f5574ea7c501f0833443d13c0482979038eb9dd402ea30f057e61e453a0be9c6" gracePeriod=600 Mar 20 07:27:46 crc kubenswrapper[5136]: I0320 07:27:46.328788 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="f5574ea7c501f0833443d13c0482979038eb9dd402ea30f057e61e453a0be9c6" exitCode=0 Mar 20 07:27:46 crc kubenswrapper[5136]: I0320 07:27:46.328939 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"f5574ea7c501f0833443d13c0482979038eb9dd402ea30f057e61e453a0be9c6"} Mar 20 07:27:46 crc kubenswrapper[5136]: I0320 07:27:46.329087 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365"} Mar 20 07:27:46 crc 
kubenswrapper[5136]: I0320 07:27:46.329142 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.161665 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566528-p9gfh"] Mar 20 07:28:00 crc kubenswrapper[5136]: E0320 07:28:00.162921 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" containerName="extract-content" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.162935 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" containerName="extract-content" Mar 20 07:28:00 crc kubenswrapper[5136]: E0320 07:28:00.162965 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" containerName="extract-utilities" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.162972 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" containerName="extract-utilities" Mar 20 07:28:00 crc kubenswrapper[5136]: E0320 07:28:00.162990 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" containerName="registry-server" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.162996 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" containerName="registry-server" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.163136 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" containerName="registry-server" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.163660 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566528-p9gfh" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.171483 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.171571 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.173250 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.191078 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566528-p9gfh"] Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.314534 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjjjj\" (UniqueName: \"kubernetes.io/projected/0663cd7c-704c-4495-8271-f55538649003-kube-api-access-gjjjj\") pod \"auto-csr-approver-29566528-p9gfh\" (UID: \"0663cd7c-704c-4495-8271-f55538649003\") " pod="openshift-infra/auto-csr-approver-29566528-p9gfh" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.416688 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjjjj\" (UniqueName: \"kubernetes.io/projected/0663cd7c-704c-4495-8271-f55538649003-kube-api-access-gjjjj\") pod \"auto-csr-approver-29566528-p9gfh\" (UID: \"0663cd7c-704c-4495-8271-f55538649003\") " pod="openshift-infra/auto-csr-approver-29566528-p9gfh" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.443461 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjjjj\" (UniqueName: \"kubernetes.io/projected/0663cd7c-704c-4495-8271-f55538649003-kube-api-access-gjjjj\") pod \"auto-csr-approver-29566528-p9gfh\" (UID: \"0663cd7c-704c-4495-8271-f55538649003\") " 
pod="openshift-infra/auto-csr-approver-29566528-p9gfh" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.494854 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566528-p9gfh" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.963860 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566528-p9gfh"] Mar 20 07:28:01 crc kubenswrapper[5136]: I0320 07:28:01.472139 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566528-p9gfh" event={"ID":"0663cd7c-704c-4495-8271-f55538649003","Type":"ContainerStarted","Data":"fe6d42779a583ae94fdb31de8372f662e91aa9a0f7aa882296d25fd0dc014236"} Mar 20 07:28:04 crc kubenswrapper[5136]: I0320 07:28:04.493395 5136 generic.go:334] "Generic (PLEG): container finished" podID="0663cd7c-704c-4495-8271-f55538649003" containerID="8e7514aba4ea3d84ec9496fc84994ced79208352205e65a08cbf2bd32660e7b5" exitCode=0 Mar 20 07:28:04 crc kubenswrapper[5136]: I0320 07:28:04.493465 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566528-p9gfh" event={"ID":"0663cd7c-704c-4495-8271-f55538649003","Type":"ContainerDied","Data":"8e7514aba4ea3d84ec9496fc84994ced79208352205e65a08cbf2bd32660e7b5"} Mar 20 07:28:05 crc kubenswrapper[5136]: I0320 07:28:05.802763 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566528-p9gfh" Mar 20 07:28:06 crc kubenswrapper[5136]: I0320 07:28:06.000169 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjjjj\" (UniqueName: \"kubernetes.io/projected/0663cd7c-704c-4495-8271-f55538649003-kube-api-access-gjjjj\") pod \"0663cd7c-704c-4495-8271-f55538649003\" (UID: \"0663cd7c-704c-4495-8271-f55538649003\") " Mar 20 07:28:06 crc kubenswrapper[5136]: I0320 07:28:06.008271 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0663cd7c-704c-4495-8271-f55538649003-kube-api-access-gjjjj" (OuterVolumeSpecName: "kube-api-access-gjjjj") pod "0663cd7c-704c-4495-8271-f55538649003" (UID: "0663cd7c-704c-4495-8271-f55538649003"). InnerVolumeSpecName "kube-api-access-gjjjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:28:06 crc kubenswrapper[5136]: I0320 07:28:06.102317 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjjjj\" (UniqueName: \"kubernetes.io/projected/0663cd7c-704c-4495-8271-f55538649003-kube-api-access-gjjjj\") on node \"crc\" DevicePath \"\"" Mar 20 07:28:06 crc kubenswrapper[5136]: I0320 07:28:06.511062 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566528-p9gfh" event={"ID":"0663cd7c-704c-4495-8271-f55538649003","Type":"ContainerDied","Data":"fe6d42779a583ae94fdb31de8372f662e91aa9a0f7aa882296d25fd0dc014236"} Mar 20 07:28:06 crc kubenswrapper[5136]: I0320 07:28:06.511107 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe6d42779a583ae94fdb31de8372f662e91aa9a0f7aa882296d25fd0dc014236" Mar 20 07:28:06 crc kubenswrapper[5136]: I0320 07:28:06.511171 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566528-p9gfh" Mar 20 07:28:06 crc kubenswrapper[5136]: I0320 07:28:06.882698 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566522-6qksf"] Mar 20 07:28:06 crc kubenswrapper[5136]: I0320 07:28:06.888935 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566522-6qksf"] Mar 20 07:28:08 crc kubenswrapper[5136]: I0320 07:28:08.408449 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89e4d1fb-8e51-468f-877b-49847c583d53" path="/var/lib/kubelet/pods/89e4d1fb-8e51-468f-877b-49847c583d53/volumes" Mar 20 07:28:33 crc kubenswrapper[5136]: I0320 07:28:33.528499 5136 scope.go:117] "RemoveContainer" containerID="18ba546b64b0c89c0f6afc33df4ae25c09a3d7098c15cfd5a2ea63fa1ebde19e" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.146327 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566530-wht58"] Mar 20 07:30:00 crc kubenswrapper[5136]: E0320 07:30:00.147370 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0663cd7c-704c-4495-8271-f55538649003" containerName="oc" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.147395 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0663cd7c-704c-4495-8271-f55538649003" containerName="oc" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.147678 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0663cd7c-704c-4495-8271-f55538649003" containerName="oc" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.148367 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566530-wht58" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.152838 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.153098 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.153333 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.161754 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566530-wht58"] Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.176020 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq77s\" (UniqueName: \"kubernetes.io/projected/2d0faa53-8471-40c0-a2ed-ef66d5b66e72-kube-api-access-cq77s\") pod \"auto-csr-approver-29566530-wht58\" (UID: \"2d0faa53-8471-40c0-a2ed-ef66d5b66e72\") " pod="openshift-infra/auto-csr-approver-29566530-wht58" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.191697 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz"] Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.193134 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.196783 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.196838 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.205078 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz"] Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.277189 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d251ba65-cac2-4d94-b882-672d97a85bc7-config-volume\") pod \"collect-profiles-29566530-b9fhz\" (UID: \"d251ba65-cac2-4d94-b882-672d97a85bc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.277234 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d251ba65-cac2-4d94-b882-672d97a85bc7-secret-volume\") pod \"collect-profiles-29566530-b9fhz\" (UID: \"d251ba65-cac2-4d94-b882-672d97a85bc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.277273 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg4kz\" (UniqueName: \"kubernetes.io/projected/d251ba65-cac2-4d94-b882-672d97a85bc7-kube-api-access-xg4kz\") pod \"collect-profiles-29566530-b9fhz\" (UID: \"d251ba65-cac2-4d94-b882-672d97a85bc7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.277296 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq77s\" (UniqueName: \"kubernetes.io/projected/2d0faa53-8471-40c0-a2ed-ef66d5b66e72-kube-api-access-cq77s\") pod \"auto-csr-approver-29566530-wht58\" (UID: \"2d0faa53-8471-40c0-a2ed-ef66d5b66e72\") " pod="openshift-infra/auto-csr-approver-29566530-wht58" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.300950 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq77s\" (UniqueName: \"kubernetes.io/projected/2d0faa53-8471-40c0-a2ed-ef66d5b66e72-kube-api-access-cq77s\") pod \"auto-csr-approver-29566530-wht58\" (UID: \"2d0faa53-8471-40c0-a2ed-ef66d5b66e72\") " pod="openshift-infra/auto-csr-approver-29566530-wht58" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.379196 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d251ba65-cac2-4d94-b882-672d97a85bc7-config-volume\") pod \"collect-profiles-29566530-b9fhz\" (UID: \"d251ba65-cac2-4d94-b882-672d97a85bc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.379696 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d251ba65-cac2-4d94-b882-672d97a85bc7-secret-volume\") pod \"collect-profiles-29566530-b9fhz\" (UID: \"d251ba65-cac2-4d94-b882-672d97a85bc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.379864 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg4kz\" (UniqueName: 
\"kubernetes.io/projected/d251ba65-cac2-4d94-b882-672d97a85bc7-kube-api-access-xg4kz\") pod \"collect-profiles-29566530-b9fhz\" (UID: \"d251ba65-cac2-4d94-b882-672d97a85bc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.380491 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d251ba65-cac2-4d94-b882-672d97a85bc7-config-volume\") pod \"collect-profiles-29566530-b9fhz\" (UID: \"d251ba65-cac2-4d94-b882-672d97a85bc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.384570 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d251ba65-cac2-4d94-b882-672d97a85bc7-secret-volume\") pod \"collect-profiles-29566530-b9fhz\" (UID: \"d251ba65-cac2-4d94-b882-672d97a85bc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.403844 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg4kz\" (UniqueName: \"kubernetes.io/projected/d251ba65-cac2-4d94-b882-672d97a85bc7-kube-api-access-xg4kz\") pod \"collect-profiles-29566530-b9fhz\" (UID: \"d251ba65-cac2-4d94-b882-672d97a85bc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.479663 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566530-wht58" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.527246 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.799478 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz"] Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.932687 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566530-wht58"] Mar 20 07:30:01 crc kubenswrapper[5136]: I0320 07:30:01.433904 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566530-wht58" event={"ID":"2d0faa53-8471-40c0-a2ed-ef66d5b66e72","Type":"ContainerStarted","Data":"8f725118d8b349ca7d53012e9026199e683ebf9bcdef4c5634f52e153f090d81"} Mar 20 07:30:01 crc kubenswrapper[5136]: I0320 07:30:01.435689 5136 generic.go:334] "Generic (PLEG): container finished" podID="d251ba65-cac2-4d94-b882-672d97a85bc7" containerID="ca492a12ee4dfef81804d9a43645add86ef8ab0ce16812e4c74a09d17ae0ea3c" exitCode=0 Mar 20 07:30:01 crc kubenswrapper[5136]: I0320 07:30:01.435735 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" event={"ID":"d251ba65-cac2-4d94-b882-672d97a85bc7","Type":"ContainerDied","Data":"ca492a12ee4dfef81804d9a43645add86ef8ab0ce16812e4c74a09d17ae0ea3c"} Mar 20 07:30:01 crc kubenswrapper[5136]: I0320 07:30:01.435797 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" event={"ID":"d251ba65-cac2-4d94-b882-672d97a85bc7","Type":"ContainerStarted","Data":"08250740f94fe396ced2c2691bc1e32d26db60f0f4e97fbd3f3257c9f6f4333a"} Mar 20 07:30:02 crc kubenswrapper[5136]: I0320 07:30:02.753432 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" Mar 20 07:30:02 crc kubenswrapper[5136]: I0320 07:30:02.814103 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg4kz\" (UniqueName: \"kubernetes.io/projected/d251ba65-cac2-4d94-b882-672d97a85bc7-kube-api-access-xg4kz\") pod \"d251ba65-cac2-4d94-b882-672d97a85bc7\" (UID: \"d251ba65-cac2-4d94-b882-672d97a85bc7\") " Mar 20 07:30:02 crc kubenswrapper[5136]: I0320 07:30:02.814168 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d251ba65-cac2-4d94-b882-672d97a85bc7-config-volume\") pod \"d251ba65-cac2-4d94-b882-672d97a85bc7\" (UID: \"d251ba65-cac2-4d94-b882-672d97a85bc7\") " Mar 20 07:30:02 crc kubenswrapper[5136]: I0320 07:30:02.814234 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d251ba65-cac2-4d94-b882-672d97a85bc7-secret-volume\") pod \"d251ba65-cac2-4d94-b882-672d97a85bc7\" (UID: \"d251ba65-cac2-4d94-b882-672d97a85bc7\") " Mar 20 07:30:02 crc kubenswrapper[5136]: I0320 07:30:02.814926 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d251ba65-cac2-4d94-b882-672d97a85bc7-config-volume" (OuterVolumeSpecName: "config-volume") pod "d251ba65-cac2-4d94-b882-672d97a85bc7" (UID: "d251ba65-cac2-4d94-b882-672d97a85bc7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:30:02 crc kubenswrapper[5136]: I0320 07:30:02.818748 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d251ba65-cac2-4d94-b882-672d97a85bc7-kube-api-access-xg4kz" (OuterVolumeSpecName: "kube-api-access-xg4kz") pod "d251ba65-cac2-4d94-b882-672d97a85bc7" (UID: "d251ba65-cac2-4d94-b882-672d97a85bc7"). 
InnerVolumeSpecName "kube-api-access-xg4kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:30:02 crc kubenswrapper[5136]: I0320 07:30:02.820449 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d251ba65-cac2-4d94-b882-672d97a85bc7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d251ba65-cac2-4d94-b882-672d97a85bc7" (UID: "d251ba65-cac2-4d94-b882-672d97a85bc7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:30:02 crc kubenswrapper[5136]: I0320 07:30:02.916192 5136 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d251ba65-cac2-4d94-b882-672d97a85bc7-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 07:30:02 crc kubenswrapper[5136]: I0320 07:30:02.916477 5136 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d251ba65-cac2-4d94-b882-672d97a85bc7-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 07:30:02 crc kubenswrapper[5136]: I0320 07:30:02.916487 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg4kz\" (UniqueName: \"kubernetes.io/projected/d251ba65-cac2-4d94-b882-672d97a85bc7-kube-api-access-xg4kz\") on node \"crc\" DevicePath \"\"" Mar 20 07:30:03 crc kubenswrapper[5136]: I0320 07:30:03.452605 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" Mar 20 07:30:03 crc kubenswrapper[5136]: I0320 07:30:03.452692 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" event={"ID":"d251ba65-cac2-4d94-b882-672d97a85bc7","Type":"ContainerDied","Data":"08250740f94fe396ced2c2691bc1e32d26db60f0f4e97fbd3f3257c9f6f4333a"} Mar 20 07:30:03 crc kubenswrapper[5136]: I0320 07:30:03.452747 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08250740f94fe396ced2c2691bc1e32d26db60f0f4e97fbd3f3257c9f6f4333a" Mar 20 07:30:03 crc kubenswrapper[5136]: I0320 07:30:03.454420 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566530-wht58" event={"ID":"2d0faa53-8471-40c0-a2ed-ef66d5b66e72","Type":"ContainerStarted","Data":"a0171e6422989bbfb70e3a76ced8595e932c612b43527acfa116a80d4b912265"} Mar 20 07:30:03 crc kubenswrapper[5136]: I0320 07:30:03.478774 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566530-wht58" podStartSLOduration=1.398690712 podStartE2EDuration="3.478744629s" podCreationTimestamp="2026-03-20 07:30:00 +0000 UTC" firstStartedPulling="2026-03-20 07:30:00.939598572 +0000 UTC m=+2433.198909733" lastFinishedPulling="2026-03-20 07:30:03.019652489 +0000 UTC m=+2435.278963650" observedRunningTime="2026-03-20 07:30:03.470434703 +0000 UTC m=+2435.729745894" watchObservedRunningTime="2026-03-20 07:30:03.478744629 +0000 UTC m=+2435.738055820" Mar 20 07:30:03 crc kubenswrapper[5136]: I0320 07:30:03.845898 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252"] Mar 20 07:30:03 crc kubenswrapper[5136]: I0320 07:30:03.852799 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252"] Mar 20 07:30:04 crc kubenswrapper[5136]: I0320 07:30:04.409693 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8" path="/var/lib/kubelet/pods/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8/volumes" Mar 20 07:30:04 crc kubenswrapper[5136]: I0320 07:30:04.486238 5136 generic.go:334] "Generic (PLEG): container finished" podID="2d0faa53-8471-40c0-a2ed-ef66d5b66e72" containerID="a0171e6422989bbfb70e3a76ced8595e932c612b43527acfa116a80d4b912265" exitCode=0 Mar 20 07:30:04 crc kubenswrapper[5136]: I0320 07:30:04.486327 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566530-wht58" event={"ID":"2d0faa53-8471-40c0-a2ed-ef66d5b66e72","Type":"ContainerDied","Data":"a0171e6422989bbfb70e3a76ced8595e932c612b43527acfa116a80d4b912265"} Mar 20 07:30:05 crc kubenswrapper[5136]: I0320 07:30:05.815327 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566530-wht58" Mar 20 07:30:05 crc kubenswrapper[5136]: I0320 07:30:05.885665 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq77s\" (UniqueName: \"kubernetes.io/projected/2d0faa53-8471-40c0-a2ed-ef66d5b66e72-kube-api-access-cq77s\") pod \"2d0faa53-8471-40c0-a2ed-ef66d5b66e72\" (UID: \"2d0faa53-8471-40c0-a2ed-ef66d5b66e72\") " Mar 20 07:30:05 crc kubenswrapper[5136]: I0320 07:30:05.893409 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d0faa53-8471-40c0-a2ed-ef66d5b66e72-kube-api-access-cq77s" (OuterVolumeSpecName: "kube-api-access-cq77s") pod "2d0faa53-8471-40c0-a2ed-ef66d5b66e72" (UID: "2d0faa53-8471-40c0-a2ed-ef66d5b66e72"). InnerVolumeSpecName "kube-api-access-cq77s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:30:05 crc kubenswrapper[5136]: I0320 07:30:05.988029 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq77s\" (UniqueName: \"kubernetes.io/projected/2d0faa53-8471-40c0-a2ed-ef66d5b66e72-kube-api-access-cq77s\") on node \"crc\" DevicePath \"\"" Mar 20 07:30:06 crc kubenswrapper[5136]: I0320 07:30:06.506323 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566530-wht58" event={"ID":"2d0faa53-8471-40c0-a2ed-ef66d5b66e72","Type":"ContainerDied","Data":"8f725118d8b349ca7d53012e9026199e683ebf9bcdef4c5634f52e153f090d81"} Mar 20 07:30:06 crc kubenswrapper[5136]: I0320 07:30:06.506405 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f725118d8b349ca7d53012e9026199e683ebf9bcdef4c5634f52e153f090d81" Mar 20 07:30:06 crc kubenswrapper[5136]: I0320 07:30:06.506526 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566530-wht58" Mar 20 07:30:06 crc kubenswrapper[5136]: I0320 07:30:06.535842 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566524-rfjd7"] Mar 20 07:30:06 crc kubenswrapper[5136]: I0320 07:30:06.541676 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566524-rfjd7"] Mar 20 07:30:08 crc kubenswrapper[5136]: I0320 07:30:08.413337 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef36bf3c-a18a-4fe4-829e-818ee309667e" path="/var/lib/kubelet/pods/ef36bf3c-a18a-4fe4-829e-818ee309667e/volumes" Mar 20 07:30:15 crc kubenswrapper[5136]: I0320 07:30:15.822623 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 20 07:30:15 crc kubenswrapper[5136]: I0320 07:30:15.822966 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:30:33 crc kubenswrapper[5136]: I0320 07:30:33.630247 5136 scope.go:117] "RemoveContainer" containerID="b0a37f409a618d7e4e5c53b904c0bfbe0cfe72b34fedd22d7f0f23af1ad97d50" Mar 20 07:30:33 crc kubenswrapper[5136]: I0320 07:30:33.667887 5136 scope.go:117] "RemoveContainer" containerID="f960ca2f1291c6810939c91cb385274efd1428e9971de1fcce80d392e52b2a36" Mar 20 07:30:45 crc kubenswrapper[5136]: I0320 07:30:45.822739 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:30:45 crc kubenswrapper[5136]: I0320 07:30:45.823406 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:31:15 crc kubenswrapper[5136]: I0320 07:31:15.821917 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:31:15 crc kubenswrapper[5136]: I0320 07:31:15.822606 5136 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:31:15 crc kubenswrapper[5136]: I0320 07:31:15.822659 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 07:31:15 crc kubenswrapper[5136]: I0320 07:31:15.823372 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 07:31:15 crc kubenswrapper[5136]: I0320 07:31:15.823441 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" gracePeriod=600 Mar 20 07:31:15 crc kubenswrapper[5136]: E0320 07:31:15.949980 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:31:16 crc kubenswrapper[5136]: I0320 07:31:16.121478 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" 
containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" exitCode=0 Mar 20 07:31:16 crc kubenswrapper[5136]: I0320 07:31:16.121528 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365"} Mar 20 07:31:16 crc kubenswrapper[5136]: I0320 07:31:16.121566 5136 scope.go:117] "RemoveContainer" containerID="f5574ea7c501f0833443d13c0482979038eb9dd402ea30f057e61e453a0be9c6" Mar 20 07:31:16 crc kubenswrapper[5136]: I0320 07:31:16.122249 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:31:16 crc kubenswrapper[5136]: E0320 07:31:16.122851 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:31:26 crc kubenswrapper[5136]: I0320 07:31:26.396768 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:31:26 crc kubenswrapper[5136]: E0320 07:31:26.397561 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:31:41 crc kubenswrapper[5136]: I0320 
07:31:41.396369 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:31:41 crc kubenswrapper[5136]: E0320 07:31:41.397248 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:31:52 crc kubenswrapper[5136]: I0320 07:31:52.396701 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:31:52 crc kubenswrapper[5136]: E0320 07:31:52.397456 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.587183 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tk65m"] Mar 20 07:31:59 crc kubenswrapper[5136]: E0320 07:31:59.588395 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d251ba65-cac2-4d94-b882-672d97a85bc7" containerName="collect-profiles" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.588426 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d251ba65-cac2-4d94-b882-672d97a85bc7" containerName="collect-profiles" Mar 20 07:31:59 crc kubenswrapper[5136]: E0320 07:31:59.588472 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2d0faa53-8471-40c0-a2ed-ef66d5b66e72" containerName="oc" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.588493 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d0faa53-8471-40c0-a2ed-ef66d5b66e72" containerName="oc" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.588764 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d0faa53-8471-40c0-a2ed-ef66d5b66e72" containerName="oc" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.588808 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d251ba65-cac2-4d94-b882-672d97a85bc7" containerName="collect-profiles" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.590627 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.597215 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tk65m"] Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.666050 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6f69975-a243-4554-8864-968b28f34bb1-catalog-content\") pod \"certified-operators-tk65m\" (UID: \"e6f69975-a243-4554-8864-968b28f34bb1\") " pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.666104 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtvbp\" (UniqueName: \"kubernetes.io/projected/e6f69975-a243-4554-8864-968b28f34bb1-kube-api-access-dtvbp\") pod \"certified-operators-tk65m\" (UID: \"e6f69975-a243-4554-8864-968b28f34bb1\") " pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.666287 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6f69975-a243-4554-8864-968b28f34bb1-utilities\") pod \"certified-operators-tk65m\" (UID: \"e6f69975-a243-4554-8864-968b28f34bb1\") " pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.767015 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6f69975-a243-4554-8864-968b28f34bb1-utilities\") pod \"certified-operators-tk65m\" (UID: \"e6f69975-a243-4554-8864-968b28f34bb1\") " pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.767091 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6f69975-a243-4554-8864-968b28f34bb1-catalog-content\") pod \"certified-operators-tk65m\" (UID: \"e6f69975-a243-4554-8864-968b28f34bb1\") " pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.767124 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtvbp\" (UniqueName: \"kubernetes.io/projected/e6f69975-a243-4554-8864-968b28f34bb1-kube-api-access-dtvbp\") pod \"certified-operators-tk65m\" (UID: \"e6f69975-a243-4554-8864-968b28f34bb1\") " pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.767655 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6f69975-a243-4554-8864-968b28f34bb1-utilities\") pod \"certified-operators-tk65m\" (UID: \"e6f69975-a243-4554-8864-968b28f34bb1\") " pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.767717 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e6f69975-a243-4554-8864-968b28f34bb1-catalog-content\") pod \"certified-operators-tk65m\" (UID: \"e6f69975-a243-4554-8864-968b28f34bb1\") " pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.796435 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtvbp\" (UniqueName: \"kubernetes.io/projected/e6f69975-a243-4554-8864-968b28f34bb1-kube-api-access-dtvbp\") pod \"certified-operators-tk65m\" (UID: \"e6f69975-a243-4554-8864-968b28f34bb1\") " pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.925773 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:32:00 crc kubenswrapper[5136]: I0320 07:32:00.139713 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566532-twtd9"] Mar 20 07:32:00 crc kubenswrapper[5136]: I0320 07:32:00.146094 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566532-twtd9" Mar 20 07:32:00 crc kubenswrapper[5136]: I0320 07:32:00.151184 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:32:00 crc kubenswrapper[5136]: I0320 07:32:00.151253 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:32:00 crc kubenswrapper[5136]: I0320 07:32:00.151451 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:32:00 crc kubenswrapper[5136]: I0320 07:32:00.156168 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566532-twtd9"] Mar 20 07:32:00 crc kubenswrapper[5136]: I0320 07:32:00.274772 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzrxx\" (UniqueName: \"kubernetes.io/projected/e4c27187-55d8-4db4-9cae-d77617300a14-kube-api-access-xzrxx\") pod \"auto-csr-approver-29566532-twtd9\" (UID: \"e4c27187-55d8-4db4-9cae-d77617300a14\") " pod="openshift-infra/auto-csr-approver-29566532-twtd9" Mar 20 07:32:00 crc kubenswrapper[5136]: I0320 07:32:00.375689 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzrxx\" (UniqueName: \"kubernetes.io/projected/e4c27187-55d8-4db4-9cae-d77617300a14-kube-api-access-xzrxx\") pod \"auto-csr-approver-29566532-twtd9\" (UID: \"e4c27187-55d8-4db4-9cae-d77617300a14\") " pod="openshift-infra/auto-csr-approver-29566532-twtd9" Mar 20 07:32:00 crc kubenswrapper[5136]: I0320 07:32:00.378730 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tk65m"] Mar 20 07:32:00 crc kubenswrapper[5136]: I0320 07:32:00.400628 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzrxx\" (UniqueName: 
\"kubernetes.io/projected/e4c27187-55d8-4db4-9cae-d77617300a14-kube-api-access-xzrxx\") pod \"auto-csr-approver-29566532-twtd9\" (UID: \"e4c27187-55d8-4db4-9cae-d77617300a14\") " pod="openshift-infra/auto-csr-approver-29566532-twtd9" Mar 20 07:32:00 crc kubenswrapper[5136]: I0320 07:32:00.466490 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tk65m" event={"ID":"e6f69975-a243-4554-8864-968b28f34bb1","Type":"ContainerStarted","Data":"f1db3e0f434391de48b8a5e136738f7ddfe0ad77069b404f19c69058e7dcef08"} Mar 20 07:32:00 crc kubenswrapper[5136]: I0320 07:32:00.475955 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566532-twtd9" Mar 20 07:32:00 crc kubenswrapper[5136]: I0320 07:32:00.675758 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566532-twtd9"] Mar 20 07:32:00 crc kubenswrapper[5136]: W0320 07:32:00.676443 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4c27187_55d8_4db4_9cae_d77617300a14.slice/crio-81591355317b13a0a8698f58721c71f74f88a3e6f5808d8684f5146e93341583 WatchSource:0}: Error finding container 81591355317b13a0a8698f58721c71f74f88a3e6f5808d8684f5146e93341583: Status 404 returned error can't find the container with id 81591355317b13a0a8698f58721c71f74f88a3e6f5808d8684f5146e93341583 Mar 20 07:32:01 crc kubenswrapper[5136]: I0320 07:32:01.478718 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566532-twtd9" event={"ID":"e4c27187-55d8-4db4-9cae-d77617300a14","Type":"ContainerStarted","Data":"81591355317b13a0a8698f58721c71f74f88a3e6f5808d8684f5146e93341583"} Mar 20 07:32:01 crc kubenswrapper[5136]: I0320 07:32:01.480747 5136 generic.go:334] "Generic (PLEG): container finished" podID="e6f69975-a243-4554-8864-968b28f34bb1" 
containerID="605fab57bde4d90aa04074584e1ee7a3eddccb5c298651670ef61f0de3c62789" exitCode=0 Mar 20 07:32:01 crc kubenswrapper[5136]: I0320 07:32:01.480871 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tk65m" event={"ID":"e6f69975-a243-4554-8864-968b28f34bb1","Type":"ContainerDied","Data":"605fab57bde4d90aa04074584e1ee7a3eddccb5c298651670ef61f0de3c62789"} Mar 20 07:32:02 crc kubenswrapper[5136]: I0320 07:32:02.488493 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tk65m" event={"ID":"e6f69975-a243-4554-8864-968b28f34bb1","Type":"ContainerStarted","Data":"e4b067f3296b4ad3c880f59931da967d610017a5079df52dd4fd9c70e9767363"} Mar 20 07:32:02 crc kubenswrapper[5136]: I0320 07:32:02.490106 5136 generic.go:334] "Generic (PLEG): container finished" podID="e4c27187-55d8-4db4-9cae-d77617300a14" containerID="5c9dbddef3617b2a1a9f29b6615bed2e74b730bf03006a402fac0528653fa989" exitCode=0 Mar 20 07:32:02 crc kubenswrapper[5136]: I0320 07:32:02.490130 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566532-twtd9" event={"ID":"e4c27187-55d8-4db4-9cae-d77617300a14","Type":"ContainerDied","Data":"5c9dbddef3617b2a1a9f29b6615bed2e74b730bf03006a402fac0528653fa989"} Mar 20 07:32:03 crc kubenswrapper[5136]: I0320 07:32:03.398038 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:32:03 crc kubenswrapper[5136]: E0320 07:32:03.398563 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 
07:32:03 crc kubenswrapper[5136]: I0320 07:32:03.503996 5136 generic.go:334] "Generic (PLEG): container finished" podID="e6f69975-a243-4554-8864-968b28f34bb1" containerID="e4b067f3296b4ad3c880f59931da967d610017a5079df52dd4fd9c70e9767363" exitCode=0 Mar 20 07:32:03 crc kubenswrapper[5136]: I0320 07:32:03.504084 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tk65m" event={"ID":"e6f69975-a243-4554-8864-968b28f34bb1","Type":"ContainerDied","Data":"e4b067f3296b4ad3c880f59931da967d610017a5079df52dd4fd9c70e9767363"} Mar 20 07:32:03 crc kubenswrapper[5136]: I0320 07:32:03.880062 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566532-twtd9" Mar 20 07:32:03 crc kubenswrapper[5136]: I0320 07:32:03.932152 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzrxx\" (UniqueName: \"kubernetes.io/projected/e4c27187-55d8-4db4-9cae-d77617300a14-kube-api-access-xzrxx\") pod \"e4c27187-55d8-4db4-9cae-d77617300a14\" (UID: \"e4c27187-55d8-4db4-9cae-d77617300a14\") " Mar 20 07:32:03 crc kubenswrapper[5136]: I0320 07:32:03.939084 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4c27187-55d8-4db4-9cae-d77617300a14-kube-api-access-xzrxx" (OuterVolumeSpecName: "kube-api-access-xzrxx") pod "e4c27187-55d8-4db4-9cae-d77617300a14" (UID: "e4c27187-55d8-4db4-9cae-d77617300a14"). InnerVolumeSpecName "kube-api-access-xzrxx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:32:04 crc kubenswrapper[5136]: I0320 07:32:04.033522 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzrxx\" (UniqueName: \"kubernetes.io/projected/e4c27187-55d8-4db4-9cae-d77617300a14-kube-api-access-xzrxx\") on node \"crc\" DevicePath \"\"" Mar 20 07:32:04 crc kubenswrapper[5136]: I0320 07:32:04.514159 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566532-twtd9" Mar 20 07:32:04 crc kubenswrapper[5136]: I0320 07:32:04.514152 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566532-twtd9" event={"ID":"e4c27187-55d8-4db4-9cae-d77617300a14","Type":"ContainerDied","Data":"81591355317b13a0a8698f58721c71f74f88a3e6f5808d8684f5146e93341583"} Mar 20 07:32:04 crc kubenswrapper[5136]: I0320 07:32:04.514614 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81591355317b13a0a8698f58721c71f74f88a3e6f5808d8684f5146e93341583" Mar 20 07:32:04 crc kubenswrapper[5136]: I0320 07:32:04.516905 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tk65m" event={"ID":"e6f69975-a243-4554-8864-968b28f34bb1","Type":"ContainerStarted","Data":"2ca4467384620e0615f42bffc25d234a2c1a9e6f9cc2ce3eeaa59e97f68f1e3e"} Mar 20 07:32:04 crc kubenswrapper[5136]: I0320 07:32:04.548598 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tk65m" podStartSLOduration=2.833155654 podStartE2EDuration="5.548574426s" podCreationTimestamp="2026-03-20 07:31:59 +0000 UTC" firstStartedPulling="2026-03-20 07:32:01.482864103 +0000 UTC m=+2553.742175254" lastFinishedPulling="2026-03-20 07:32:04.198282835 +0000 UTC m=+2556.457594026" observedRunningTime="2026-03-20 07:32:04.539743385 +0000 UTC m=+2556.799054566" watchObservedRunningTime="2026-03-20 
07:32:04.548574426 +0000 UTC m=+2556.807885607" Mar 20 07:32:04 crc kubenswrapper[5136]: I0320 07:32:04.955322 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566526-qp2cz"] Mar 20 07:32:04 crc kubenswrapper[5136]: I0320 07:32:04.966255 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566526-qp2cz"] Mar 20 07:32:06 crc kubenswrapper[5136]: I0320 07:32:06.407283 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="198ab1b0-b88b-4a70-aae0-650c78826519" path="/var/lib/kubelet/pods/198ab1b0-b88b-4a70-aae0-650c78826519/volumes" Mar 20 07:32:09 crc kubenswrapper[5136]: I0320 07:32:09.925955 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:32:09 crc kubenswrapper[5136]: I0320 07:32:09.926714 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:32:09 crc kubenswrapper[5136]: I0320 07:32:09.973612 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:32:10 crc kubenswrapper[5136]: I0320 07:32:10.603134 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:32:10 crc kubenswrapper[5136]: I0320 07:32:10.643108 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tk65m"] Mar 20 07:32:12 crc kubenswrapper[5136]: I0320 07:32:12.583453 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tk65m" podUID="e6f69975-a243-4554-8864-968b28f34bb1" containerName="registry-server" containerID="cri-o://2ca4467384620e0615f42bffc25d234a2c1a9e6f9cc2ce3eeaa59e97f68f1e3e" gracePeriod=2 Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 
07:32:13.542504 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.593232 5136 generic.go:334] "Generic (PLEG): container finished" podID="e6f69975-a243-4554-8864-968b28f34bb1" containerID="2ca4467384620e0615f42bffc25d234a2c1a9e6f9cc2ce3eeaa59e97f68f1e3e" exitCode=0 Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.593277 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tk65m" event={"ID":"e6f69975-a243-4554-8864-968b28f34bb1","Type":"ContainerDied","Data":"2ca4467384620e0615f42bffc25d234a2c1a9e6f9cc2ce3eeaa59e97f68f1e3e"} Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.593289 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.593311 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tk65m" event={"ID":"e6f69975-a243-4554-8864-968b28f34bb1","Type":"ContainerDied","Data":"f1db3e0f434391de48b8a5e136738f7ddfe0ad77069b404f19c69058e7dcef08"} Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.593331 5136 scope.go:117] "RemoveContainer" containerID="2ca4467384620e0615f42bffc25d234a2c1a9e6f9cc2ce3eeaa59e97f68f1e3e" Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.617434 5136 scope.go:117] "RemoveContainer" containerID="e4b067f3296b4ad3c880f59931da967d610017a5079df52dd4fd9c70e9767363" Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.618773 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6f69975-a243-4554-8864-968b28f34bb1-catalog-content\") pod \"e6f69975-a243-4554-8864-968b28f34bb1\" (UID: \"e6f69975-a243-4554-8864-968b28f34bb1\") " Mar 20 07:32:13 crc 
kubenswrapper[5136]: I0320 07:32:13.618855 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtvbp\" (UniqueName: \"kubernetes.io/projected/e6f69975-a243-4554-8864-968b28f34bb1-kube-api-access-dtvbp\") pod \"e6f69975-a243-4554-8864-968b28f34bb1\" (UID: \"e6f69975-a243-4554-8864-968b28f34bb1\") " Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.618949 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6f69975-a243-4554-8864-968b28f34bb1-utilities\") pod \"e6f69975-a243-4554-8864-968b28f34bb1\" (UID: \"e6f69975-a243-4554-8864-968b28f34bb1\") " Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.620225 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6f69975-a243-4554-8864-968b28f34bb1-utilities" (OuterVolumeSpecName: "utilities") pod "e6f69975-a243-4554-8864-968b28f34bb1" (UID: "e6f69975-a243-4554-8864-968b28f34bb1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.624034 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6f69975-a243-4554-8864-968b28f34bb1-kube-api-access-dtvbp" (OuterVolumeSpecName: "kube-api-access-dtvbp") pod "e6f69975-a243-4554-8864-968b28f34bb1" (UID: "e6f69975-a243-4554-8864-968b28f34bb1"). InnerVolumeSpecName "kube-api-access-dtvbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.634537 5136 scope.go:117] "RemoveContainer" containerID="605fab57bde4d90aa04074584e1ee7a3eddccb5c298651670ef61f0de3c62789" Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.673435 5136 scope.go:117] "RemoveContainer" containerID="2ca4467384620e0615f42bffc25d234a2c1a9e6f9cc2ce3eeaa59e97f68f1e3e" Mar 20 07:32:13 crc kubenswrapper[5136]: E0320 07:32:13.676475 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ca4467384620e0615f42bffc25d234a2c1a9e6f9cc2ce3eeaa59e97f68f1e3e\": container with ID starting with 2ca4467384620e0615f42bffc25d234a2c1a9e6f9cc2ce3eeaa59e97f68f1e3e not found: ID does not exist" containerID="2ca4467384620e0615f42bffc25d234a2c1a9e6f9cc2ce3eeaa59e97f68f1e3e" Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.676560 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ca4467384620e0615f42bffc25d234a2c1a9e6f9cc2ce3eeaa59e97f68f1e3e"} err="failed to get container status \"2ca4467384620e0615f42bffc25d234a2c1a9e6f9cc2ce3eeaa59e97f68f1e3e\": rpc error: code = NotFound desc = could not find container \"2ca4467384620e0615f42bffc25d234a2c1a9e6f9cc2ce3eeaa59e97f68f1e3e\": container with ID starting with 2ca4467384620e0615f42bffc25d234a2c1a9e6f9cc2ce3eeaa59e97f68f1e3e not found: ID does not exist" Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.676594 5136 scope.go:117] "RemoveContainer" containerID="e4b067f3296b4ad3c880f59931da967d610017a5079df52dd4fd9c70e9767363" Mar 20 07:32:13 crc kubenswrapper[5136]: E0320 07:32:13.676909 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4b067f3296b4ad3c880f59931da967d610017a5079df52dd4fd9c70e9767363\": container with ID starting with 
e4b067f3296b4ad3c880f59931da967d610017a5079df52dd4fd9c70e9767363 not found: ID does not exist" containerID="e4b067f3296b4ad3c880f59931da967d610017a5079df52dd4fd9c70e9767363" Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.676952 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4b067f3296b4ad3c880f59931da967d610017a5079df52dd4fd9c70e9767363"} err="failed to get container status \"e4b067f3296b4ad3c880f59931da967d610017a5079df52dd4fd9c70e9767363\": rpc error: code = NotFound desc = could not find container \"e4b067f3296b4ad3c880f59931da967d610017a5079df52dd4fd9c70e9767363\": container with ID starting with e4b067f3296b4ad3c880f59931da967d610017a5079df52dd4fd9c70e9767363 not found: ID does not exist" Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.676977 5136 scope.go:117] "RemoveContainer" containerID="605fab57bde4d90aa04074584e1ee7a3eddccb5c298651670ef61f0de3c62789" Mar 20 07:32:13 crc kubenswrapper[5136]: E0320 07:32:13.677265 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"605fab57bde4d90aa04074584e1ee7a3eddccb5c298651670ef61f0de3c62789\": container with ID starting with 605fab57bde4d90aa04074584e1ee7a3eddccb5c298651670ef61f0de3c62789 not found: ID does not exist" containerID="605fab57bde4d90aa04074584e1ee7a3eddccb5c298651670ef61f0de3c62789" Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.677299 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"605fab57bde4d90aa04074584e1ee7a3eddccb5c298651670ef61f0de3c62789"} err="failed to get container status \"605fab57bde4d90aa04074584e1ee7a3eddccb5c298651670ef61f0de3c62789\": rpc error: code = NotFound desc = could not find container \"605fab57bde4d90aa04074584e1ee7a3eddccb5c298651670ef61f0de3c62789\": container with ID starting with 605fab57bde4d90aa04074584e1ee7a3eddccb5c298651670ef61f0de3c62789 not found: ID does not 
exist" Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.682225 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6f69975-a243-4554-8864-968b28f34bb1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6f69975-a243-4554-8864-968b28f34bb1" (UID: "e6f69975-a243-4554-8864-968b28f34bb1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.720072 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtvbp\" (UniqueName: \"kubernetes.io/projected/e6f69975-a243-4554-8864-968b28f34bb1-kube-api-access-dtvbp\") on node \"crc\" DevicePath \"\"" Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.720104 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6f69975-a243-4554-8864-968b28f34bb1-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.720116 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6f69975-a243-4554-8864-968b28f34bb1-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.938170 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tk65m"] Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.944005 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tk65m"] Mar 20 07:32:14 crc kubenswrapper[5136]: I0320 07:32:14.407104 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6f69975-a243-4554-8864-968b28f34bb1" path="/var/lib/kubelet/pods/e6f69975-a243-4554-8864-968b28f34bb1/volumes" Mar 20 07:32:18 crc kubenswrapper[5136]: I0320 07:32:18.407081 5136 scope.go:117] "RemoveContainer" 
containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:32:18 crc kubenswrapper[5136]: E0320 07:32:18.407748 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:32:32 crc kubenswrapper[5136]: I0320 07:32:32.396671 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:32:32 crc kubenswrapper[5136]: E0320 07:32:32.397463 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:32:33 crc kubenswrapper[5136]: I0320 07:32:33.739399 5136 scope.go:117] "RemoveContainer" containerID="e9b25b5fd15a592a38a8ef02f8f0787d7aae0d22dff9f222f8354b7975e0efb4" Mar 20 07:32:44 crc kubenswrapper[5136]: I0320 07:32:44.399055 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:32:44 crc kubenswrapper[5136]: E0320 07:32:44.400538 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.751083 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mhq7q"] Mar 20 07:32:52 crc kubenswrapper[5136]: E0320 07:32:52.751827 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6f69975-a243-4554-8864-968b28f34bb1" containerName="registry-server" Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.751841 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6f69975-a243-4554-8864-968b28f34bb1" containerName="registry-server" Mar 20 07:32:52 crc kubenswrapper[5136]: E0320 07:32:52.751855 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6f69975-a243-4554-8864-968b28f34bb1" containerName="extract-content" Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.751863 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6f69975-a243-4554-8864-968b28f34bb1" containerName="extract-content" Mar 20 07:32:52 crc kubenswrapper[5136]: E0320 07:32:52.751876 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4c27187-55d8-4db4-9cae-d77617300a14" containerName="oc" Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.751884 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4c27187-55d8-4db4-9cae-d77617300a14" containerName="oc" Mar 20 07:32:52 crc kubenswrapper[5136]: E0320 07:32:52.751907 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6f69975-a243-4554-8864-968b28f34bb1" containerName="extract-utilities" Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.751915 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6f69975-a243-4554-8864-968b28f34bb1" containerName="extract-utilities" Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.752068 5136 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e4c27187-55d8-4db4-9cae-d77617300a14" containerName="oc" Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.752090 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6f69975-a243-4554-8864-968b28f34bb1" containerName="registry-server" Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.753205 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mhq7q" Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.762736 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mhq7q"] Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.808284 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-utilities\") pod \"community-operators-mhq7q\" (UID: \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\") " pod="openshift-marketplace/community-operators-mhq7q" Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.808355 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-catalog-content\") pod \"community-operators-mhq7q\" (UID: \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\") " pod="openshift-marketplace/community-operators-mhq7q" Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.808407 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8499c\" (UniqueName: \"kubernetes.io/projected/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-kube-api-access-8499c\") pod \"community-operators-mhq7q\" (UID: \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\") " pod="openshift-marketplace/community-operators-mhq7q" Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.909918 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-utilities\") pod \"community-operators-mhq7q\" (UID: \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\") " pod="openshift-marketplace/community-operators-mhq7q" Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.909995 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-catalog-content\") pod \"community-operators-mhq7q\" (UID: \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\") " pod="openshift-marketplace/community-operators-mhq7q" Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.910056 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8499c\" (UniqueName: \"kubernetes.io/projected/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-kube-api-access-8499c\") pod \"community-operators-mhq7q\" (UID: \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\") " pod="openshift-marketplace/community-operators-mhq7q" Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.910588 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-utilities\") pod \"community-operators-mhq7q\" (UID: \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\") " pod="openshift-marketplace/community-operators-mhq7q" Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.910632 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-catalog-content\") pod \"community-operators-mhq7q\" (UID: \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\") " pod="openshift-marketplace/community-operators-mhq7q" Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.932158 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8499c\" (UniqueName: \"kubernetes.io/projected/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-kube-api-access-8499c\") pod \"community-operators-mhq7q\" (UID: \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\") " pod="openshift-marketplace/community-operators-mhq7q" Mar 20 07:32:53 crc kubenswrapper[5136]: I0320 07:32:53.073515 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mhq7q" Mar 20 07:32:53 crc kubenswrapper[5136]: I0320 07:32:53.560096 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mhq7q"] Mar 20 07:32:53 crc kubenswrapper[5136]: I0320 07:32:53.923526 5136 generic.go:334] "Generic (PLEG): container finished" podID="55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" containerID="3283c1fc7807e9df4491b391766cf37db017729931f2cac04d7bfc68083099a2" exitCode=0 Mar 20 07:32:53 crc kubenswrapper[5136]: I0320 07:32:53.923598 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhq7q" event={"ID":"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9","Type":"ContainerDied","Data":"3283c1fc7807e9df4491b391766cf37db017729931f2cac04d7bfc68083099a2"} Mar 20 07:32:53 crc kubenswrapper[5136]: I0320 07:32:53.923639 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhq7q" event={"ID":"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9","Type":"ContainerStarted","Data":"f06afaedabec7e65e7e127cf6a160feaa8bfb209c8a10ddd6399781dab8be5fa"} Mar 20 07:32:53 crc kubenswrapper[5136]: I0320 07:32:53.925316 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 07:32:54 crc kubenswrapper[5136]: I0320 07:32:54.931658 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhq7q" 
event={"ID":"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9","Type":"ContainerStarted","Data":"67f6e944fe0c99fb6f4f72159d059398308ceb6ec0b890c134a242b1d9825618"} Mar 20 07:32:55 crc kubenswrapper[5136]: I0320 07:32:55.397173 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:32:55 crc kubenswrapper[5136]: E0320 07:32:55.397529 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:32:55 crc kubenswrapper[5136]: I0320 07:32:55.939594 5136 generic.go:334] "Generic (PLEG): container finished" podID="55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" containerID="67f6e944fe0c99fb6f4f72159d059398308ceb6ec0b890c134a242b1d9825618" exitCode=0 Mar 20 07:32:55 crc kubenswrapper[5136]: I0320 07:32:55.939650 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhq7q" event={"ID":"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9","Type":"ContainerDied","Data":"67f6e944fe0c99fb6f4f72159d059398308ceb6ec0b890c134a242b1d9825618"} Mar 20 07:32:56 crc kubenswrapper[5136]: I0320 07:32:56.948623 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhq7q" event={"ID":"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9","Type":"ContainerStarted","Data":"ad2fcfc2976d7d3b3786e96ea767b4b5b428ac80cfbd2beeffdf6c9b002f5584"} Mar 20 07:32:56 crc kubenswrapper[5136]: I0320 07:32:56.967944 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mhq7q" podStartSLOduration=2.5713867969999997 podStartE2EDuration="4.967928317s" 
podCreationTimestamp="2026-03-20 07:32:52 +0000 UTC" firstStartedPulling="2026-03-20 07:32:53.924676593 +0000 UTC m=+2606.183987784" lastFinishedPulling="2026-03-20 07:32:56.321218103 +0000 UTC m=+2608.580529304" observedRunningTime="2026-03-20 07:32:56.967225557 +0000 UTC m=+2609.226536708" watchObservedRunningTime="2026-03-20 07:32:56.967928317 +0000 UTC m=+2609.227239468" Mar 20 07:33:03 crc kubenswrapper[5136]: I0320 07:33:03.074329 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mhq7q" Mar 20 07:33:03 crc kubenswrapper[5136]: I0320 07:33:03.074981 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mhq7q" Mar 20 07:33:03 crc kubenswrapper[5136]: I0320 07:33:03.155299 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mhq7q" Mar 20 07:33:04 crc kubenswrapper[5136]: I0320 07:33:04.079467 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mhq7q" Mar 20 07:33:04 crc kubenswrapper[5136]: I0320 07:33:04.149727 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mhq7q"] Mar 20 07:33:06 crc kubenswrapper[5136]: I0320 07:33:06.023201 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mhq7q" podUID="55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" containerName="registry-server" containerID="cri-o://ad2fcfc2976d7d3b3786e96ea767b4b5b428ac80cfbd2beeffdf6c9b002f5584" gracePeriod=2 Mar 20 07:33:06 crc kubenswrapper[5136]: I0320 07:33:06.468837 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mhq7q" Mar 20 07:33:06 crc kubenswrapper[5136]: I0320 07:33:06.535634 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8499c\" (UniqueName: \"kubernetes.io/projected/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-kube-api-access-8499c\") pod \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\" (UID: \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\") " Mar 20 07:33:06 crc kubenswrapper[5136]: I0320 07:33:06.535889 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-catalog-content\") pod \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\" (UID: \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\") " Mar 20 07:33:06 crc kubenswrapper[5136]: I0320 07:33:06.536164 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-utilities\") pod \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\" (UID: \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\") " Mar 20 07:33:06 crc kubenswrapper[5136]: I0320 07:33:06.537193 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-utilities" (OuterVolumeSpecName: "utilities") pod "55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" (UID: "55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:33:06 crc kubenswrapper[5136]: I0320 07:33:06.541330 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-kube-api-access-8499c" (OuterVolumeSpecName: "kube-api-access-8499c") pod "55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" (UID: "55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9"). InnerVolumeSpecName "kube-api-access-8499c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:33:06 crc kubenswrapper[5136]: I0320 07:33:06.602732 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" (UID: "55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:33:06 crc kubenswrapper[5136]: I0320 07:33:06.638893 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8499c\" (UniqueName: \"kubernetes.io/projected/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-kube-api-access-8499c\") on node \"crc\" DevicePath \"\"" Mar 20 07:33:06 crc kubenswrapper[5136]: I0320 07:33:06.639206 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:33:06 crc kubenswrapper[5136]: I0320 07:33:06.639359 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.033952 5136 generic.go:334] "Generic (PLEG): container finished" podID="55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" containerID="ad2fcfc2976d7d3b3786e96ea767b4b5b428ac80cfbd2beeffdf6c9b002f5584" exitCode=0 Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.034028 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mhq7q" Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.034022 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhq7q" event={"ID":"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9","Type":"ContainerDied","Data":"ad2fcfc2976d7d3b3786e96ea767b4b5b428ac80cfbd2beeffdf6c9b002f5584"} Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.035115 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhq7q" event={"ID":"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9","Type":"ContainerDied","Data":"f06afaedabec7e65e7e127cf6a160feaa8bfb209c8a10ddd6399781dab8be5fa"} Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.035186 5136 scope.go:117] "RemoveContainer" containerID="ad2fcfc2976d7d3b3786e96ea767b4b5b428ac80cfbd2beeffdf6c9b002f5584" Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.061537 5136 scope.go:117] "RemoveContainer" containerID="67f6e944fe0c99fb6f4f72159d059398308ceb6ec0b890c134a242b1d9825618" Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.092762 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mhq7q"] Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.102620 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mhq7q"] Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.108504 5136 scope.go:117] "RemoveContainer" containerID="3283c1fc7807e9df4491b391766cf37db017729931f2cac04d7bfc68083099a2" Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.138665 5136 scope.go:117] "RemoveContainer" containerID="ad2fcfc2976d7d3b3786e96ea767b4b5b428ac80cfbd2beeffdf6c9b002f5584" Mar 20 07:33:07 crc kubenswrapper[5136]: E0320 07:33:07.139337 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ad2fcfc2976d7d3b3786e96ea767b4b5b428ac80cfbd2beeffdf6c9b002f5584\": container with ID starting with ad2fcfc2976d7d3b3786e96ea767b4b5b428ac80cfbd2beeffdf6c9b002f5584 not found: ID does not exist" containerID="ad2fcfc2976d7d3b3786e96ea767b4b5b428ac80cfbd2beeffdf6c9b002f5584" Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.139476 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad2fcfc2976d7d3b3786e96ea767b4b5b428ac80cfbd2beeffdf6c9b002f5584"} err="failed to get container status \"ad2fcfc2976d7d3b3786e96ea767b4b5b428ac80cfbd2beeffdf6c9b002f5584\": rpc error: code = NotFound desc = could not find container \"ad2fcfc2976d7d3b3786e96ea767b4b5b428ac80cfbd2beeffdf6c9b002f5584\": container with ID starting with ad2fcfc2976d7d3b3786e96ea767b4b5b428ac80cfbd2beeffdf6c9b002f5584 not found: ID does not exist" Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.139658 5136 scope.go:117] "RemoveContainer" containerID="67f6e944fe0c99fb6f4f72159d059398308ceb6ec0b890c134a242b1d9825618" Mar 20 07:33:07 crc kubenswrapper[5136]: E0320 07:33:07.140089 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67f6e944fe0c99fb6f4f72159d059398308ceb6ec0b890c134a242b1d9825618\": container with ID starting with 67f6e944fe0c99fb6f4f72159d059398308ceb6ec0b890c134a242b1d9825618 not found: ID does not exist" containerID="67f6e944fe0c99fb6f4f72159d059398308ceb6ec0b890c134a242b1d9825618" Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.140117 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67f6e944fe0c99fb6f4f72159d059398308ceb6ec0b890c134a242b1d9825618"} err="failed to get container status \"67f6e944fe0c99fb6f4f72159d059398308ceb6ec0b890c134a242b1d9825618\": rpc error: code = NotFound desc = could not find container \"67f6e944fe0c99fb6f4f72159d059398308ceb6ec0b890c134a242b1d9825618\": container with ID 
starting with 67f6e944fe0c99fb6f4f72159d059398308ceb6ec0b890c134a242b1d9825618 not found: ID does not exist" Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.140135 5136 scope.go:117] "RemoveContainer" containerID="3283c1fc7807e9df4491b391766cf37db017729931f2cac04d7bfc68083099a2" Mar 20 07:33:07 crc kubenswrapper[5136]: E0320 07:33:07.140474 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3283c1fc7807e9df4491b391766cf37db017729931f2cac04d7bfc68083099a2\": container with ID starting with 3283c1fc7807e9df4491b391766cf37db017729931f2cac04d7bfc68083099a2 not found: ID does not exist" containerID="3283c1fc7807e9df4491b391766cf37db017729931f2cac04d7bfc68083099a2" Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.140517 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3283c1fc7807e9df4491b391766cf37db017729931f2cac04d7bfc68083099a2"} err="failed to get container status \"3283c1fc7807e9df4491b391766cf37db017729931f2cac04d7bfc68083099a2\": rpc error: code = NotFound desc = could not find container \"3283c1fc7807e9df4491b391766cf37db017729931f2cac04d7bfc68083099a2\": container with ID starting with 3283c1fc7807e9df4491b391766cf37db017729931f2cac04d7bfc68083099a2 not found: ID does not exist" Mar 20 07:33:08 crc kubenswrapper[5136]: I0320 07:33:08.410054 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" path="/var/lib/kubelet/pods/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9/volumes" Mar 20 07:33:09 crc kubenswrapper[5136]: I0320 07:33:09.396517 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:33:09 crc kubenswrapper[5136]: E0320 07:33:09.396886 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:33:21 crc kubenswrapper[5136]: I0320 07:33:21.396693 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:33:21 crc kubenswrapper[5136]: E0320 07:33:21.397920 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:33:32 crc kubenswrapper[5136]: I0320 07:33:32.397057 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:33:32 crc kubenswrapper[5136]: E0320 07:33:32.397840 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:33:46 crc kubenswrapper[5136]: I0320 07:33:46.397655 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:33:46 crc kubenswrapper[5136]: E0320 07:33:46.398363 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.169740 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566534-mfjdt"] Mar 20 07:34:00 crc kubenswrapper[5136]: E0320 07:34:00.174703 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" containerName="extract-content" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.174745 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" containerName="extract-content" Mar 20 07:34:00 crc kubenswrapper[5136]: E0320 07:34:00.174775 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" containerName="extract-utilities" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.174792 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" containerName="extract-utilities" Mar 20 07:34:00 crc kubenswrapper[5136]: E0320 07:34:00.174874 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" containerName="registry-server" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.174895 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" containerName="registry-server" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.175219 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" containerName="registry-server" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.176084 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566534-mfjdt" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.182523 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.182858 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.189112 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.204624 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566534-mfjdt"] Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.284917 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjjd7\" (UniqueName: \"kubernetes.io/projected/e5d24cb7-5c49-44f0-b18f-a09604ee8bb6-kube-api-access-hjjd7\") pod \"auto-csr-approver-29566534-mfjdt\" (UID: \"e5d24cb7-5c49-44f0-b18f-a09604ee8bb6\") " pod="openshift-infra/auto-csr-approver-29566534-mfjdt" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.386020 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjjd7\" (UniqueName: \"kubernetes.io/projected/e5d24cb7-5c49-44f0-b18f-a09604ee8bb6-kube-api-access-hjjd7\") pod \"auto-csr-approver-29566534-mfjdt\" (UID: \"e5d24cb7-5c49-44f0-b18f-a09604ee8bb6\") " pod="openshift-infra/auto-csr-approver-29566534-mfjdt" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.407850 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjjd7\" (UniqueName: \"kubernetes.io/projected/e5d24cb7-5c49-44f0-b18f-a09604ee8bb6-kube-api-access-hjjd7\") pod \"auto-csr-approver-29566534-mfjdt\" (UID: \"e5d24cb7-5c49-44f0-b18f-a09604ee8bb6\") " 
pod="openshift-infra/auto-csr-approver-29566534-mfjdt" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.511370 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566534-mfjdt" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.995361 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566534-mfjdt"] Mar 20 07:34:01 crc kubenswrapper[5136]: W0320 07:34:01.002228 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5d24cb7_5c49_44f0_b18f_a09604ee8bb6.slice/crio-1543c2a104ee30c5cb7c48956667587e2cf79018de54be07b36cfc184b97be6b WatchSource:0}: Error finding container 1543c2a104ee30c5cb7c48956667587e2cf79018de54be07b36cfc184b97be6b: Status 404 returned error can't find the container with id 1543c2a104ee30c5cb7c48956667587e2cf79018de54be07b36cfc184b97be6b Mar 20 07:34:01 crc kubenswrapper[5136]: I0320 07:34:01.397386 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:34:01 crc kubenswrapper[5136]: E0320 07:34:01.397800 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:34:01 crc kubenswrapper[5136]: I0320 07:34:01.459652 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566534-mfjdt" event={"ID":"e5d24cb7-5c49-44f0-b18f-a09604ee8bb6","Type":"ContainerStarted","Data":"1543c2a104ee30c5cb7c48956667587e2cf79018de54be07b36cfc184b97be6b"} Mar 20 07:34:02 crc 
kubenswrapper[5136]: I0320 07:34:02.468803 5136 generic.go:334] "Generic (PLEG): container finished" podID="e5d24cb7-5c49-44f0-b18f-a09604ee8bb6" containerID="8642a4da6cd6ed0a2fb9cab568877c9f7b33cdf51b38940824b385ba7da2a860" exitCode=0 Mar 20 07:34:02 crc kubenswrapper[5136]: I0320 07:34:02.468938 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566534-mfjdt" event={"ID":"e5d24cb7-5c49-44f0-b18f-a09604ee8bb6","Type":"ContainerDied","Data":"8642a4da6cd6ed0a2fb9cab568877c9f7b33cdf51b38940824b385ba7da2a860"} Mar 20 07:34:03 crc kubenswrapper[5136]: I0320 07:34:03.754518 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566534-mfjdt" Mar 20 07:34:03 crc kubenswrapper[5136]: I0320 07:34:03.832128 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjjd7\" (UniqueName: \"kubernetes.io/projected/e5d24cb7-5c49-44f0-b18f-a09604ee8bb6-kube-api-access-hjjd7\") pod \"e5d24cb7-5c49-44f0-b18f-a09604ee8bb6\" (UID: \"e5d24cb7-5c49-44f0-b18f-a09604ee8bb6\") " Mar 20 07:34:03 crc kubenswrapper[5136]: I0320 07:34:03.838310 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5d24cb7-5c49-44f0-b18f-a09604ee8bb6-kube-api-access-hjjd7" (OuterVolumeSpecName: "kube-api-access-hjjd7") pod "e5d24cb7-5c49-44f0-b18f-a09604ee8bb6" (UID: "e5d24cb7-5c49-44f0-b18f-a09604ee8bb6"). InnerVolumeSpecName "kube-api-access-hjjd7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:34:03 crc kubenswrapper[5136]: I0320 07:34:03.934303 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjjd7\" (UniqueName: \"kubernetes.io/projected/e5d24cb7-5c49-44f0-b18f-a09604ee8bb6-kube-api-access-hjjd7\") on node \"crc\" DevicePath \"\"" Mar 20 07:34:04 crc kubenswrapper[5136]: I0320 07:34:04.483887 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566534-mfjdt" event={"ID":"e5d24cb7-5c49-44f0-b18f-a09604ee8bb6","Type":"ContainerDied","Data":"1543c2a104ee30c5cb7c48956667587e2cf79018de54be07b36cfc184b97be6b"} Mar 20 07:34:04 crc kubenswrapper[5136]: I0320 07:34:04.484335 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1543c2a104ee30c5cb7c48956667587e2cf79018de54be07b36cfc184b97be6b" Mar 20 07:34:04 crc kubenswrapper[5136]: I0320 07:34:04.484048 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566534-mfjdt" Mar 20 07:34:04 crc kubenswrapper[5136]: I0320 07:34:04.837531 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566528-p9gfh"] Mar 20 07:34:04 crc kubenswrapper[5136]: I0320 07:34:04.847648 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566528-p9gfh"] Mar 20 07:34:06 crc kubenswrapper[5136]: I0320 07:34:06.415709 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0663cd7c-704c-4495-8271-f55538649003" path="/var/lib/kubelet/pods/0663cd7c-704c-4495-8271-f55538649003/volumes" Mar 20 07:34:14 crc kubenswrapper[5136]: I0320 07:34:14.397440 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:34:14 crc kubenswrapper[5136]: E0320 07:34:14.400396 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:34:25 crc kubenswrapper[5136]: I0320 07:34:25.405737 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:34:25 crc kubenswrapper[5136]: E0320 07:34:25.406968 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:34:33 crc kubenswrapper[5136]: I0320 07:34:33.914313 5136 scope.go:117] "RemoveContainer" containerID="8e7514aba4ea3d84ec9496fc84994ced79208352205e65a08cbf2bd32660e7b5" Mar 20 07:34:37 crc kubenswrapper[5136]: I0320 07:34:37.397374 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:34:37 crc kubenswrapper[5136]: E0320 07:34:37.398208 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:34:51 crc kubenswrapper[5136]: I0320 07:34:51.397642 5136 scope.go:117] "RemoveContainer" 
containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:34:51 crc kubenswrapper[5136]: E0320 07:34:51.398669 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:35:06 crc kubenswrapper[5136]: I0320 07:35:06.396473 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:35:06 crc kubenswrapper[5136]: E0320 07:35:06.397407 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:35:21 crc kubenswrapper[5136]: I0320 07:35:21.396959 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:35:21 crc kubenswrapper[5136]: E0320 07:35:21.397586 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.347386 5136 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-28pvc"] Mar 20 07:35:31 crc kubenswrapper[5136]: E0320 07:35:31.347936 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d24cb7-5c49-44f0-b18f-a09604ee8bb6" containerName="oc" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.347971 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d24cb7-5c49-44f0-b18f-a09604ee8bb6" containerName="oc" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.348130 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5d24cb7-5c49-44f0-b18f-a09604ee8bb6" containerName="oc" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.349281 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.364434 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-28pvc"] Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.420788 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fqf2\" (UniqueName: \"kubernetes.io/projected/c7c927c5-116e-433d-b782-51792c8a0ae3-kube-api-access-5fqf2\") pod \"redhat-operators-28pvc\" (UID: \"c7c927c5-116e-433d-b782-51792c8a0ae3\") " pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.420941 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7c927c5-116e-433d-b782-51792c8a0ae3-utilities\") pod \"redhat-operators-28pvc\" (UID: \"c7c927c5-116e-433d-b782-51792c8a0ae3\") " pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.420984 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7c927c5-116e-433d-b782-51792c8a0ae3-catalog-content\") pod \"redhat-operators-28pvc\" (UID: \"c7c927c5-116e-433d-b782-51792c8a0ae3\") " pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.522218 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fqf2\" (UniqueName: \"kubernetes.io/projected/c7c927c5-116e-433d-b782-51792c8a0ae3-kube-api-access-5fqf2\") pod \"redhat-operators-28pvc\" (UID: \"c7c927c5-116e-433d-b782-51792c8a0ae3\") " pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.522354 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7c927c5-116e-433d-b782-51792c8a0ae3-utilities\") pod \"redhat-operators-28pvc\" (UID: \"c7c927c5-116e-433d-b782-51792c8a0ae3\") " pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.522383 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7c927c5-116e-433d-b782-51792c8a0ae3-catalog-content\") pod \"redhat-operators-28pvc\" (UID: \"c7c927c5-116e-433d-b782-51792c8a0ae3\") " pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.523341 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7c927c5-116e-433d-b782-51792c8a0ae3-utilities\") pod \"redhat-operators-28pvc\" (UID: \"c7c927c5-116e-433d-b782-51792c8a0ae3\") " pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.523758 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c7c927c5-116e-433d-b782-51792c8a0ae3-catalog-content\") pod \"redhat-operators-28pvc\" (UID: \"c7c927c5-116e-433d-b782-51792c8a0ae3\") " pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.540548 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fqf2\" (UniqueName: \"kubernetes.io/projected/c7c927c5-116e-433d-b782-51792c8a0ae3-kube-api-access-5fqf2\") pod \"redhat-operators-28pvc\" (UID: \"c7c927c5-116e-433d-b782-51792c8a0ae3\") " pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.668485 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:32 crc kubenswrapper[5136]: I0320 07:35:32.081627 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-28pvc"] Mar 20 07:35:32 crc kubenswrapper[5136]: I0320 07:35:32.252229 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28pvc" event={"ID":"c7c927c5-116e-433d-b782-51792c8a0ae3","Type":"ContainerStarted","Data":"e1aa85e32df54ec39eacb2972d512c55472988780e7fa2841ba88678618668a0"} Mar 20 07:35:33 crc kubenswrapper[5136]: I0320 07:35:33.260940 5136 generic.go:334] "Generic (PLEG): container finished" podID="c7c927c5-116e-433d-b782-51792c8a0ae3" containerID="0492f5627451c1e87797cbec14ab8d381cc460988438d9529b281fa70312b00f" exitCode=0 Mar 20 07:35:33 crc kubenswrapper[5136]: I0320 07:35:33.261042 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28pvc" event={"ID":"c7c927c5-116e-433d-b782-51792c8a0ae3","Type":"ContainerDied","Data":"0492f5627451c1e87797cbec14ab8d381cc460988438d9529b281fa70312b00f"} Mar 20 07:35:34 crc kubenswrapper[5136]: I0320 07:35:34.270365 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-28pvc" event={"ID":"c7c927c5-116e-433d-b782-51792c8a0ae3","Type":"ContainerStarted","Data":"79ec067cd70507e41bdecf4d97e168873173fc5703135e56d033213e96c9483f"} Mar 20 07:35:35 crc kubenswrapper[5136]: I0320 07:35:35.278620 5136 generic.go:334] "Generic (PLEG): container finished" podID="c7c927c5-116e-433d-b782-51792c8a0ae3" containerID="79ec067cd70507e41bdecf4d97e168873173fc5703135e56d033213e96c9483f" exitCode=0 Mar 20 07:35:35 crc kubenswrapper[5136]: I0320 07:35:35.278686 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28pvc" event={"ID":"c7c927c5-116e-433d-b782-51792c8a0ae3","Type":"ContainerDied","Data":"79ec067cd70507e41bdecf4d97e168873173fc5703135e56d033213e96c9483f"} Mar 20 07:35:35 crc kubenswrapper[5136]: I0320 07:35:35.396643 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:35:35 crc kubenswrapper[5136]: E0320 07:35:35.396940 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:35:36 crc kubenswrapper[5136]: I0320 07:35:36.290527 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28pvc" event={"ID":"c7c927c5-116e-433d-b782-51792c8a0ae3","Type":"ContainerStarted","Data":"860f4ad19808a6ee3b25a21d2eb4bd1657ceeab22b48d60c428e37e61775bab9"} Mar 20 07:35:36 crc kubenswrapper[5136]: I0320 07:35:36.318584 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-28pvc" podStartSLOduration=2.8876636270000002 
podStartE2EDuration="5.318545863s" podCreationTimestamp="2026-03-20 07:35:31 +0000 UTC" firstStartedPulling="2026-03-20 07:35:33.263774358 +0000 UTC m=+2765.523085519" lastFinishedPulling="2026-03-20 07:35:35.694656604 +0000 UTC m=+2767.953967755" observedRunningTime="2026-03-20 07:35:36.313605126 +0000 UTC m=+2768.572916267" watchObservedRunningTime="2026-03-20 07:35:36.318545863 +0000 UTC m=+2768.577857044" Mar 20 07:35:41 crc kubenswrapper[5136]: I0320 07:35:41.681643 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:41 crc kubenswrapper[5136]: I0320 07:35:41.682141 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:42 crc kubenswrapper[5136]: I0320 07:35:42.732800 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-28pvc" podUID="c7c927c5-116e-433d-b782-51792c8a0ae3" containerName="registry-server" probeResult="failure" output=< Mar 20 07:35:42 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 07:35:42 crc kubenswrapper[5136]: > Mar 20 07:35:46 crc kubenswrapper[5136]: I0320 07:35:46.397269 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:35:46 crc kubenswrapper[5136]: E0320 07:35:46.398027 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:35:51 crc kubenswrapper[5136]: I0320 07:35:51.741766 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:51 crc kubenswrapper[5136]: I0320 07:35:51.827926 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:51 crc kubenswrapper[5136]: I0320 07:35:51.993913 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-28pvc"] Mar 20 07:35:53 crc kubenswrapper[5136]: I0320 07:35:53.456476 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-28pvc" podUID="c7c927c5-116e-433d-b782-51792c8a0ae3" containerName="registry-server" containerID="cri-o://860f4ad19808a6ee3b25a21d2eb4bd1657ceeab22b48d60c428e37e61775bab9" gracePeriod=2 Mar 20 07:35:53 crc kubenswrapper[5136]: I0320 07:35:53.809312 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:53 crc kubenswrapper[5136]: I0320 07:35:53.993327 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7c927c5-116e-433d-b782-51792c8a0ae3-utilities\") pod \"c7c927c5-116e-433d-b782-51792c8a0ae3\" (UID: \"c7c927c5-116e-433d-b782-51792c8a0ae3\") " Mar 20 07:35:53 crc kubenswrapper[5136]: I0320 07:35:53.993389 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fqf2\" (UniqueName: \"kubernetes.io/projected/c7c927c5-116e-433d-b782-51792c8a0ae3-kube-api-access-5fqf2\") pod \"c7c927c5-116e-433d-b782-51792c8a0ae3\" (UID: \"c7c927c5-116e-433d-b782-51792c8a0ae3\") " Mar 20 07:35:53 crc kubenswrapper[5136]: I0320 07:35:53.993476 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7c927c5-116e-433d-b782-51792c8a0ae3-catalog-content\") pod 
\"c7c927c5-116e-433d-b782-51792c8a0ae3\" (UID: \"c7c927c5-116e-433d-b782-51792c8a0ae3\") " Mar 20 07:35:53 crc kubenswrapper[5136]: I0320 07:35:53.994729 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7c927c5-116e-433d-b782-51792c8a0ae3-utilities" (OuterVolumeSpecName: "utilities") pod "c7c927c5-116e-433d-b782-51792c8a0ae3" (UID: "c7c927c5-116e-433d-b782-51792c8a0ae3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.002625 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7c927c5-116e-433d-b782-51792c8a0ae3-kube-api-access-5fqf2" (OuterVolumeSpecName: "kube-api-access-5fqf2") pod "c7c927c5-116e-433d-b782-51792c8a0ae3" (UID: "c7c927c5-116e-433d-b782-51792c8a0ae3"). InnerVolumeSpecName "kube-api-access-5fqf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.094907 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fqf2\" (UniqueName: \"kubernetes.io/projected/c7c927c5-116e-433d-b782-51792c8a0ae3-kube-api-access-5fqf2\") on node \"crc\" DevicePath \"\"" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.094950 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7c927c5-116e-433d-b782-51792c8a0ae3-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.161950 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7c927c5-116e-433d-b782-51792c8a0ae3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7c927c5-116e-433d-b782-51792c8a0ae3" (UID: "c7c927c5-116e-433d-b782-51792c8a0ae3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.196749 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7c927c5-116e-433d-b782-51792c8a0ae3-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.467312 5136 generic.go:334] "Generic (PLEG): container finished" podID="c7c927c5-116e-433d-b782-51792c8a0ae3" containerID="860f4ad19808a6ee3b25a21d2eb4bd1657ceeab22b48d60c428e37e61775bab9" exitCode=0 Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.467405 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28pvc" event={"ID":"c7c927c5-116e-433d-b782-51792c8a0ae3","Type":"ContainerDied","Data":"860f4ad19808a6ee3b25a21d2eb4bd1657ceeab22b48d60c428e37e61775bab9"} Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.467661 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28pvc" event={"ID":"c7c927c5-116e-433d-b782-51792c8a0ae3","Type":"ContainerDied","Data":"e1aa85e32df54ec39eacb2972d512c55472988780e7fa2841ba88678618668a0"} Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.467443 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.467683 5136 scope.go:117] "RemoveContainer" containerID="860f4ad19808a6ee3b25a21d2eb4bd1657ceeab22b48d60c428e37e61775bab9" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.492333 5136 scope.go:117] "RemoveContainer" containerID="79ec067cd70507e41bdecf4d97e168873173fc5703135e56d033213e96c9483f" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.498155 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-28pvc"] Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.502197 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-28pvc"] Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.520271 5136 scope.go:117] "RemoveContainer" containerID="0492f5627451c1e87797cbec14ab8d381cc460988438d9529b281fa70312b00f" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.554410 5136 scope.go:117] "RemoveContainer" containerID="860f4ad19808a6ee3b25a21d2eb4bd1657ceeab22b48d60c428e37e61775bab9" Mar 20 07:35:54 crc kubenswrapper[5136]: E0320 07:35:54.554741 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"860f4ad19808a6ee3b25a21d2eb4bd1657ceeab22b48d60c428e37e61775bab9\": container with ID starting with 860f4ad19808a6ee3b25a21d2eb4bd1657ceeab22b48d60c428e37e61775bab9 not found: ID does not exist" containerID="860f4ad19808a6ee3b25a21d2eb4bd1657ceeab22b48d60c428e37e61775bab9" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.554772 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"860f4ad19808a6ee3b25a21d2eb4bd1657ceeab22b48d60c428e37e61775bab9"} err="failed to get container status \"860f4ad19808a6ee3b25a21d2eb4bd1657ceeab22b48d60c428e37e61775bab9\": rpc error: code = NotFound desc = could not find container 
\"860f4ad19808a6ee3b25a21d2eb4bd1657ceeab22b48d60c428e37e61775bab9\": container with ID starting with 860f4ad19808a6ee3b25a21d2eb4bd1657ceeab22b48d60c428e37e61775bab9 not found: ID does not exist" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.554791 5136 scope.go:117] "RemoveContainer" containerID="79ec067cd70507e41bdecf4d97e168873173fc5703135e56d033213e96c9483f" Mar 20 07:35:54 crc kubenswrapper[5136]: E0320 07:35:54.555421 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79ec067cd70507e41bdecf4d97e168873173fc5703135e56d033213e96c9483f\": container with ID starting with 79ec067cd70507e41bdecf4d97e168873173fc5703135e56d033213e96c9483f not found: ID does not exist" containerID="79ec067cd70507e41bdecf4d97e168873173fc5703135e56d033213e96c9483f" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.555450 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79ec067cd70507e41bdecf4d97e168873173fc5703135e56d033213e96c9483f"} err="failed to get container status \"79ec067cd70507e41bdecf4d97e168873173fc5703135e56d033213e96c9483f\": rpc error: code = NotFound desc = could not find container \"79ec067cd70507e41bdecf4d97e168873173fc5703135e56d033213e96c9483f\": container with ID starting with 79ec067cd70507e41bdecf4d97e168873173fc5703135e56d033213e96c9483f not found: ID does not exist" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.555464 5136 scope.go:117] "RemoveContainer" containerID="0492f5627451c1e87797cbec14ab8d381cc460988438d9529b281fa70312b00f" Mar 20 07:35:54 crc kubenswrapper[5136]: E0320 07:35:54.555709 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0492f5627451c1e87797cbec14ab8d381cc460988438d9529b281fa70312b00f\": container with ID starting with 0492f5627451c1e87797cbec14ab8d381cc460988438d9529b281fa70312b00f not found: ID does not exist" 
containerID="0492f5627451c1e87797cbec14ab8d381cc460988438d9529b281fa70312b00f" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.555730 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0492f5627451c1e87797cbec14ab8d381cc460988438d9529b281fa70312b00f"} err="failed to get container status \"0492f5627451c1e87797cbec14ab8d381cc460988438d9529b281fa70312b00f\": rpc error: code = NotFound desc = could not find container \"0492f5627451c1e87797cbec14ab8d381cc460988438d9529b281fa70312b00f\": container with ID starting with 0492f5627451c1e87797cbec14ab8d381cc460988438d9529b281fa70312b00f not found: ID does not exist" Mar 20 07:35:56 crc kubenswrapper[5136]: I0320 07:35:56.417731 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7c927c5-116e-433d-b782-51792c8a0ae3" path="/var/lib/kubelet/pods/c7c927c5-116e-433d-b782-51792c8a0ae3/volumes" Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.144235 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566536-gdck4"] Mar 20 07:36:00 crc kubenswrapper[5136]: E0320 07:36:00.144836 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7c927c5-116e-433d-b782-51792c8a0ae3" containerName="extract-utilities" Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.144849 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7c927c5-116e-433d-b782-51792c8a0ae3" containerName="extract-utilities" Mar 20 07:36:00 crc kubenswrapper[5136]: E0320 07:36:00.144874 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7c927c5-116e-433d-b782-51792c8a0ae3" containerName="registry-server" Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.144880 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7c927c5-116e-433d-b782-51792c8a0ae3" containerName="registry-server" Mar 20 07:36:00 crc kubenswrapper[5136]: E0320 07:36:00.144890 5136 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="c7c927c5-116e-433d-b782-51792c8a0ae3" containerName="extract-content"
Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.144896 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7c927c5-116e-433d-b782-51792c8a0ae3" containerName="extract-content"
Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.145010 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7c927c5-116e-433d-b782-51792c8a0ae3" containerName="registry-server"
Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.145460 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566536-gdck4"
Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.149857 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745"
Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.150568 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.150742 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.158741 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566536-gdck4"]
Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.188623 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbw45\" (UniqueName: \"kubernetes.io/projected/52f90699-ed0d-4f94-ac8f-0710a3df7d1d-kube-api-access-fbw45\") pod \"auto-csr-approver-29566536-gdck4\" (UID: \"52f90699-ed0d-4f94-ac8f-0710a3df7d1d\") " pod="openshift-infra/auto-csr-approver-29566536-gdck4"
Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.289298 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbw45\" (UniqueName: \"kubernetes.io/projected/52f90699-ed0d-4f94-ac8f-0710a3df7d1d-kube-api-access-fbw45\") pod \"auto-csr-approver-29566536-gdck4\" (UID: \"52f90699-ed0d-4f94-ac8f-0710a3df7d1d\") " pod="openshift-infra/auto-csr-approver-29566536-gdck4"
Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.310248 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbw45\" (UniqueName: \"kubernetes.io/projected/52f90699-ed0d-4f94-ac8f-0710a3df7d1d-kube-api-access-fbw45\") pod \"auto-csr-approver-29566536-gdck4\" (UID: \"52f90699-ed0d-4f94-ac8f-0710a3df7d1d\") " pod="openshift-infra/auto-csr-approver-29566536-gdck4"
Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.477216 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566536-gdck4"
Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.733376 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566536-gdck4"]
Mar 20 07:36:01 crc kubenswrapper[5136]: I0320 07:36:01.396941 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365"
Mar 20 07:36:01 crc kubenswrapper[5136]: E0320 07:36:01.397334 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 07:36:01 crc kubenswrapper[5136]: I0320 07:36:01.530878 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566536-gdck4" event={"ID":"52f90699-ed0d-4f94-ac8f-0710a3df7d1d","Type":"ContainerStarted","Data":"90713c50ab758e3ee9df428264a1212b05faa55593554495076cc4b56e2cc6f7"}
Mar 20 07:36:02 crc kubenswrapper[5136]: I0320 07:36:02.541136 5136 generic.go:334] "Generic (PLEG): container finished" podID="52f90699-ed0d-4f94-ac8f-0710a3df7d1d" containerID="35330414dc071593a3b58b60db6f11d64d8df670930fd1d800151dee73ac1250" exitCode=0
Mar 20 07:36:02 crc kubenswrapper[5136]: I0320 07:36:02.541200 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566536-gdck4" event={"ID":"52f90699-ed0d-4f94-ac8f-0710a3df7d1d","Type":"ContainerDied","Data":"35330414dc071593a3b58b60db6f11d64d8df670930fd1d800151dee73ac1250"}
Mar 20 07:36:03 crc kubenswrapper[5136]: I0320 07:36:03.824609 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566536-gdck4"
Mar 20 07:36:03 crc kubenswrapper[5136]: I0320 07:36:03.842738 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbw45\" (UniqueName: \"kubernetes.io/projected/52f90699-ed0d-4f94-ac8f-0710a3df7d1d-kube-api-access-fbw45\") pod \"52f90699-ed0d-4f94-ac8f-0710a3df7d1d\" (UID: \"52f90699-ed0d-4f94-ac8f-0710a3df7d1d\") "
Mar 20 07:36:03 crc kubenswrapper[5136]: I0320 07:36:03.849006 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52f90699-ed0d-4f94-ac8f-0710a3df7d1d-kube-api-access-fbw45" (OuterVolumeSpecName: "kube-api-access-fbw45") pod "52f90699-ed0d-4f94-ac8f-0710a3df7d1d" (UID: "52f90699-ed0d-4f94-ac8f-0710a3df7d1d"). InnerVolumeSpecName "kube-api-access-fbw45". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:36:03 crc kubenswrapper[5136]: I0320 07:36:03.944611 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbw45\" (UniqueName: \"kubernetes.io/projected/52f90699-ed0d-4f94-ac8f-0710a3df7d1d-kube-api-access-fbw45\") on node \"crc\" DevicePath \"\""
Mar 20 07:36:04 crc kubenswrapper[5136]: I0320 07:36:04.563942 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566536-gdck4" event={"ID":"52f90699-ed0d-4f94-ac8f-0710a3df7d1d","Type":"ContainerDied","Data":"90713c50ab758e3ee9df428264a1212b05faa55593554495076cc4b56e2cc6f7"}
Mar 20 07:36:04 crc kubenswrapper[5136]: I0320 07:36:04.563995 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90713c50ab758e3ee9df428264a1212b05faa55593554495076cc4b56e2cc6f7"
Mar 20 07:36:04 crc kubenswrapper[5136]: I0320 07:36:04.564071 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566536-gdck4"
Mar 20 07:36:04 crc kubenswrapper[5136]: I0320 07:36:04.925694 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566530-wht58"]
Mar 20 07:36:04 crc kubenswrapper[5136]: I0320 07:36:04.934140 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566530-wht58"]
Mar 20 07:36:06 crc kubenswrapper[5136]: I0320 07:36:06.409603 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d0faa53-8471-40c0-a2ed-ef66d5b66e72" path="/var/lib/kubelet/pods/2d0faa53-8471-40c0-a2ed-ef66d5b66e72/volumes"
Mar 20 07:36:13 crc kubenswrapper[5136]: I0320 07:36:13.396971 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365"
Mar 20 07:36:13 crc kubenswrapper[5136]: E0320 07:36:13.398647 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 07:36:26 crc kubenswrapper[5136]: I0320 07:36:26.396445 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365"
Mar 20 07:36:26 crc kubenswrapper[5136]: I0320 07:36:26.756642 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"e931a73800d6f8bfc60e4afb965ee2c2a8a596df85cf1a7cd9146234535ad322"}
Mar 20 07:36:34 crc kubenswrapper[5136]: I0320 07:36:34.051516 5136 scope.go:117] "RemoveContainer" containerID="a0171e6422989bbfb70e3a76ced8595e932c612b43527acfa116a80d4b912265"
Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.149518 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566538-d9gdj"]
Mar 20 07:38:00 crc kubenswrapper[5136]: E0320 07:38:00.150532 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f90699-ed0d-4f94-ac8f-0710a3df7d1d" containerName="oc"
Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.150552 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f90699-ed0d-4f94-ac8f-0710a3df7d1d" containerName="oc"
Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.150796 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="52f90699-ed0d-4f94-ac8f-0710a3df7d1d" containerName="oc"
Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.151440 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566538-d9gdj"
Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.156020 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.156237 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.156377 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745"
Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.172665 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566538-d9gdj"]
Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.225680 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgm92\" (UniqueName: \"kubernetes.io/projected/92f22f97-0d01-4c04-8d7c-8f0ec81c1559-kube-api-access-rgm92\") pod \"auto-csr-approver-29566538-d9gdj\" (UID: \"92f22f97-0d01-4c04-8d7c-8f0ec81c1559\") " pod="openshift-infra/auto-csr-approver-29566538-d9gdj"
Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.326592 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgm92\" (UniqueName: \"kubernetes.io/projected/92f22f97-0d01-4c04-8d7c-8f0ec81c1559-kube-api-access-rgm92\") pod \"auto-csr-approver-29566538-d9gdj\" (UID: \"92f22f97-0d01-4c04-8d7c-8f0ec81c1559\") " pod="openshift-infra/auto-csr-approver-29566538-d9gdj"
Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.344730 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgm92\" (UniqueName: \"kubernetes.io/projected/92f22f97-0d01-4c04-8d7c-8f0ec81c1559-kube-api-access-rgm92\") pod \"auto-csr-approver-29566538-d9gdj\" (UID: \"92f22f97-0d01-4c04-8d7c-8f0ec81c1559\") " pod="openshift-infra/auto-csr-approver-29566538-d9gdj"
Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.474919 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566538-d9gdj"
Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.967553 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.970083 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566538-d9gdj"]
Mar 20 07:38:01 crc kubenswrapper[5136]: I0320 07:38:01.566344 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566538-d9gdj" event={"ID":"92f22f97-0d01-4c04-8d7c-8f0ec81c1559","Type":"ContainerStarted","Data":"75f814047295dea1a3e10b2efbf1c95fc9c334163f82f2d34bf9c79441a48819"}
Mar 20 07:38:02 crc kubenswrapper[5136]: I0320 07:38:02.576022 5136 generic.go:334] "Generic (PLEG): container finished" podID="92f22f97-0d01-4c04-8d7c-8f0ec81c1559" containerID="085754035e9717018bfa13a75fa5176cf36ff83294b4916d7b6b6031d31b5c22" exitCode=0
Mar 20 07:38:02 crc kubenswrapper[5136]: I0320 07:38:02.576143 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566538-d9gdj" event={"ID":"92f22f97-0d01-4c04-8d7c-8f0ec81c1559","Type":"ContainerDied","Data":"085754035e9717018bfa13a75fa5176cf36ff83294b4916d7b6b6031d31b5c22"}
Mar 20 07:38:03 crc kubenswrapper[5136]: I0320 07:38:03.885047 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566538-d9gdj"
Mar 20 07:38:03 crc kubenswrapper[5136]: I0320 07:38:03.979566 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgm92\" (UniqueName: \"kubernetes.io/projected/92f22f97-0d01-4c04-8d7c-8f0ec81c1559-kube-api-access-rgm92\") pod \"92f22f97-0d01-4c04-8d7c-8f0ec81c1559\" (UID: \"92f22f97-0d01-4c04-8d7c-8f0ec81c1559\") "
Mar 20 07:38:03 crc kubenswrapper[5136]: I0320 07:38:03.990575 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92f22f97-0d01-4c04-8d7c-8f0ec81c1559-kube-api-access-rgm92" (OuterVolumeSpecName: "kube-api-access-rgm92") pod "92f22f97-0d01-4c04-8d7c-8f0ec81c1559" (UID: "92f22f97-0d01-4c04-8d7c-8f0ec81c1559"). InnerVolumeSpecName "kube-api-access-rgm92". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:38:04 crc kubenswrapper[5136]: I0320 07:38:04.080799 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgm92\" (UniqueName: \"kubernetes.io/projected/92f22f97-0d01-4c04-8d7c-8f0ec81c1559-kube-api-access-rgm92\") on node \"crc\" DevicePath \"\""
Mar 20 07:38:04 crc kubenswrapper[5136]: I0320 07:38:04.594993 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566538-d9gdj" event={"ID":"92f22f97-0d01-4c04-8d7c-8f0ec81c1559","Type":"ContainerDied","Data":"75f814047295dea1a3e10b2efbf1c95fc9c334163f82f2d34bf9c79441a48819"}
Mar 20 07:38:04 crc kubenswrapper[5136]: I0320 07:38:04.595596 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75f814047295dea1a3e10b2efbf1c95fc9c334163f82f2d34bf9c79441a48819"
Mar 20 07:38:04 crc kubenswrapper[5136]: I0320 07:38:04.595109 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566538-d9gdj"
Mar 20 07:38:04 crc kubenswrapper[5136]: I0320 07:38:04.991806 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566532-twtd9"]
Mar 20 07:38:04 crc kubenswrapper[5136]: I0320 07:38:04.998904 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566532-twtd9"]
Mar 20 07:38:06 crc kubenswrapper[5136]: I0320 07:38:06.409751 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4c27187-55d8-4db4-9cae-d77617300a14" path="/var/lib/kubelet/pods/e4c27187-55d8-4db4-9cae-d77617300a14/volumes"
Mar 20 07:38:30 crc kubenswrapper[5136]: I0320 07:38:30.963110 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rnbzz"]
Mar 20 07:38:30 crc kubenswrapper[5136]: E0320 07:38:30.964296 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92f22f97-0d01-4c04-8d7c-8f0ec81c1559" containerName="oc"
Mar 20 07:38:30 crc kubenswrapper[5136]: I0320 07:38:30.964324 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="92f22f97-0d01-4c04-8d7c-8f0ec81c1559" containerName="oc"
Mar 20 07:38:30 crc kubenswrapper[5136]: I0320 07:38:30.964584 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="92f22f97-0d01-4c04-8d7c-8f0ec81c1559" containerName="oc"
Mar 20 07:38:30 crc kubenswrapper[5136]: I0320 07:38:30.966516 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnbzz"
Mar 20 07:38:30 crc kubenswrapper[5136]: I0320 07:38:30.971778 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnbzz"]
Mar 20 07:38:31 crc kubenswrapper[5136]: I0320 07:38:31.025928 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-utilities\") pod \"redhat-marketplace-rnbzz\" (UID: \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\") " pod="openshift-marketplace/redhat-marketplace-rnbzz"
Mar 20 07:38:31 crc kubenswrapper[5136]: I0320 07:38:31.025984 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7g92\" (UniqueName: \"kubernetes.io/projected/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-kube-api-access-d7g92\") pod \"redhat-marketplace-rnbzz\" (UID: \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\") " pod="openshift-marketplace/redhat-marketplace-rnbzz"
Mar 20 07:38:31 crc kubenswrapper[5136]: I0320 07:38:31.026059 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-catalog-content\") pod \"redhat-marketplace-rnbzz\" (UID: \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\") " pod="openshift-marketplace/redhat-marketplace-rnbzz"
Mar 20 07:38:31 crc kubenswrapper[5136]: I0320 07:38:31.127572 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-catalog-content\") pod \"redhat-marketplace-rnbzz\" (UID: \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\") " pod="openshift-marketplace/redhat-marketplace-rnbzz"
Mar 20 07:38:31 crc kubenswrapper[5136]: I0320 07:38:31.127678 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-utilities\") pod \"redhat-marketplace-rnbzz\" (UID: \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\") " pod="openshift-marketplace/redhat-marketplace-rnbzz"
Mar 20 07:38:31 crc kubenswrapper[5136]: I0320 07:38:31.127698 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7g92\" (UniqueName: \"kubernetes.io/projected/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-kube-api-access-d7g92\") pod \"redhat-marketplace-rnbzz\" (UID: \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\") " pod="openshift-marketplace/redhat-marketplace-rnbzz"
Mar 20 07:38:31 crc kubenswrapper[5136]: I0320 07:38:31.128514 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-utilities\") pod \"redhat-marketplace-rnbzz\" (UID: \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\") " pod="openshift-marketplace/redhat-marketplace-rnbzz"
Mar 20 07:38:31 crc kubenswrapper[5136]: I0320 07:38:31.128535 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-catalog-content\") pod \"redhat-marketplace-rnbzz\" (UID: \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\") " pod="openshift-marketplace/redhat-marketplace-rnbzz"
Mar 20 07:38:31 crc kubenswrapper[5136]: I0320 07:38:31.152918 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7g92\" (UniqueName: \"kubernetes.io/projected/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-kube-api-access-d7g92\") pod \"redhat-marketplace-rnbzz\" (UID: \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\") " pod="openshift-marketplace/redhat-marketplace-rnbzz"
Mar 20 07:38:31 crc kubenswrapper[5136]: I0320 07:38:31.299892 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnbzz"
Mar 20 07:38:31 crc kubenswrapper[5136]: I0320 07:38:31.786196 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnbzz"]
Mar 20 07:38:31 crc kubenswrapper[5136]: I0320 07:38:31.843433 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnbzz" event={"ID":"5e0cc814-f9aa-4a89-ba33-a9729f19d76e","Type":"ContainerStarted","Data":"695de2aa0a9f443217d9e1516f6c50bddda8614b8b750d85a0fcb48982677f15"}
Mar 20 07:38:32 crc kubenswrapper[5136]: I0320 07:38:32.854864 5136 generic.go:334] "Generic (PLEG): container finished" podID="5e0cc814-f9aa-4a89-ba33-a9729f19d76e" containerID="8cd7fccca82be14dd1ed6de2bea6154c82344b596e72f18152f074e61a57cec9" exitCode=0
Mar 20 07:38:32 crc kubenswrapper[5136]: I0320 07:38:32.854949 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnbzz" event={"ID":"5e0cc814-f9aa-4a89-ba33-a9729f19d76e","Type":"ContainerDied","Data":"8cd7fccca82be14dd1ed6de2bea6154c82344b596e72f18152f074e61a57cec9"}
Mar 20 07:38:33 crc kubenswrapper[5136]: I0320 07:38:33.868527 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnbzz" event={"ID":"5e0cc814-f9aa-4a89-ba33-a9729f19d76e","Type":"ContainerStarted","Data":"5bc1f0e48442347b157d4a68df6a9bbf2c3efc531d8802f64220c6b8565cfae5"}
Mar 20 07:38:34 crc kubenswrapper[5136]: I0320 07:38:34.152960 5136 scope.go:117] "RemoveContainer" containerID="5c9dbddef3617b2a1a9f29b6615bed2e74b730bf03006a402fac0528653fa989"
Mar 20 07:38:34 crc kubenswrapper[5136]: I0320 07:38:34.886745 5136 generic.go:334] "Generic (PLEG): container finished" podID="5e0cc814-f9aa-4a89-ba33-a9729f19d76e" containerID="5bc1f0e48442347b157d4a68df6a9bbf2c3efc531d8802f64220c6b8565cfae5" exitCode=0
Mar 20 07:38:34 crc kubenswrapper[5136]: I0320 07:38:34.886871 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnbzz" event={"ID":"5e0cc814-f9aa-4a89-ba33-a9729f19d76e","Type":"ContainerDied","Data":"5bc1f0e48442347b157d4a68df6a9bbf2c3efc531d8802f64220c6b8565cfae5"}
Mar 20 07:38:35 crc kubenswrapper[5136]: I0320 07:38:35.898643 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnbzz" event={"ID":"5e0cc814-f9aa-4a89-ba33-a9729f19d76e","Type":"ContainerStarted","Data":"4f20543aa346d6e1e4506cd7b71cc9bab0c7b32446c771a6fc48f5f4b4445052"}
Mar 20 07:38:35 crc kubenswrapper[5136]: I0320 07:38:35.942937 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rnbzz" podStartSLOduration=3.157809826 podStartE2EDuration="5.942913283s" podCreationTimestamp="2026-03-20 07:38:30 +0000 UTC" firstStartedPulling="2026-03-20 07:38:32.856389079 +0000 UTC m=+2945.115700240" lastFinishedPulling="2026-03-20 07:38:35.641492516 +0000 UTC m=+2947.900803697" observedRunningTime="2026-03-20 07:38:35.930483795 +0000 UTC m=+2948.189794986" watchObservedRunningTime="2026-03-20 07:38:35.942913283 +0000 UTC m=+2948.202224474"
Mar 20 07:38:41 crc kubenswrapper[5136]: I0320 07:38:41.301205 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rnbzz"
Mar 20 07:38:41 crc kubenswrapper[5136]: I0320 07:38:41.302052 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rnbzz"
Mar 20 07:38:41 crc kubenswrapper[5136]: I0320 07:38:41.344403 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rnbzz"
Mar 20 07:38:42 crc kubenswrapper[5136]: I0320 07:38:42.030555 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rnbzz"
Mar 20 07:38:42 crc kubenswrapper[5136]: I0320 07:38:42.093270 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnbzz"]
Mar 20 07:38:43 crc kubenswrapper[5136]: I0320 07:38:43.994706 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rnbzz" podUID="5e0cc814-f9aa-4a89-ba33-a9729f19d76e" containerName="registry-server" containerID="cri-o://4f20543aa346d6e1e4506cd7b71cc9bab0c7b32446c771a6fc48f5f4b4445052" gracePeriod=2
Mar 20 07:38:44 crc kubenswrapper[5136]: I0320 07:38:44.528349 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnbzz"
Mar 20 07:38:44 crc kubenswrapper[5136]: I0320 07:38:44.624015 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-utilities\") pod \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\" (UID: \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\") "
Mar 20 07:38:44 crc kubenswrapper[5136]: I0320 07:38:44.624107 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-catalog-content\") pod \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\" (UID: \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\") "
Mar 20 07:38:44 crc kubenswrapper[5136]: I0320 07:38:44.624168 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7g92\" (UniqueName: \"kubernetes.io/projected/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-kube-api-access-d7g92\") pod \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\" (UID: \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\") "
Mar 20 07:38:44 crc kubenswrapper[5136]: I0320 07:38:44.626075 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-utilities" (OuterVolumeSpecName: "utilities") pod "5e0cc814-f9aa-4a89-ba33-a9729f19d76e" (UID: "5e0cc814-f9aa-4a89-ba33-a9729f19d76e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:38:44 crc kubenswrapper[5136]: I0320 07:38:44.631005 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-kube-api-access-d7g92" (OuterVolumeSpecName: "kube-api-access-d7g92") pod "5e0cc814-f9aa-4a89-ba33-a9729f19d76e" (UID: "5e0cc814-f9aa-4a89-ba33-a9729f19d76e"). InnerVolumeSpecName "kube-api-access-d7g92". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:38:44 crc kubenswrapper[5136]: I0320 07:38:44.661680 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e0cc814-f9aa-4a89-ba33-a9729f19d76e" (UID: "5e0cc814-f9aa-4a89-ba33-a9729f19d76e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:38:44 crc kubenswrapper[5136]: I0320 07:38:44.725110 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 07:38:44 crc kubenswrapper[5136]: I0320 07:38:44.725149 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 07:38:44 crc kubenswrapper[5136]: I0320 07:38:44.725159 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7g92\" (UniqueName: \"kubernetes.io/projected/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-kube-api-access-d7g92\") on node \"crc\" DevicePath \"\""
Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.020427 5136 generic.go:334] "Generic (PLEG): container finished" podID="5e0cc814-f9aa-4a89-ba33-a9729f19d76e" containerID="4f20543aa346d6e1e4506cd7b71cc9bab0c7b32446c771a6fc48f5f4b4445052" exitCode=0
Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.020563 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnbzz"
Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.020547 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnbzz" event={"ID":"5e0cc814-f9aa-4a89-ba33-a9729f19d76e","Type":"ContainerDied","Data":"4f20543aa346d6e1e4506cd7b71cc9bab0c7b32446c771a6fc48f5f4b4445052"}
Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.021347 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnbzz" event={"ID":"5e0cc814-f9aa-4a89-ba33-a9729f19d76e","Type":"ContainerDied","Data":"695de2aa0a9f443217d9e1516f6c50bddda8614b8b750d85a0fcb48982677f15"}
Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.021412 5136 scope.go:117] "RemoveContainer" containerID="4f20543aa346d6e1e4506cd7b71cc9bab0c7b32446c771a6fc48f5f4b4445052"
Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.077907 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnbzz"]
Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.083946 5136 scope.go:117] "RemoveContainer" containerID="5bc1f0e48442347b157d4a68df6a9bbf2c3efc531d8802f64220c6b8565cfae5"
Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.091760 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnbzz"]
Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.121081 5136 scope.go:117] "RemoveContainer" containerID="8cd7fccca82be14dd1ed6de2bea6154c82344b596e72f18152f074e61a57cec9"
Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.150291 5136 scope.go:117] "RemoveContainer" containerID="4f20543aa346d6e1e4506cd7b71cc9bab0c7b32446c771a6fc48f5f4b4445052"
Mar 20 07:38:45 crc kubenswrapper[5136]: E0320 07:38:45.151002 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f20543aa346d6e1e4506cd7b71cc9bab0c7b32446c771a6fc48f5f4b4445052\": container with ID starting with 4f20543aa346d6e1e4506cd7b71cc9bab0c7b32446c771a6fc48f5f4b4445052 not found: ID does not exist" containerID="4f20543aa346d6e1e4506cd7b71cc9bab0c7b32446c771a6fc48f5f4b4445052"
Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.151041 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f20543aa346d6e1e4506cd7b71cc9bab0c7b32446c771a6fc48f5f4b4445052"} err="failed to get container status \"4f20543aa346d6e1e4506cd7b71cc9bab0c7b32446c771a6fc48f5f4b4445052\": rpc error: code = NotFound desc = could not find container \"4f20543aa346d6e1e4506cd7b71cc9bab0c7b32446c771a6fc48f5f4b4445052\": container with ID starting with 4f20543aa346d6e1e4506cd7b71cc9bab0c7b32446c771a6fc48f5f4b4445052 not found: ID does not exist"
Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.151084 5136 scope.go:117] "RemoveContainer" containerID="5bc1f0e48442347b157d4a68df6a9bbf2c3efc531d8802f64220c6b8565cfae5"
Mar 20 07:38:45 crc kubenswrapper[5136]: E0320 07:38:45.151562 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bc1f0e48442347b157d4a68df6a9bbf2c3efc531d8802f64220c6b8565cfae5\": container with ID starting with 5bc1f0e48442347b157d4a68df6a9bbf2c3efc531d8802f64220c6b8565cfae5 not found: ID does not exist" containerID="5bc1f0e48442347b157d4a68df6a9bbf2c3efc531d8802f64220c6b8565cfae5"
Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.151604 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bc1f0e48442347b157d4a68df6a9bbf2c3efc531d8802f64220c6b8565cfae5"} err="failed to get container status \"5bc1f0e48442347b157d4a68df6a9bbf2c3efc531d8802f64220c6b8565cfae5\": rpc error: code = NotFound desc = could not find container \"5bc1f0e48442347b157d4a68df6a9bbf2c3efc531d8802f64220c6b8565cfae5\": container with ID starting with 5bc1f0e48442347b157d4a68df6a9bbf2c3efc531d8802f64220c6b8565cfae5 not found: ID does not exist"
Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.151630 5136 scope.go:117] "RemoveContainer" containerID="8cd7fccca82be14dd1ed6de2bea6154c82344b596e72f18152f074e61a57cec9"
Mar 20 07:38:45 crc kubenswrapper[5136]: E0320 07:38:45.152087 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cd7fccca82be14dd1ed6de2bea6154c82344b596e72f18152f074e61a57cec9\": container with ID starting with 8cd7fccca82be14dd1ed6de2bea6154c82344b596e72f18152f074e61a57cec9 not found: ID does not exist" containerID="8cd7fccca82be14dd1ed6de2bea6154c82344b596e72f18152f074e61a57cec9"
Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.152136 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cd7fccca82be14dd1ed6de2bea6154c82344b596e72f18152f074e61a57cec9"} err="failed to get container status \"8cd7fccca82be14dd1ed6de2bea6154c82344b596e72f18152f074e61a57cec9\": rpc error: code = NotFound desc = could not find container \"8cd7fccca82be14dd1ed6de2bea6154c82344b596e72f18152f074e61a57cec9\": container with ID starting with 8cd7fccca82be14dd1ed6de2bea6154c82344b596e72f18152f074e61a57cec9 not found: ID does not exist"
Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.822486 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.822551 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 07:38:46 crc kubenswrapper[5136]: I0320 07:38:46.407136 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e0cc814-f9aa-4a89-ba33-a9729f19d76e" path="/var/lib/kubelet/pods/5e0cc814-f9aa-4a89-ba33-a9729f19d76e/volumes"
Mar 20 07:39:15 crc kubenswrapper[5136]: I0320 07:39:15.822126 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 07:39:15 crc kubenswrapper[5136]: I0320 07:39:15.822731 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 07:39:45 crc kubenswrapper[5136]: I0320 07:39:45.822107 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 07:39:45 crc kubenswrapper[5136]: I0320 07:39:45.822756 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 07:39:45 crc kubenswrapper[5136]: I0320 07:39:45.822848 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28"
Mar 20 07:39:45 crc kubenswrapper[5136]: I0320 07:39:45.823575 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e931a73800d6f8bfc60e4afb965ee2c2a8a596df85cf1a7cd9146234535ad322"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 20 07:39:45 crc kubenswrapper[5136]: I0320 07:39:45.823667 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://e931a73800d6f8bfc60e4afb965ee2c2a8a596df85cf1a7cd9146234535ad322" gracePeriod=600
Mar 20 07:39:46 crc kubenswrapper[5136]: I0320 07:39:46.515794 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="e931a73800d6f8bfc60e4afb965ee2c2a8a596df85cf1a7cd9146234535ad322" exitCode=0
Mar 20 07:39:46 crc kubenswrapper[5136]: I0320 07:39:46.515836 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"e931a73800d6f8bfc60e4afb965ee2c2a8a596df85cf1a7cd9146234535ad322"}
Mar 20 07:39:46 crc kubenswrapper[5136]: I0320 07:39:46.516158 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17"}
Mar 20 07:39:46 crc kubenswrapper[5136]: I0320 07:39:46.516180 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365"
Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.152144 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566540-q2mqq"]
Mar 20 07:40:00 crc kubenswrapper[5136]: E0320 07:40:00.153228 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0cc814-f9aa-4a89-ba33-a9729f19d76e" containerName="extract-content"
Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.153243 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0cc814-f9aa-4a89-ba33-a9729f19d76e" containerName="extract-content"
Mar 20 07:40:00 crc kubenswrapper[5136]: E0320 07:40:00.153256 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0cc814-f9aa-4a89-ba33-a9729f19d76e" containerName="registry-server"
Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.153262 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0cc814-f9aa-4a89-ba33-a9729f19d76e" containerName="registry-server"
Mar 20 07:40:00 crc kubenswrapper[5136]: E0320 07:40:00.153269 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0cc814-f9aa-4a89-ba33-a9729f19d76e" containerName="extract-utilities"
Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.153276 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0cc814-f9aa-4a89-ba33-a9729f19d76e" containerName="extract-utilities"
Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.153442 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e0cc814-f9aa-4a89-ba33-a9729f19d76e" containerName="registry-server"
Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.153860 5136 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566540-q2mqq" Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.159843 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.159985 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.160059 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.167592 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566540-q2mqq"] Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.334207 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tflrp\" (UniqueName: \"kubernetes.io/projected/13ab4686-525c-4931-93d9-5b71ec6644ee-kube-api-access-tflrp\") pod \"auto-csr-approver-29566540-q2mqq\" (UID: \"13ab4686-525c-4931-93d9-5b71ec6644ee\") " pod="openshift-infra/auto-csr-approver-29566540-q2mqq" Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.436509 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tflrp\" (UniqueName: \"kubernetes.io/projected/13ab4686-525c-4931-93d9-5b71ec6644ee-kube-api-access-tflrp\") pod \"auto-csr-approver-29566540-q2mqq\" (UID: \"13ab4686-525c-4931-93d9-5b71ec6644ee\") " pod="openshift-infra/auto-csr-approver-29566540-q2mqq" Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.459916 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tflrp\" (UniqueName: \"kubernetes.io/projected/13ab4686-525c-4931-93d9-5b71ec6644ee-kube-api-access-tflrp\") pod \"auto-csr-approver-29566540-q2mqq\" (UID: \"13ab4686-525c-4931-93d9-5b71ec6644ee\") " 
pod="openshift-infra/auto-csr-approver-29566540-q2mqq" Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.490924 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566540-q2mqq" Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.932410 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566540-q2mqq"] Mar 20 07:40:01 crc kubenswrapper[5136]: I0320 07:40:01.637387 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566540-q2mqq" event={"ID":"13ab4686-525c-4931-93d9-5b71ec6644ee","Type":"ContainerStarted","Data":"f9695eeb9c813c7c883a386e4570bf2ffddc4a88a95f2ae1f421545adb211333"} Mar 20 07:40:02 crc kubenswrapper[5136]: I0320 07:40:02.648168 5136 generic.go:334] "Generic (PLEG): container finished" podID="13ab4686-525c-4931-93d9-5b71ec6644ee" containerID="252260f3a58979042bf8b21321cd53a2147f00019a6012d4dfcab45147ceb6a9" exitCode=0 Mar 20 07:40:02 crc kubenswrapper[5136]: I0320 07:40:02.648268 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566540-q2mqq" event={"ID":"13ab4686-525c-4931-93d9-5b71ec6644ee","Type":"ContainerDied","Data":"252260f3a58979042bf8b21321cd53a2147f00019a6012d4dfcab45147ceb6a9"} Mar 20 07:40:03 crc kubenswrapper[5136]: I0320 07:40:03.900140 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566540-q2mqq" Mar 20 07:40:03 crc kubenswrapper[5136]: I0320 07:40:03.990466 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tflrp\" (UniqueName: \"kubernetes.io/projected/13ab4686-525c-4931-93d9-5b71ec6644ee-kube-api-access-tflrp\") pod \"13ab4686-525c-4931-93d9-5b71ec6644ee\" (UID: \"13ab4686-525c-4931-93d9-5b71ec6644ee\") " Mar 20 07:40:03 crc kubenswrapper[5136]: I0320 07:40:03.999166 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13ab4686-525c-4931-93d9-5b71ec6644ee-kube-api-access-tflrp" (OuterVolumeSpecName: "kube-api-access-tflrp") pod "13ab4686-525c-4931-93d9-5b71ec6644ee" (UID: "13ab4686-525c-4931-93d9-5b71ec6644ee"). InnerVolumeSpecName "kube-api-access-tflrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:40:04 crc kubenswrapper[5136]: I0320 07:40:04.091758 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tflrp\" (UniqueName: \"kubernetes.io/projected/13ab4686-525c-4931-93d9-5b71ec6644ee-kube-api-access-tflrp\") on node \"crc\" DevicePath \"\"" Mar 20 07:40:04 crc kubenswrapper[5136]: I0320 07:40:04.663178 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566540-q2mqq" event={"ID":"13ab4686-525c-4931-93d9-5b71ec6644ee","Type":"ContainerDied","Data":"f9695eeb9c813c7c883a386e4570bf2ffddc4a88a95f2ae1f421545adb211333"} Mar 20 07:40:04 crc kubenswrapper[5136]: I0320 07:40:04.663445 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9695eeb9c813c7c883a386e4570bf2ffddc4a88a95f2ae1f421545adb211333" Mar 20 07:40:04 crc kubenswrapper[5136]: I0320 07:40:04.663238 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566540-q2mqq" Mar 20 07:40:04 crc kubenswrapper[5136]: I0320 07:40:04.960340 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566534-mfjdt"] Mar 20 07:40:04 crc kubenswrapper[5136]: I0320 07:40:04.965488 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566534-mfjdt"] Mar 20 07:40:06 crc kubenswrapper[5136]: I0320 07:40:06.412664 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5d24cb7-5c49-44f0-b18f-a09604ee8bb6" path="/var/lib/kubelet/pods/e5d24cb7-5c49-44f0-b18f-a09604ee8bb6/volumes" Mar 20 07:40:34 crc kubenswrapper[5136]: I0320 07:40:34.282149 5136 scope.go:117] "RemoveContainer" containerID="8642a4da6cd6ed0a2fb9cab568877c9f7b33cdf51b38940824b385ba7da2a860" Mar 20 07:42:00 crc kubenswrapper[5136]: I0320 07:42:00.157476 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566542-pds9m"] Mar 20 07:42:00 crc kubenswrapper[5136]: E0320 07:42:00.158524 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13ab4686-525c-4931-93d9-5b71ec6644ee" containerName="oc" Mar 20 07:42:00 crc kubenswrapper[5136]: I0320 07:42:00.158540 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="13ab4686-525c-4931-93d9-5b71ec6644ee" containerName="oc" Mar 20 07:42:00 crc kubenswrapper[5136]: I0320 07:42:00.158727 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="13ab4686-525c-4931-93d9-5b71ec6644ee" containerName="oc" Mar 20 07:42:00 crc kubenswrapper[5136]: I0320 07:42:00.159235 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566542-pds9m" Mar 20 07:42:00 crc kubenswrapper[5136]: I0320 07:42:00.163535 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:42:00 crc kubenswrapper[5136]: I0320 07:42:00.165384 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:42:00 crc kubenswrapper[5136]: I0320 07:42:00.166110 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:42:00 crc kubenswrapper[5136]: I0320 07:42:00.172693 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566542-pds9m"] Mar 20 07:42:00 crc kubenswrapper[5136]: I0320 07:42:00.227401 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcxz4\" (UniqueName: \"kubernetes.io/projected/17d864d8-8238-4e66-b9ac-d03d95596254-kube-api-access-gcxz4\") pod \"auto-csr-approver-29566542-pds9m\" (UID: \"17d864d8-8238-4e66-b9ac-d03d95596254\") " pod="openshift-infra/auto-csr-approver-29566542-pds9m" Mar 20 07:42:00 crc kubenswrapper[5136]: I0320 07:42:00.328893 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcxz4\" (UniqueName: \"kubernetes.io/projected/17d864d8-8238-4e66-b9ac-d03d95596254-kube-api-access-gcxz4\") pod \"auto-csr-approver-29566542-pds9m\" (UID: \"17d864d8-8238-4e66-b9ac-d03d95596254\") " pod="openshift-infra/auto-csr-approver-29566542-pds9m" Mar 20 07:42:00 crc kubenswrapper[5136]: I0320 07:42:00.352690 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcxz4\" (UniqueName: \"kubernetes.io/projected/17d864d8-8238-4e66-b9ac-d03d95596254-kube-api-access-gcxz4\") pod \"auto-csr-approver-29566542-pds9m\" (UID: \"17d864d8-8238-4e66-b9ac-d03d95596254\") " 
pod="openshift-infra/auto-csr-approver-29566542-pds9m" Mar 20 07:42:00 crc kubenswrapper[5136]: I0320 07:42:00.487562 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566542-pds9m" Mar 20 07:42:00 crc kubenswrapper[5136]: I0320 07:42:00.881891 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566542-pds9m"] Mar 20 07:42:01 crc kubenswrapper[5136]: I0320 07:42:01.403132 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566542-pds9m" event={"ID":"17d864d8-8238-4e66-b9ac-d03d95596254","Type":"ContainerStarted","Data":"d4dd928fd390e38d6bae8bb7b8e974d5de68a461cdcd885e35070871304c16e4"} Mar 20 07:42:02 crc kubenswrapper[5136]: I0320 07:42:02.408862 5136 generic.go:334] "Generic (PLEG): container finished" podID="17d864d8-8238-4e66-b9ac-d03d95596254" containerID="7d44df1c73e9c1d9108526abbe2353b5337e03d920bac4de2652a37d15133fc6" exitCode=0 Mar 20 07:42:02 crc kubenswrapper[5136]: I0320 07:42:02.408914 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566542-pds9m" event={"ID":"17d864d8-8238-4e66-b9ac-d03d95596254","Type":"ContainerDied","Data":"7d44df1c73e9c1d9108526abbe2353b5337e03d920bac4de2652a37d15133fc6"} Mar 20 07:42:03 crc kubenswrapper[5136]: I0320 07:42:03.751177 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566542-pds9m" Mar 20 07:42:03 crc kubenswrapper[5136]: I0320 07:42:03.892669 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcxz4\" (UniqueName: \"kubernetes.io/projected/17d864d8-8238-4e66-b9ac-d03d95596254-kube-api-access-gcxz4\") pod \"17d864d8-8238-4e66-b9ac-d03d95596254\" (UID: \"17d864d8-8238-4e66-b9ac-d03d95596254\") " Mar 20 07:42:03 crc kubenswrapper[5136]: I0320 07:42:03.898701 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17d864d8-8238-4e66-b9ac-d03d95596254-kube-api-access-gcxz4" (OuterVolumeSpecName: "kube-api-access-gcxz4") pod "17d864d8-8238-4e66-b9ac-d03d95596254" (UID: "17d864d8-8238-4e66-b9ac-d03d95596254"). InnerVolumeSpecName "kube-api-access-gcxz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:42:03 crc kubenswrapper[5136]: I0320 07:42:03.994400 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcxz4\" (UniqueName: \"kubernetes.io/projected/17d864d8-8238-4e66-b9ac-d03d95596254-kube-api-access-gcxz4\") on node \"crc\" DevicePath \"\"" Mar 20 07:42:04 crc kubenswrapper[5136]: I0320 07:42:04.432299 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566542-pds9m" event={"ID":"17d864d8-8238-4e66-b9ac-d03d95596254","Type":"ContainerDied","Data":"d4dd928fd390e38d6bae8bb7b8e974d5de68a461cdcd885e35070871304c16e4"} Mar 20 07:42:04 crc kubenswrapper[5136]: I0320 07:42:04.432896 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4dd928fd390e38d6bae8bb7b8e974d5de68a461cdcd885e35070871304c16e4" Mar 20 07:42:04 crc kubenswrapper[5136]: I0320 07:42:04.432996 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566542-pds9m" Mar 20 07:42:04 crc kubenswrapper[5136]: I0320 07:42:04.872218 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566536-gdck4"] Mar 20 07:42:04 crc kubenswrapper[5136]: I0320 07:42:04.884559 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566536-gdck4"] Mar 20 07:42:06 crc kubenswrapper[5136]: I0320 07:42:06.411135 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52f90699-ed0d-4f94-ac8f-0710a3df7d1d" path="/var/lib/kubelet/pods/52f90699-ed0d-4f94-ac8f-0710a3df7d1d/volumes" Mar 20 07:42:15 crc kubenswrapper[5136]: I0320 07:42:15.822619 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:42:15 crc kubenswrapper[5136]: I0320 07:42:15.823305 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:42:34 crc kubenswrapper[5136]: I0320 07:42:34.379204 5136 scope.go:117] "RemoveContainer" containerID="35330414dc071593a3b58b60db6f11d64d8df670930fd1d800151dee73ac1250" Mar 20 07:42:45 crc kubenswrapper[5136]: I0320 07:42:45.822128 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:42:45 crc kubenswrapper[5136]: 
I0320 07:42:45.822708 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.734626 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cn4tr"] Mar 20 07:43:03 crc kubenswrapper[5136]: E0320 07:43:03.735294 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17d864d8-8238-4e66-b9ac-d03d95596254" containerName="oc" Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.735307 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="17d864d8-8238-4e66-b9ac-d03d95596254" containerName="oc" Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.735445 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="17d864d8-8238-4e66-b9ac-d03d95596254" containerName="oc" Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.736326 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.747153 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cn4tr"] Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.857289 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-utilities\") pod \"community-operators-cn4tr\" (UID: \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\") " pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.857462 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-catalog-content\") pod \"community-operators-cn4tr\" (UID: \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\") " pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.857521 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfwgv\" (UniqueName: \"kubernetes.io/projected/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-kube-api-access-gfwgv\") pod \"community-operators-cn4tr\" (UID: \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\") " pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.959223 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfwgv\" (UniqueName: \"kubernetes.io/projected/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-kube-api-access-gfwgv\") pod \"community-operators-cn4tr\" (UID: \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\") " pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.959278 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-utilities\") pod \"community-operators-cn4tr\" (UID: \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\") " pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.959354 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-catalog-content\") pod \"community-operators-cn4tr\" (UID: \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\") " pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.959920 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-catalog-content\") pod \"community-operators-cn4tr\" (UID: \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\") " pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.960093 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-utilities\") pod \"community-operators-cn4tr\" (UID: \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\") " pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.988305 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfwgv\" (UniqueName: \"kubernetes.io/projected/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-kube-api-access-gfwgv\") pod \"community-operators-cn4tr\" (UID: \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\") " pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:04 crc kubenswrapper[5136]: I0320 07:43:04.066529 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:04 crc kubenswrapper[5136]: I0320 07:43:04.608214 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cn4tr"] Mar 20 07:43:04 crc kubenswrapper[5136]: I0320 07:43:04.965657 5136 generic.go:334] "Generic (PLEG): container finished" podID="a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" containerID="2c57456fcda577800cc60dc3fe91ba338fd3b302a34f469143e23e947af26786" exitCode=0 Mar 20 07:43:04 crc kubenswrapper[5136]: I0320 07:43:04.965750 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn4tr" event={"ID":"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff","Type":"ContainerDied","Data":"2c57456fcda577800cc60dc3fe91ba338fd3b302a34f469143e23e947af26786"} Mar 20 07:43:04 crc kubenswrapper[5136]: I0320 07:43:04.966018 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn4tr" event={"ID":"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff","Type":"ContainerStarted","Data":"a803bd4eb38a46416dcf94445aab13c2f8df6b784f82441df8622ec1b09e2a61"} Mar 20 07:43:04 crc kubenswrapper[5136]: I0320 07:43:04.967906 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 07:43:05 crc kubenswrapper[5136]: I0320 07:43:05.975802 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn4tr" event={"ID":"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff","Type":"ContainerStarted","Data":"0bf569df812b3903f1ee6037acdc6c6ffb476e9af9ef8fe3531a8f25781111aa"} Mar 20 07:43:06 crc kubenswrapper[5136]: I0320 07:43:06.990274 5136 generic.go:334] "Generic (PLEG): container finished" podID="a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" containerID="0bf569df812b3903f1ee6037acdc6c6ffb476e9af9ef8fe3531a8f25781111aa" exitCode=0 Mar 20 07:43:06 crc kubenswrapper[5136]: I0320 07:43:06.990352 5136 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-cn4tr" event={"ID":"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff","Type":"ContainerDied","Data":"0bf569df812b3903f1ee6037acdc6c6ffb476e9af9ef8fe3531a8f25781111aa"} Mar 20 07:43:08 crc kubenswrapper[5136]: I0320 07:43:08.000322 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn4tr" event={"ID":"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff","Type":"ContainerStarted","Data":"82c6a2ac936962fc36a01eb34e2885f50ddac2a35b7e24ce6b536e3e1e54029a"} Mar 20 07:43:08 crc kubenswrapper[5136]: I0320 07:43:08.025197 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cn4tr" podStartSLOduration=2.322474844 podStartE2EDuration="5.025172527s" podCreationTimestamp="2026-03-20 07:43:03 +0000 UTC" firstStartedPulling="2026-03-20 07:43:04.967607151 +0000 UTC m=+3217.226918302" lastFinishedPulling="2026-03-20 07:43:07.670304834 +0000 UTC m=+3219.929615985" observedRunningTime="2026-03-20 07:43:08.017358136 +0000 UTC m=+3220.276669327" watchObservedRunningTime="2026-03-20 07:43:08.025172527 +0000 UTC m=+3220.284483688" Mar 20 07:43:14 crc kubenswrapper[5136]: I0320 07:43:14.068165 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:14 crc kubenswrapper[5136]: I0320 07:43:14.068929 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:14 crc kubenswrapper[5136]: I0320 07:43:14.120171 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:15 crc kubenswrapper[5136]: I0320 07:43:15.130312 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:15 crc kubenswrapper[5136]: I0320 
07:43:15.175280 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cn4tr"] Mar 20 07:43:15 crc kubenswrapper[5136]: I0320 07:43:15.822119 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:43:15 crc kubenswrapper[5136]: I0320 07:43:15.822206 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:43:15 crc kubenswrapper[5136]: I0320 07:43:15.822277 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 07:43:15 crc kubenswrapper[5136]: I0320 07:43:15.823272 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 07:43:15 crc kubenswrapper[5136]: I0320 07:43:15.823398 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" gracePeriod=600 Mar 20 07:43:15 crc kubenswrapper[5136]: E0320 07:43:15.946014 5136 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:43:16 crc kubenswrapper[5136]: I0320 07:43:16.059858 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" exitCode=0 Mar 20 07:43:16 crc kubenswrapper[5136]: I0320 07:43:16.059905 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17"} Mar 20 07:43:16 crc kubenswrapper[5136]: I0320 07:43:16.059990 5136 scope.go:117] "RemoveContainer" containerID="e931a73800d6f8bfc60e4afb965ee2c2a8a596df85cf1a7cd9146234535ad322" Mar 20 07:43:16 crc kubenswrapper[5136]: I0320 07:43:16.060844 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:43:16 crc kubenswrapper[5136]: E0320 07:43:16.061219 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:43:17 crc kubenswrapper[5136]: I0320 07:43:17.071512 5136 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-cn4tr" podUID="a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" containerName="registry-server" containerID="cri-o://82c6a2ac936962fc36a01eb34e2885f50ddac2a35b7e24ce6b536e3e1e54029a" gracePeriod=2 Mar 20 07:43:17 crc kubenswrapper[5136]: I0320 07:43:17.454510 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:17 crc kubenswrapper[5136]: I0320 07:43:17.586780 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfwgv\" (UniqueName: \"kubernetes.io/projected/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-kube-api-access-gfwgv\") pod \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\" (UID: \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\") " Mar 20 07:43:17 crc kubenswrapper[5136]: I0320 07:43:17.586876 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-catalog-content\") pod \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\" (UID: \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\") " Mar 20 07:43:17 crc kubenswrapper[5136]: I0320 07:43:17.586918 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-utilities\") pod \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\" (UID: \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\") " Mar 20 07:43:17 crc kubenswrapper[5136]: I0320 07:43:17.588067 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-utilities" (OuterVolumeSpecName: "utilities") pod "a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" (UID: "a4705d3e-bf23-43ed-92f2-b8c9bcffbbff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:43:17 crc kubenswrapper[5136]: I0320 07:43:17.591338 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-kube-api-access-gfwgv" (OuterVolumeSpecName: "kube-api-access-gfwgv") pod "a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" (UID: "a4705d3e-bf23-43ed-92f2-b8c9bcffbbff"). InnerVolumeSpecName "kube-api-access-gfwgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:43:17 crc kubenswrapper[5136]: I0320 07:43:17.637469 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" (UID: "a4705d3e-bf23-43ed-92f2-b8c9bcffbbff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:43:17 crc kubenswrapper[5136]: I0320 07:43:17.689063 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfwgv\" (UniqueName: \"kubernetes.io/projected/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-kube-api-access-gfwgv\") on node \"crc\" DevicePath \"\"" Mar 20 07:43:17 crc kubenswrapper[5136]: I0320 07:43:17.689253 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:43:17 crc kubenswrapper[5136]: I0320 07:43:17.689336 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.084044 5136 generic.go:334] "Generic (PLEG): container finished" podID="a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" 
containerID="82c6a2ac936962fc36a01eb34e2885f50ddac2a35b7e24ce6b536e3e1e54029a" exitCode=0 Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.084099 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn4tr" event={"ID":"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff","Type":"ContainerDied","Data":"82c6a2ac936962fc36a01eb34e2885f50ddac2a35b7e24ce6b536e3e1e54029a"} Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.084153 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn4tr" event={"ID":"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff","Type":"ContainerDied","Data":"a803bd4eb38a46416dcf94445aab13c2f8df6b784f82441df8622ec1b09e2a61"} Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.084188 5136 scope.go:117] "RemoveContainer" containerID="82c6a2ac936962fc36a01eb34e2885f50ddac2a35b7e24ce6b536e3e1e54029a" Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.084188 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.115906 5136 scope.go:117] "RemoveContainer" containerID="0bf569df812b3903f1ee6037acdc6c6ffb476e9af9ef8fe3531a8f25781111aa" Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.149523 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cn4tr"] Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.159920 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cn4tr"] Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.176503 5136 scope.go:117] "RemoveContainer" containerID="2c57456fcda577800cc60dc3fe91ba338fd3b302a34f469143e23e947af26786" Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.205396 5136 scope.go:117] "RemoveContainer" containerID="82c6a2ac936962fc36a01eb34e2885f50ddac2a35b7e24ce6b536e3e1e54029a" Mar 20 07:43:18 crc kubenswrapper[5136]: E0320 07:43:18.206192 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82c6a2ac936962fc36a01eb34e2885f50ddac2a35b7e24ce6b536e3e1e54029a\": container with ID starting with 82c6a2ac936962fc36a01eb34e2885f50ddac2a35b7e24ce6b536e3e1e54029a not found: ID does not exist" containerID="82c6a2ac936962fc36a01eb34e2885f50ddac2a35b7e24ce6b536e3e1e54029a" Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.206261 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82c6a2ac936962fc36a01eb34e2885f50ddac2a35b7e24ce6b536e3e1e54029a"} err="failed to get container status \"82c6a2ac936962fc36a01eb34e2885f50ddac2a35b7e24ce6b536e3e1e54029a\": rpc error: code = NotFound desc = could not find container \"82c6a2ac936962fc36a01eb34e2885f50ddac2a35b7e24ce6b536e3e1e54029a\": container with ID starting with 82c6a2ac936962fc36a01eb34e2885f50ddac2a35b7e24ce6b536e3e1e54029a not 
found: ID does not exist" Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.206311 5136 scope.go:117] "RemoveContainer" containerID="0bf569df812b3903f1ee6037acdc6c6ffb476e9af9ef8fe3531a8f25781111aa" Mar 20 07:43:18 crc kubenswrapper[5136]: E0320 07:43:18.206788 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bf569df812b3903f1ee6037acdc6c6ffb476e9af9ef8fe3531a8f25781111aa\": container with ID starting with 0bf569df812b3903f1ee6037acdc6c6ffb476e9af9ef8fe3531a8f25781111aa not found: ID does not exist" containerID="0bf569df812b3903f1ee6037acdc6c6ffb476e9af9ef8fe3531a8f25781111aa" Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.206938 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bf569df812b3903f1ee6037acdc6c6ffb476e9af9ef8fe3531a8f25781111aa"} err="failed to get container status \"0bf569df812b3903f1ee6037acdc6c6ffb476e9af9ef8fe3531a8f25781111aa\": rpc error: code = NotFound desc = could not find container \"0bf569df812b3903f1ee6037acdc6c6ffb476e9af9ef8fe3531a8f25781111aa\": container with ID starting with 0bf569df812b3903f1ee6037acdc6c6ffb476e9af9ef8fe3531a8f25781111aa not found: ID does not exist" Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.207038 5136 scope.go:117] "RemoveContainer" containerID="2c57456fcda577800cc60dc3fe91ba338fd3b302a34f469143e23e947af26786" Mar 20 07:43:18 crc kubenswrapper[5136]: E0320 07:43:18.207535 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c57456fcda577800cc60dc3fe91ba338fd3b302a34f469143e23e947af26786\": container with ID starting with 2c57456fcda577800cc60dc3fe91ba338fd3b302a34f469143e23e947af26786 not found: ID does not exist" containerID="2c57456fcda577800cc60dc3fe91ba338fd3b302a34f469143e23e947af26786" Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.207622 5136 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c57456fcda577800cc60dc3fe91ba338fd3b302a34f469143e23e947af26786"} err="failed to get container status \"2c57456fcda577800cc60dc3fe91ba338fd3b302a34f469143e23e947af26786\": rpc error: code = NotFound desc = could not find container \"2c57456fcda577800cc60dc3fe91ba338fd3b302a34f469143e23e947af26786\": container with ID starting with 2c57456fcda577800cc60dc3fe91ba338fd3b302a34f469143e23e947af26786 not found: ID does not exist" Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.412673 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" path="/var/lib/kubelet/pods/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff/volumes" Mar 20 07:43:31 crc kubenswrapper[5136]: I0320 07:43:31.397778 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:43:31 crc kubenswrapper[5136]: E0320 07:43:31.398549 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:43:44 crc kubenswrapper[5136]: I0320 07:43:44.397798 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:43:44 crc kubenswrapper[5136]: E0320 07:43:44.400923 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:43:55 crc kubenswrapper[5136]: I0320 07:43:55.397250 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:43:55 crc kubenswrapper[5136]: E0320 07:43:55.397951 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.175929 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566544-2kjwj"] Mar 20 07:44:00 crc kubenswrapper[5136]: E0320 07:44:00.176568 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" containerName="registry-server" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.176593 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" containerName="registry-server" Mar 20 07:44:00 crc kubenswrapper[5136]: E0320 07:44:00.176609 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" containerName="extract-utilities" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.176617 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" containerName="extract-utilities" Mar 20 07:44:00 crc kubenswrapper[5136]: E0320 07:44:00.176628 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" containerName="extract-content" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.176637 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" containerName="extract-content" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.176778 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" containerName="registry-server" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.177349 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566544-2kjwj" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.183781 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.183943 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.184336 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.187652 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566544-2kjwj"] Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.360714 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k56w4\" (UniqueName: \"kubernetes.io/projected/26c6802e-62e8-47ba-b964-fde9f92ca8ef-kube-api-access-k56w4\") pod \"auto-csr-approver-29566544-2kjwj\" (UID: \"26c6802e-62e8-47ba-b964-fde9f92ca8ef\") " pod="openshift-infra/auto-csr-approver-29566544-2kjwj" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.462572 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k56w4\" (UniqueName: 
\"kubernetes.io/projected/26c6802e-62e8-47ba-b964-fde9f92ca8ef-kube-api-access-k56w4\") pod \"auto-csr-approver-29566544-2kjwj\" (UID: \"26c6802e-62e8-47ba-b964-fde9f92ca8ef\") " pod="openshift-infra/auto-csr-approver-29566544-2kjwj" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.485996 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k56w4\" (UniqueName: \"kubernetes.io/projected/26c6802e-62e8-47ba-b964-fde9f92ca8ef-kube-api-access-k56w4\") pod \"auto-csr-approver-29566544-2kjwj\" (UID: \"26c6802e-62e8-47ba-b964-fde9f92ca8ef\") " pod="openshift-infra/auto-csr-approver-29566544-2kjwj" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.496120 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566544-2kjwj" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.876736 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566544-2kjwj"] Mar 20 07:44:01 crc kubenswrapper[5136]: I0320 07:44:01.444204 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566544-2kjwj" event={"ID":"26c6802e-62e8-47ba-b964-fde9f92ca8ef","Type":"ContainerStarted","Data":"d5d5ee818bbda6f5db1f63e1b0ea3e0da7baf51b71f50bdcc585d3558777fa95"} Mar 20 07:44:02 crc kubenswrapper[5136]: I0320 07:44:02.461174 5136 generic.go:334] "Generic (PLEG): container finished" podID="26c6802e-62e8-47ba-b964-fde9f92ca8ef" containerID="340e29815927db9adaf364543d249649b4c4d562d5c4326419747f3242c8e07d" exitCode=0 Mar 20 07:44:02 crc kubenswrapper[5136]: I0320 07:44:02.461241 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566544-2kjwj" event={"ID":"26c6802e-62e8-47ba-b964-fde9f92ca8ef","Type":"ContainerDied","Data":"340e29815927db9adaf364543d249649b4c4d562d5c4326419747f3242c8e07d"} Mar 20 07:44:03 crc kubenswrapper[5136]: I0320 07:44:03.772693 5136 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566544-2kjwj" Mar 20 07:44:03 crc kubenswrapper[5136]: I0320 07:44:03.921528 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k56w4\" (UniqueName: \"kubernetes.io/projected/26c6802e-62e8-47ba-b964-fde9f92ca8ef-kube-api-access-k56w4\") pod \"26c6802e-62e8-47ba-b964-fde9f92ca8ef\" (UID: \"26c6802e-62e8-47ba-b964-fde9f92ca8ef\") " Mar 20 07:44:03 crc kubenswrapper[5136]: I0320 07:44:03.928082 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26c6802e-62e8-47ba-b964-fde9f92ca8ef-kube-api-access-k56w4" (OuterVolumeSpecName: "kube-api-access-k56w4") pod "26c6802e-62e8-47ba-b964-fde9f92ca8ef" (UID: "26c6802e-62e8-47ba-b964-fde9f92ca8ef"). InnerVolumeSpecName "kube-api-access-k56w4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:44:04 crc kubenswrapper[5136]: I0320 07:44:04.023146 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k56w4\" (UniqueName: \"kubernetes.io/projected/26c6802e-62e8-47ba-b964-fde9f92ca8ef-kube-api-access-k56w4\") on node \"crc\" DevicePath \"\"" Mar 20 07:44:04 crc kubenswrapper[5136]: I0320 07:44:04.479054 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566544-2kjwj" event={"ID":"26c6802e-62e8-47ba-b964-fde9f92ca8ef","Type":"ContainerDied","Data":"d5d5ee818bbda6f5db1f63e1b0ea3e0da7baf51b71f50bdcc585d3558777fa95"} Mar 20 07:44:04 crc kubenswrapper[5136]: I0320 07:44:04.479127 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5d5ee818bbda6f5db1f63e1b0ea3e0da7baf51b71f50bdcc585d3558777fa95" Mar 20 07:44:04 crc kubenswrapper[5136]: I0320 07:44:04.479164 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566544-2kjwj" Mar 20 07:44:04 crc kubenswrapper[5136]: I0320 07:44:04.833513 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566538-d9gdj"] Mar 20 07:44:04 crc kubenswrapper[5136]: I0320 07:44:04.840067 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566538-d9gdj"] Mar 20 07:44:06 crc kubenswrapper[5136]: I0320 07:44:06.407434 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92f22f97-0d01-4c04-8d7c-8f0ec81c1559" path="/var/lib/kubelet/pods/92f22f97-0d01-4c04-8d7c-8f0ec81c1559/volumes" Mar 20 07:44:07 crc kubenswrapper[5136]: I0320 07:44:07.397378 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:44:07 crc kubenswrapper[5136]: E0320 07:44:07.397686 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:44:21 crc kubenswrapper[5136]: I0320 07:44:21.397128 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:44:21 crc kubenswrapper[5136]: E0320 07:44:21.397579 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" 
podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:44:34 crc kubenswrapper[5136]: I0320 07:44:34.501121 5136 scope.go:117] "RemoveContainer" containerID="085754035e9717018bfa13a75fa5176cf36ff83294b4916d7b6b6031d31b5c22" Mar 20 07:44:35 crc kubenswrapper[5136]: I0320 07:44:35.397981 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:44:35 crc kubenswrapper[5136]: E0320 07:44:35.398464 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:44:48 crc kubenswrapper[5136]: I0320 07:44:48.401125 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:44:48 crc kubenswrapper[5136]: E0320 07:44:48.402104 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:44:55 crc kubenswrapper[5136]: I0320 07:44:55.974828 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mzr5h"] Mar 20 07:44:55 crc kubenswrapper[5136]: E0320 07:44:55.975466 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c6802e-62e8-47ba-b964-fde9f92ca8ef" containerName="oc" Mar 20 07:44:55 crc kubenswrapper[5136]: I0320 07:44:55.975481 5136 
state_mem.go:107] "Deleted CPUSet assignment" podUID="26c6802e-62e8-47ba-b964-fde9f92ca8ef" containerName="oc" Mar 20 07:44:55 crc kubenswrapper[5136]: I0320 07:44:55.975664 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c6802e-62e8-47ba-b964-fde9f92ca8ef" containerName="oc" Mar 20 07:44:55 crc kubenswrapper[5136]: I0320 07:44:55.976952 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:44:55 crc kubenswrapper[5136]: I0320 07:44:55.992892 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mzr5h"] Mar 20 07:44:56 crc kubenswrapper[5136]: I0320 07:44:56.100103 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32051439-253c-4626-bc98-701985ff87cf-utilities\") pod \"certified-operators-mzr5h\" (UID: \"32051439-253c-4626-bc98-701985ff87cf\") " pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:44:56 crc kubenswrapper[5136]: I0320 07:44:56.100167 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32051439-253c-4626-bc98-701985ff87cf-catalog-content\") pod \"certified-operators-mzr5h\" (UID: \"32051439-253c-4626-bc98-701985ff87cf\") " pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:44:56 crc kubenswrapper[5136]: I0320 07:44:56.100224 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76lm9\" (UniqueName: \"kubernetes.io/projected/32051439-253c-4626-bc98-701985ff87cf-kube-api-access-76lm9\") pod \"certified-operators-mzr5h\" (UID: \"32051439-253c-4626-bc98-701985ff87cf\") " pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:44:56 crc kubenswrapper[5136]: I0320 07:44:56.201026 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76lm9\" (UniqueName: \"kubernetes.io/projected/32051439-253c-4626-bc98-701985ff87cf-kube-api-access-76lm9\") pod \"certified-operators-mzr5h\" (UID: \"32051439-253c-4626-bc98-701985ff87cf\") " pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:44:56 crc kubenswrapper[5136]: I0320 07:44:56.201120 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32051439-253c-4626-bc98-701985ff87cf-utilities\") pod \"certified-operators-mzr5h\" (UID: \"32051439-253c-4626-bc98-701985ff87cf\") " pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:44:56 crc kubenswrapper[5136]: I0320 07:44:56.201150 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32051439-253c-4626-bc98-701985ff87cf-catalog-content\") pod \"certified-operators-mzr5h\" (UID: \"32051439-253c-4626-bc98-701985ff87cf\") " pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:44:56 crc kubenswrapper[5136]: I0320 07:44:56.201681 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32051439-253c-4626-bc98-701985ff87cf-utilities\") pod \"certified-operators-mzr5h\" (UID: \"32051439-253c-4626-bc98-701985ff87cf\") " pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:44:56 crc kubenswrapper[5136]: I0320 07:44:56.201695 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32051439-253c-4626-bc98-701985ff87cf-catalog-content\") pod \"certified-operators-mzr5h\" (UID: \"32051439-253c-4626-bc98-701985ff87cf\") " pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:44:56 crc kubenswrapper[5136]: I0320 07:44:56.223264 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-76lm9\" (UniqueName: \"kubernetes.io/projected/32051439-253c-4626-bc98-701985ff87cf-kube-api-access-76lm9\") pod \"certified-operators-mzr5h\" (UID: \"32051439-253c-4626-bc98-701985ff87cf\") " pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:44:56 crc kubenswrapper[5136]: I0320 07:44:56.295340 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:44:56 crc kubenswrapper[5136]: I0320 07:44:56.755257 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mzr5h"] Mar 20 07:44:56 crc kubenswrapper[5136]: I0320 07:44:56.851759 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzr5h" event={"ID":"32051439-253c-4626-bc98-701985ff87cf","Type":"ContainerStarted","Data":"b13ec984c4bc16d74f2087182c5cdff319920af93d300e004f263ca60bc836fb"} Mar 20 07:44:57 crc kubenswrapper[5136]: I0320 07:44:57.863220 5136 generic.go:334] "Generic (PLEG): container finished" podID="32051439-253c-4626-bc98-701985ff87cf" containerID="155d5a1866f72955aa5da99c9541dacbdf54245c448286a92a53191263c92ddd" exitCode=0 Mar 20 07:44:57 crc kubenswrapper[5136]: I0320 07:44:57.863319 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzr5h" event={"ID":"32051439-253c-4626-bc98-701985ff87cf","Type":"ContainerDied","Data":"155d5a1866f72955aa5da99c9541dacbdf54245c448286a92a53191263c92ddd"} Mar 20 07:44:58 crc kubenswrapper[5136]: I0320 07:44:58.873988 5136 generic.go:334] "Generic (PLEG): container finished" podID="32051439-253c-4626-bc98-701985ff87cf" containerID="fb069e8de83eb079ed737945d738d486fae08bdc5ea74d1c1b8f34268a73bf20" exitCode=0 Mar 20 07:44:58 crc kubenswrapper[5136]: I0320 07:44:58.874032 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzr5h" 
event={"ID":"32051439-253c-4626-bc98-701985ff87cf","Type":"ContainerDied","Data":"fb069e8de83eb079ed737945d738d486fae08bdc5ea74d1c1b8f34268a73bf20"} Mar 20 07:44:59 crc kubenswrapper[5136]: I0320 07:44:59.887443 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzr5h" event={"ID":"32051439-253c-4626-bc98-701985ff87cf","Type":"ContainerStarted","Data":"2a902ccae9866d500279ccc8948169f1326e0c50ce1f3c533cbbfa6d3a4675ec"} Mar 20 07:44:59 crc kubenswrapper[5136]: I0320 07:44:59.910228 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mzr5h" podStartSLOduration=3.522531322 podStartE2EDuration="4.910196633s" podCreationTimestamp="2026-03-20 07:44:55 +0000 UTC" firstStartedPulling="2026-03-20 07:44:57.865215351 +0000 UTC m=+3330.124526502" lastFinishedPulling="2026-03-20 07:44:59.252880662 +0000 UTC m=+3331.512191813" observedRunningTime="2026-03-20 07:44:59.906105636 +0000 UTC m=+3332.165416787" watchObservedRunningTime="2026-03-20 07:44:59.910196633 +0000 UTC m=+3332.169507834" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.170465 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7"] Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.172029 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.174280 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.174852 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.182077 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7"] Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.288636 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb32e01f-d49f-4ba1-a1d4-c693765737e7-secret-volume\") pod \"collect-profiles-29566545-r7zm7\" (UID: \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.288775 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzzpc\" (UniqueName: \"kubernetes.io/projected/eb32e01f-d49f-4ba1-a1d4-c693765737e7-kube-api-access-vzzpc\") pod \"collect-profiles-29566545-r7zm7\" (UID: \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.288857 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb32e01f-d49f-4ba1-a1d4-c693765737e7-config-volume\") pod \"collect-profiles-29566545-r7zm7\" (UID: \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.390082 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb32e01f-d49f-4ba1-a1d4-c693765737e7-secret-volume\") pod \"collect-profiles-29566545-r7zm7\" (UID: \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.390206 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzzpc\" (UniqueName: \"kubernetes.io/projected/eb32e01f-d49f-4ba1-a1d4-c693765737e7-kube-api-access-vzzpc\") pod \"collect-profiles-29566545-r7zm7\" (UID: \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.392128 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb32e01f-d49f-4ba1-a1d4-c693765737e7-config-volume\") pod \"collect-profiles-29566545-r7zm7\" (UID: \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.392183 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb32e01f-d49f-4ba1-a1d4-c693765737e7-config-volume\") pod \"collect-profiles-29566545-r7zm7\" (UID: \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.403875 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/eb32e01f-d49f-4ba1-a1d4-c693765737e7-secret-volume\") pod \"collect-profiles-29566545-r7zm7\" (UID: \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.418310 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzzpc\" (UniqueName: \"kubernetes.io/projected/eb32e01f-d49f-4ba1-a1d4-c693765737e7-kube-api-access-vzzpc\") pod \"collect-profiles-29566545-r7zm7\" (UID: \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.495515 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.945241 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7"] Mar 20 07:45:01 crc kubenswrapper[5136]: I0320 07:45:01.396276 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:45:01 crc kubenswrapper[5136]: E0320 07:45:01.396844 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:45:01 crc kubenswrapper[5136]: I0320 07:45:01.901365 5136 generic.go:334] "Generic (PLEG): container finished" podID="eb32e01f-d49f-4ba1-a1d4-c693765737e7" containerID="db23fd78398ebb125a153768bba0437d8fa09615fe8803585f26e9cdf330d2a9" 
exitCode=0 Mar 20 07:45:01 crc kubenswrapper[5136]: I0320 07:45:01.901423 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" event={"ID":"eb32e01f-d49f-4ba1-a1d4-c693765737e7","Type":"ContainerDied","Data":"db23fd78398ebb125a153768bba0437d8fa09615fe8803585f26e9cdf330d2a9"} Mar 20 07:45:01 crc kubenswrapper[5136]: I0320 07:45:01.901452 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" event={"ID":"eb32e01f-d49f-4ba1-a1d4-c693765737e7","Type":"ContainerStarted","Data":"e4cf3b4f21b7c971fc5586b70cdc1f18a9e7f6d1a0f3c953fd28b9795ef940c7"} Mar 20 07:45:03 crc kubenswrapper[5136]: I0320 07:45:03.226009 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" Mar 20 07:45:03 crc kubenswrapper[5136]: I0320 07:45:03.229699 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzzpc\" (UniqueName: \"kubernetes.io/projected/eb32e01f-d49f-4ba1-a1d4-c693765737e7-kube-api-access-vzzpc\") pod \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\" (UID: \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\") " Mar 20 07:45:03 crc kubenswrapper[5136]: I0320 07:45:03.229757 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb32e01f-d49f-4ba1-a1d4-c693765737e7-config-volume\") pod \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\" (UID: \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\") " Mar 20 07:45:03 crc kubenswrapper[5136]: I0320 07:45:03.229874 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb32e01f-d49f-4ba1-a1d4-c693765737e7-secret-volume\") pod \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\" (UID: \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\") " Mar 20 
07:45:03 crc kubenswrapper[5136]: I0320 07:45:03.230416 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb32e01f-d49f-4ba1-a1d4-c693765737e7-config-volume" (OuterVolumeSpecName: "config-volume") pod "eb32e01f-d49f-4ba1-a1d4-c693765737e7" (UID: "eb32e01f-d49f-4ba1-a1d4-c693765737e7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:45:03 crc kubenswrapper[5136]: I0320 07:45:03.234261 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb32e01f-d49f-4ba1-a1d4-c693765737e7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "eb32e01f-d49f-4ba1-a1d4-c693765737e7" (UID: "eb32e01f-d49f-4ba1-a1d4-c693765737e7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:45:03 crc kubenswrapper[5136]: I0320 07:45:03.234518 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb32e01f-d49f-4ba1-a1d4-c693765737e7-kube-api-access-vzzpc" (OuterVolumeSpecName: "kube-api-access-vzzpc") pod "eb32e01f-d49f-4ba1-a1d4-c693765737e7" (UID: "eb32e01f-d49f-4ba1-a1d4-c693765737e7"). InnerVolumeSpecName "kube-api-access-vzzpc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:45:03 crc kubenswrapper[5136]: I0320 07:45:03.330530 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzzpc\" (UniqueName: \"kubernetes.io/projected/eb32e01f-d49f-4ba1-a1d4-c693765737e7-kube-api-access-vzzpc\") on node \"crc\" DevicePath \"\"" Mar 20 07:45:03 crc kubenswrapper[5136]: I0320 07:45:03.330565 5136 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb32e01f-d49f-4ba1-a1d4-c693765737e7-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 07:45:03 crc kubenswrapper[5136]: I0320 07:45:03.330579 5136 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb32e01f-d49f-4ba1-a1d4-c693765737e7-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 07:45:03 crc kubenswrapper[5136]: I0320 07:45:03.916339 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" event={"ID":"eb32e01f-d49f-4ba1-a1d4-c693765737e7","Type":"ContainerDied","Data":"e4cf3b4f21b7c971fc5586b70cdc1f18a9e7f6d1a0f3c953fd28b9795ef940c7"} Mar 20 07:45:03 crc kubenswrapper[5136]: I0320 07:45:03.916381 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4cf3b4f21b7c971fc5586b70cdc1f18a9e7f6d1a0f3c953fd28b9795ef940c7" Mar 20 07:45:03 crc kubenswrapper[5136]: I0320 07:45:03.916391 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" Mar 20 07:45:04 crc kubenswrapper[5136]: I0320 07:45:04.306522 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj"] Mar 20 07:45:04 crc kubenswrapper[5136]: I0320 07:45:04.314851 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj"] Mar 20 07:45:04 crc kubenswrapper[5136]: I0320 07:45:04.417521 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd400575-ef96-4721-b617-29c85991f7f0" path="/var/lib/kubelet/pods/cd400575-ef96-4721-b617-29c85991f7f0/volumes" Mar 20 07:45:06 crc kubenswrapper[5136]: I0320 07:45:06.296421 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:45:06 crc kubenswrapper[5136]: I0320 07:45:06.296486 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:45:06 crc kubenswrapper[5136]: I0320 07:45:06.405898 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:45:06 crc kubenswrapper[5136]: I0320 07:45:06.984716 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:45:07 crc kubenswrapper[5136]: I0320 07:45:07.035794 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mzr5h"] Mar 20 07:45:08 crc kubenswrapper[5136]: I0320 07:45:08.955899 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mzr5h" podUID="32051439-253c-4626-bc98-701985ff87cf" containerName="registry-server" 
containerID="cri-o://2a902ccae9866d500279ccc8948169f1326e0c50ce1f3c533cbbfa6d3a4675ec" gracePeriod=2 Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.355048 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.441526 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32051439-253c-4626-bc98-701985ff87cf-utilities\") pod \"32051439-253c-4626-bc98-701985ff87cf\" (UID: \"32051439-253c-4626-bc98-701985ff87cf\") " Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.441712 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76lm9\" (UniqueName: \"kubernetes.io/projected/32051439-253c-4626-bc98-701985ff87cf-kube-api-access-76lm9\") pod \"32051439-253c-4626-bc98-701985ff87cf\" (UID: \"32051439-253c-4626-bc98-701985ff87cf\") " Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.441795 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32051439-253c-4626-bc98-701985ff87cf-catalog-content\") pod \"32051439-253c-4626-bc98-701985ff87cf\" (UID: \"32051439-253c-4626-bc98-701985ff87cf\") " Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.442760 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32051439-253c-4626-bc98-701985ff87cf-utilities" (OuterVolumeSpecName: "utilities") pod "32051439-253c-4626-bc98-701985ff87cf" (UID: "32051439-253c-4626-bc98-701985ff87cf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.443608 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32051439-253c-4626-bc98-701985ff87cf-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.455155 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32051439-253c-4626-bc98-701985ff87cf-kube-api-access-76lm9" (OuterVolumeSpecName: "kube-api-access-76lm9") pod "32051439-253c-4626-bc98-701985ff87cf" (UID: "32051439-253c-4626-bc98-701985ff87cf"). InnerVolumeSpecName "kube-api-access-76lm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.510219 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32051439-253c-4626-bc98-701985ff87cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32051439-253c-4626-bc98-701985ff87cf" (UID: "32051439-253c-4626-bc98-701985ff87cf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.544619 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76lm9\" (UniqueName: \"kubernetes.io/projected/32051439-253c-4626-bc98-701985ff87cf-kube-api-access-76lm9\") on node \"crc\" DevicePath \"\"" Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.544651 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32051439-253c-4626-bc98-701985ff87cf-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.966733 5136 generic.go:334] "Generic (PLEG): container finished" podID="32051439-253c-4626-bc98-701985ff87cf" containerID="2a902ccae9866d500279ccc8948169f1326e0c50ce1f3c533cbbfa6d3a4675ec" exitCode=0 Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.966798 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzr5h" event={"ID":"32051439-253c-4626-bc98-701985ff87cf","Type":"ContainerDied","Data":"2a902ccae9866d500279ccc8948169f1326e0c50ce1f3c533cbbfa6d3a4675ec"} Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.966849 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.966866 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzr5h" event={"ID":"32051439-253c-4626-bc98-701985ff87cf","Type":"ContainerDied","Data":"b13ec984c4bc16d74f2087182c5cdff319920af93d300e004f263ca60bc836fb"} Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.966898 5136 scope.go:117] "RemoveContainer" containerID="2a902ccae9866d500279ccc8948169f1326e0c50ce1f3c533cbbfa6d3a4675ec" Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.997318 5136 scope.go:117] "RemoveContainer" containerID="fb069e8de83eb079ed737945d738d486fae08bdc5ea74d1c1b8f34268a73bf20" Mar 20 07:45:10 crc kubenswrapper[5136]: I0320 07:45:10.024337 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mzr5h"] Mar 20 07:45:10 crc kubenswrapper[5136]: I0320 07:45:10.032285 5136 scope.go:117] "RemoveContainer" containerID="155d5a1866f72955aa5da99c9541dacbdf54245c448286a92a53191263c92ddd" Mar 20 07:45:10 crc kubenswrapper[5136]: I0320 07:45:10.033634 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mzr5h"] Mar 20 07:45:10 crc kubenswrapper[5136]: I0320 07:45:10.057468 5136 scope.go:117] "RemoveContainer" containerID="2a902ccae9866d500279ccc8948169f1326e0c50ce1f3c533cbbfa6d3a4675ec" Mar 20 07:45:10 crc kubenswrapper[5136]: E0320 07:45:10.057798 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a902ccae9866d500279ccc8948169f1326e0c50ce1f3c533cbbfa6d3a4675ec\": container with ID starting with 2a902ccae9866d500279ccc8948169f1326e0c50ce1f3c533cbbfa6d3a4675ec not found: ID does not exist" containerID="2a902ccae9866d500279ccc8948169f1326e0c50ce1f3c533cbbfa6d3a4675ec" Mar 20 07:45:10 crc kubenswrapper[5136]: I0320 07:45:10.057845 5136 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a902ccae9866d500279ccc8948169f1326e0c50ce1f3c533cbbfa6d3a4675ec"} err="failed to get container status \"2a902ccae9866d500279ccc8948169f1326e0c50ce1f3c533cbbfa6d3a4675ec\": rpc error: code = NotFound desc = could not find container \"2a902ccae9866d500279ccc8948169f1326e0c50ce1f3c533cbbfa6d3a4675ec\": container with ID starting with 2a902ccae9866d500279ccc8948169f1326e0c50ce1f3c533cbbfa6d3a4675ec not found: ID does not exist" Mar 20 07:45:10 crc kubenswrapper[5136]: I0320 07:45:10.057865 5136 scope.go:117] "RemoveContainer" containerID="fb069e8de83eb079ed737945d738d486fae08bdc5ea74d1c1b8f34268a73bf20" Mar 20 07:45:10 crc kubenswrapper[5136]: E0320 07:45:10.058040 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb069e8de83eb079ed737945d738d486fae08bdc5ea74d1c1b8f34268a73bf20\": container with ID starting with fb069e8de83eb079ed737945d738d486fae08bdc5ea74d1c1b8f34268a73bf20 not found: ID does not exist" containerID="fb069e8de83eb079ed737945d738d486fae08bdc5ea74d1c1b8f34268a73bf20" Mar 20 07:45:10 crc kubenswrapper[5136]: I0320 07:45:10.058060 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb069e8de83eb079ed737945d738d486fae08bdc5ea74d1c1b8f34268a73bf20"} err="failed to get container status \"fb069e8de83eb079ed737945d738d486fae08bdc5ea74d1c1b8f34268a73bf20\": rpc error: code = NotFound desc = could not find container \"fb069e8de83eb079ed737945d738d486fae08bdc5ea74d1c1b8f34268a73bf20\": container with ID starting with fb069e8de83eb079ed737945d738d486fae08bdc5ea74d1c1b8f34268a73bf20 not found: ID does not exist" Mar 20 07:45:10 crc kubenswrapper[5136]: I0320 07:45:10.058073 5136 scope.go:117] "RemoveContainer" containerID="155d5a1866f72955aa5da99c9541dacbdf54245c448286a92a53191263c92ddd" Mar 20 07:45:10 crc kubenswrapper[5136]: E0320 
07:45:10.058494 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"155d5a1866f72955aa5da99c9541dacbdf54245c448286a92a53191263c92ddd\": container with ID starting with 155d5a1866f72955aa5da99c9541dacbdf54245c448286a92a53191263c92ddd not found: ID does not exist" containerID="155d5a1866f72955aa5da99c9541dacbdf54245c448286a92a53191263c92ddd" Mar 20 07:45:10 crc kubenswrapper[5136]: I0320 07:45:10.058521 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"155d5a1866f72955aa5da99c9541dacbdf54245c448286a92a53191263c92ddd"} err="failed to get container status \"155d5a1866f72955aa5da99c9541dacbdf54245c448286a92a53191263c92ddd\": rpc error: code = NotFound desc = could not find container \"155d5a1866f72955aa5da99c9541dacbdf54245c448286a92a53191263c92ddd\": container with ID starting with 155d5a1866f72955aa5da99c9541dacbdf54245c448286a92a53191263c92ddd not found: ID does not exist" Mar 20 07:45:10 crc kubenswrapper[5136]: I0320 07:45:10.455071 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32051439-253c-4626-bc98-701985ff87cf" path="/var/lib/kubelet/pods/32051439-253c-4626-bc98-701985ff87cf/volumes" Mar 20 07:45:14 crc kubenswrapper[5136]: I0320 07:45:14.398161 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:45:14 crc kubenswrapper[5136]: E0320 07:45:14.399398 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:45:29 crc kubenswrapper[5136]: I0320 07:45:29.397115 
5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:45:29 crc kubenswrapper[5136]: E0320 07:45:29.398030 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:45:34 crc kubenswrapper[5136]: I0320 07:45:34.578741 5136 scope.go:117] "RemoveContainer" containerID="3ae7890d536278f5580d52b91ca1ce94c8e1b0783ea4d154db2f9c059b03bba9" Mar 20 07:45:40 crc kubenswrapper[5136]: I0320 07:45:40.396165 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:45:40 crc kubenswrapper[5136]: E0320 07:45:40.396995 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.581221 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zqkcz"] Mar 20 07:45:46 crc kubenswrapper[5136]: E0320 07:45:46.582300 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb32e01f-d49f-4ba1-a1d4-c693765737e7" containerName="collect-profiles" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.582333 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb32e01f-d49f-4ba1-a1d4-c693765737e7" 
containerName="collect-profiles" Mar 20 07:45:46 crc kubenswrapper[5136]: E0320 07:45:46.582366 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32051439-253c-4626-bc98-701985ff87cf" containerName="extract-content" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.582402 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="32051439-253c-4626-bc98-701985ff87cf" containerName="extract-content" Mar 20 07:45:46 crc kubenswrapper[5136]: E0320 07:45:46.582468 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32051439-253c-4626-bc98-701985ff87cf" containerName="extract-utilities" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.582533 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="32051439-253c-4626-bc98-701985ff87cf" containerName="extract-utilities" Mar 20 07:45:46 crc kubenswrapper[5136]: E0320 07:45:46.582558 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32051439-253c-4626-bc98-701985ff87cf" containerName="registry-server" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.582574 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="32051439-253c-4626-bc98-701985ff87cf" containerName="registry-server" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.582925 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="32051439-253c-4626-bc98-701985ff87cf" containerName="registry-server" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.582980 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb32e01f-d49f-4ba1-a1d4-c693765737e7" containerName="collect-profiles" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.585304 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.592259 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zqkcz"] Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.697377 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwfxs\" (UniqueName: \"kubernetes.io/projected/ccc42a65-cfdd-4b03-aecb-404be7591cfb-kube-api-access-pwfxs\") pod \"redhat-operators-zqkcz\" (UID: \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\") " pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.697464 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccc42a65-cfdd-4b03-aecb-404be7591cfb-utilities\") pod \"redhat-operators-zqkcz\" (UID: \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\") " pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.697535 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccc42a65-cfdd-4b03-aecb-404be7591cfb-catalog-content\") pod \"redhat-operators-zqkcz\" (UID: \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\") " pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.798658 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwfxs\" (UniqueName: \"kubernetes.io/projected/ccc42a65-cfdd-4b03-aecb-404be7591cfb-kube-api-access-pwfxs\") pod \"redhat-operators-zqkcz\" (UID: \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\") " pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.799091 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccc42a65-cfdd-4b03-aecb-404be7591cfb-utilities\") pod \"redhat-operators-zqkcz\" (UID: \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\") " pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.799256 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccc42a65-cfdd-4b03-aecb-404be7591cfb-catalog-content\") pod \"redhat-operators-zqkcz\" (UID: \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\") " pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.800295 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccc42a65-cfdd-4b03-aecb-404be7591cfb-catalog-content\") pod \"redhat-operators-zqkcz\" (UID: \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\") " pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.800410 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccc42a65-cfdd-4b03-aecb-404be7591cfb-utilities\") pod \"redhat-operators-zqkcz\" (UID: \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\") " pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.835972 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwfxs\" (UniqueName: \"kubernetes.io/projected/ccc42a65-cfdd-4b03-aecb-404be7591cfb-kube-api-access-pwfxs\") pod \"redhat-operators-zqkcz\" (UID: \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\") " pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.922790 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:45:47 crc kubenswrapper[5136]: I0320 07:45:47.366945 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zqkcz"] Mar 20 07:45:48 crc kubenswrapper[5136]: I0320 07:45:48.339961 5136 generic.go:334] "Generic (PLEG): container finished" podID="ccc42a65-cfdd-4b03-aecb-404be7591cfb" containerID="6906d8409e5858bfb791bed2e0a7754aeaafe822cde05402672ea1f63dce41e3" exitCode=0 Mar 20 07:45:48 crc kubenswrapper[5136]: I0320 07:45:48.340019 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqkcz" event={"ID":"ccc42a65-cfdd-4b03-aecb-404be7591cfb","Type":"ContainerDied","Data":"6906d8409e5858bfb791bed2e0a7754aeaafe822cde05402672ea1f63dce41e3"} Mar 20 07:45:48 crc kubenswrapper[5136]: I0320 07:45:48.340296 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqkcz" event={"ID":"ccc42a65-cfdd-4b03-aecb-404be7591cfb","Type":"ContainerStarted","Data":"b8c67125bd359d999fbee971a3189826bd59dfe503f1312a57ccf93e170a140d"} Mar 20 07:45:49 crc kubenswrapper[5136]: I0320 07:45:49.348490 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqkcz" event={"ID":"ccc42a65-cfdd-4b03-aecb-404be7591cfb","Type":"ContainerStarted","Data":"5f40383097fb516d4d7e121c85f50aca3d283b345f7ee750a11cc9b80bac3930"} Mar 20 07:45:50 crc kubenswrapper[5136]: I0320 07:45:50.360629 5136 generic.go:334] "Generic (PLEG): container finished" podID="ccc42a65-cfdd-4b03-aecb-404be7591cfb" containerID="5f40383097fb516d4d7e121c85f50aca3d283b345f7ee750a11cc9b80bac3930" exitCode=0 Mar 20 07:45:50 crc kubenswrapper[5136]: I0320 07:45:50.360712 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqkcz" 
event={"ID":"ccc42a65-cfdd-4b03-aecb-404be7591cfb","Type":"ContainerDied","Data":"5f40383097fb516d4d7e121c85f50aca3d283b345f7ee750a11cc9b80bac3930"} Mar 20 07:45:51 crc kubenswrapper[5136]: I0320 07:45:51.397110 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:45:51 crc kubenswrapper[5136]: E0320 07:45:51.398046 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:45:52 crc kubenswrapper[5136]: I0320 07:45:52.375396 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqkcz" event={"ID":"ccc42a65-cfdd-4b03-aecb-404be7591cfb","Type":"ContainerStarted","Data":"bd3587dd293b2d0d4bf32d8b26ba6b4a03274c308fda3c951b3a4e835a0531f4"} Mar 20 07:45:52 crc kubenswrapper[5136]: I0320 07:45:52.407901 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zqkcz" podStartSLOduration=3.535086786 podStartE2EDuration="6.407881385s" podCreationTimestamp="2026-03-20 07:45:46 +0000 UTC" firstStartedPulling="2026-03-20 07:45:48.342182201 +0000 UTC m=+3380.601493362" lastFinishedPulling="2026-03-20 07:45:51.21497681 +0000 UTC m=+3383.474287961" observedRunningTime="2026-03-20 07:45:52.399069983 +0000 UTC m=+3384.658381144" watchObservedRunningTime="2026-03-20 07:45:52.407881385 +0000 UTC m=+3384.667192536" Mar 20 07:45:56 crc kubenswrapper[5136]: I0320 07:45:56.923615 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:45:56 crc kubenswrapper[5136]: 
I0320 07:45:56.927307 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:45:57 crc kubenswrapper[5136]: I0320 07:45:57.979623 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zqkcz" podUID="ccc42a65-cfdd-4b03-aecb-404be7591cfb" containerName="registry-server" probeResult="failure" output=< Mar 20 07:45:57 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 07:45:57 crc kubenswrapper[5136]: > Mar 20 07:46:00 crc kubenswrapper[5136]: I0320 07:46:00.166280 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566546-hbdmv"] Mar 20 07:46:00 crc kubenswrapper[5136]: I0320 07:46:00.167298 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566546-hbdmv" Mar 20 07:46:00 crc kubenswrapper[5136]: I0320 07:46:00.169193 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:46:00 crc kubenswrapper[5136]: I0320 07:46:00.170465 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:46:00 crc kubenswrapper[5136]: I0320 07:46:00.170468 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:46:00 crc kubenswrapper[5136]: I0320 07:46:00.181572 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566546-hbdmv"] Mar 20 07:46:00 crc kubenswrapper[5136]: I0320 07:46:00.299274 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkwr2\" (UniqueName: \"kubernetes.io/projected/d740b018-8653-4631-8138-93e535687c7b-kube-api-access-fkwr2\") pod \"auto-csr-approver-29566546-hbdmv\" (UID: 
\"d740b018-8653-4631-8138-93e535687c7b\") " pod="openshift-infra/auto-csr-approver-29566546-hbdmv" Mar 20 07:46:00 crc kubenswrapper[5136]: I0320 07:46:00.400654 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkwr2\" (UniqueName: \"kubernetes.io/projected/d740b018-8653-4631-8138-93e535687c7b-kube-api-access-fkwr2\") pod \"auto-csr-approver-29566546-hbdmv\" (UID: \"d740b018-8653-4631-8138-93e535687c7b\") " pod="openshift-infra/auto-csr-approver-29566546-hbdmv" Mar 20 07:46:00 crc kubenswrapper[5136]: I0320 07:46:00.427622 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkwr2\" (UniqueName: \"kubernetes.io/projected/d740b018-8653-4631-8138-93e535687c7b-kube-api-access-fkwr2\") pod \"auto-csr-approver-29566546-hbdmv\" (UID: \"d740b018-8653-4631-8138-93e535687c7b\") " pod="openshift-infra/auto-csr-approver-29566546-hbdmv" Mar 20 07:46:00 crc kubenswrapper[5136]: I0320 07:46:00.505111 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566546-hbdmv" Mar 20 07:46:00 crc kubenswrapper[5136]: I0320 07:46:00.942610 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566546-hbdmv"] Mar 20 07:46:01 crc kubenswrapper[5136]: I0320 07:46:01.464449 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566546-hbdmv" event={"ID":"d740b018-8653-4631-8138-93e535687c7b","Type":"ContainerStarted","Data":"13314b30b0adc87096e0c3c8ede51e24fbc40cfeed01b2fce0a0e6136dde31b3"} Mar 20 07:46:02 crc kubenswrapper[5136]: I0320 07:46:02.472103 5136 generic.go:334] "Generic (PLEG): container finished" podID="d740b018-8653-4631-8138-93e535687c7b" containerID="9bfd391ee5ff09e988d9f0f680d2e722fd7f235ba526ec5418b765f7a572ee8f" exitCode=0 Mar 20 07:46:02 crc kubenswrapper[5136]: I0320 07:46:02.472155 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566546-hbdmv" event={"ID":"d740b018-8653-4631-8138-93e535687c7b","Type":"ContainerDied","Data":"9bfd391ee5ff09e988d9f0f680d2e722fd7f235ba526ec5418b765f7a572ee8f"} Mar 20 07:46:03 crc kubenswrapper[5136]: I0320 07:46:03.720044 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566546-hbdmv" Mar 20 07:46:03 crc kubenswrapper[5136]: I0320 07:46:03.850145 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkwr2\" (UniqueName: \"kubernetes.io/projected/d740b018-8653-4631-8138-93e535687c7b-kube-api-access-fkwr2\") pod \"d740b018-8653-4631-8138-93e535687c7b\" (UID: \"d740b018-8653-4631-8138-93e535687c7b\") " Mar 20 07:46:03 crc kubenswrapper[5136]: I0320 07:46:03.855355 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d740b018-8653-4631-8138-93e535687c7b-kube-api-access-fkwr2" (OuterVolumeSpecName: "kube-api-access-fkwr2") pod "d740b018-8653-4631-8138-93e535687c7b" (UID: "d740b018-8653-4631-8138-93e535687c7b"). InnerVolumeSpecName "kube-api-access-fkwr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:46:03 crc kubenswrapper[5136]: I0320 07:46:03.951561 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkwr2\" (UniqueName: \"kubernetes.io/projected/d740b018-8653-4631-8138-93e535687c7b-kube-api-access-fkwr2\") on node \"crc\" DevicePath \"\"" Mar 20 07:46:04 crc kubenswrapper[5136]: I0320 07:46:04.487084 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566546-hbdmv" event={"ID":"d740b018-8653-4631-8138-93e535687c7b","Type":"ContainerDied","Data":"13314b30b0adc87096e0c3c8ede51e24fbc40cfeed01b2fce0a0e6136dde31b3"} Mar 20 07:46:04 crc kubenswrapper[5136]: I0320 07:46:04.487153 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13314b30b0adc87096e0c3c8ede51e24fbc40cfeed01b2fce0a0e6136dde31b3" Mar 20 07:46:04 crc kubenswrapper[5136]: I0320 07:46:04.487110 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566546-hbdmv" Mar 20 07:46:04 crc kubenswrapper[5136]: I0320 07:46:04.808663 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566540-q2mqq"] Mar 20 07:46:04 crc kubenswrapper[5136]: I0320 07:46:04.816108 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566540-q2mqq"] Mar 20 07:46:05 crc kubenswrapper[5136]: I0320 07:46:05.396901 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:46:05 crc kubenswrapper[5136]: E0320 07:46:05.397143 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:46:06 crc kubenswrapper[5136]: I0320 07:46:06.408382 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13ab4686-525c-4931-93d9-5b71ec6644ee" path="/var/lib/kubelet/pods/13ab4686-525c-4931-93d9-5b71ec6644ee/volumes" Mar 20 07:46:06 crc kubenswrapper[5136]: I0320 07:46:06.994538 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:46:07 crc kubenswrapper[5136]: I0320 07:46:07.067975 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:46:07 crc kubenswrapper[5136]: I0320 07:46:07.267119 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zqkcz"] Mar 20 07:46:08 crc kubenswrapper[5136]: I0320 07:46:08.515490 5136 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zqkcz" podUID="ccc42a65-cfdd-4b03-aecb-404be7591cfb" containerName="registry-server" containerID="cri-o://bd3587dd293b2d0d4bf32d8b26ba6b4a03274c308fda3c951b3a4e835a0531f4" gracePeriod=2 Mar 20 07:46:08 crc kubenswrapper[5136]: I0320 07:46:08.881030 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.023106 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccc42a65-cfdd-4b03-aecb-404be7591cfb-catalog-content\") pod \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\" (UID: \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\") " Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.023158 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccc42a65-cfdd-4b03-aecb-404be7591cfb-utilities\") pod \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\" (UID: \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\") " Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.023222 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwfxs\" (UniqueName: \"kubernetes.io/projected/ccc42a65-cfdd-4b03-aecb-404be7591cfb-kube-api-access-pwfxs\") pod \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\" (UID: \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\") " Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.024440 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccc42a65-cfdd-4b03-aecb-404be7591cfb-utilities" (OuterVolumeSpecName: "utilities") pod "ccc42a65-cfdd-4b03-aecb-404be7591cfb" (UID: "ccc42a65-cfdd-4b03-aecb-404be7591cfb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.030870 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccc42a65-cfdd-4b03-aecb-404be7591cfb-kube-api-access-pwfxs" (OuterVolumeSpecName: "kube-api-access-pwfxs") pod "ccc42a65-cfdd-4b03-aecb-404be7591cfb" (UID: "ccc42a65-cfdd-4b03-aecb-404be7591cfb"). InnerVolumeSpecName "kube-api-access-pwfxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.125328 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccc42a65-cfdd-4b03-aecb-404be7591cfb-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.125369 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwfxs\" (UniqueName: \"kubernetes.io/projected/ccc42a65-cfdd-4b03-aecb-404be7591cfb-kube-api-access-pwfxs\") on node \"crc\" DevicePath \"\"" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.148849 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccc42a65-cfdd-4b03-aecb-404be7591cfb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ccc42a65-cfdd-4b03-aecb-404be7591cfb" (UID: "ccc42a65-cfdd-4b03-aecb-404be7591cfb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.226185 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccc42a65-cfdd-4b03-aecb-404be7591cfb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.527039 5136 generic.go:334] "Generic (PLEG): container finished" podID="ccc42a65-cfdd-4b03-aecb-404be7591cfb" containerID="bd3587dd293b2d0d4bf32d8b26ba6b4a03274c308fda3c951b3a4e835a0531f4" exitCode=0 Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.527080 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqkcz" event={"ID":"ccc42a65-cfdd-4b03-aecb-404be7591cfb","Type":"ContainerDied","Data":"bd3587dd293b2d0d4bf32d8b26ba6b4a03274c308fda3c951b3a4e835a0531f4"} Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.527109 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqkcz" event={"ID":"ccc42a65-cfdd-4b03-aecb-404be7591cfb","Type":"ContainerDied","Data":"b8c67125bd359d999fbee971a3189826bd59dfe503f1312a57ccf93e170a140d"} Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.527116 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.527127 5136 scope.go:117] "RemoveContainer" containerID="bd3587dd293b2d0d4bf32d8b26ba6b4a03274c308fda3c951b3a4e835a0531f4" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.548940 5136 scope.go:117] "RemoveContainer" containerID="5f40383097fb516d4d7e121c85f50aca3d283b345f7ee750a11cc9b80bac3930" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.582373 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zqkcz"] Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.586027 5136 scope.go:117] "RemoveContainer" containerID="6906d8409e5858bfb791bed2e0a7754aeaafe822cde05402672ea1f63dce41e3" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.592937 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zqkcz"] Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.608289 5136 scope.go:117] "RemoveContainer" containerID="bd3587dd293b2d0d4bf32d8b26ba6b4a03274c308fda3c951b3a4e835a0531f4" Mar 20 07:46:09 crc kubenswrapper[5136]: E0320 07:46:09.609356 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd3587dd293b2d0d4bf32d8b26ba6b4a03274c308fda3c951b3a4e835a0531f4\": container with ID starting with bd3587dd293b2d0d4bf32d8b26ba6b4a03274c308fda3c951b3a4e835a0531f4 not found: ID does not exist" containerID="bd3587dd293b2d0d4bf32d8b26ba6b4a03274c308fda3c951b3a4e835a0531f4" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.609396 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd3587dd293b2d0d4bf32d8b26ba6b4a03274c308fda3c951b3a4e835a0531f4"} err="failed to get container status \"bd3587dd293b2d0d4bf32d8b26ba6b4a03274c308fda3c951b3a4e835a0531f4\": rpc error: code = NotFound desc = could not find container 
\"bd3587dd293b2d0d4bf32d8b26ba6b4a03274c308fda3c951b3a4e835a0531f4\": container with ID starting with bd3587dd293b2d0d4bf32d8b26ba6b4a03274c308fda3c951b3a4e835a0531f4 not found: ID does not exist" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.609420 5136 scope.go:117] "RemoveContainer" containerID="5f40383097fb516d4d7e121c85f50aca3d283b345f7ee750a11cc9b80bac3930" Mar 20 07:46:09 crc kubenswrapper[5136]: E0320 07:46:09.609792 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f40383097fb516d4d7e121c85f50aca3d283b345f7ee750a11cc9b80bac3930\": container with ID starting with 5f40383097fb516d4d7e121c85f50aca3d283b345f7ee750a11cc9b80bac3930 not found: ID does not exist" containerID="5f40383097fb516d4d7e121c85f50aca3d283b345f7ee750a11cc9b80bac3930" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.609980 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f40383097fb516d4d7e121c85f50aca3d283b345f7ee750a11cc9b80bac3930"} err="failed to get container status \"5f40383097fb516d4d7e121c85f50aca3d283b345f7ee750a11cc9b80bac3930\": rpc error: code = NotFound desc = could not find container \"5f40383097fb516d4d7e121c85f50aca3d283b345f7ee750a11cc9b80bac3930\": container with ID starting with 5f40383097fb516d4d7e121c85f50aca3d283b345f7ee750a11cc9b80bac3930 not found: ID does not exist" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.610011 5136 scope.go:117] "RemoveContainer" containerID="6906d8409e5858bfb791bed2e0a7754aeaafe822cde05402672ea1f63dce41e3" Mar 20 07:46:09 crc kubenswrapper[5136]: E0320 07:46:09.610374 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6906d8409e5858bfb791bed2e0a7754aeaafe822cde05402672ea1f63dce41e3\": container with ID starting with 6906d8409e5858bfb791bed2e0a7754aeaafe822cde05402672ea1f63dce41e3 not found: ID does not exist" 
containerID="6906d8409e5858bfb791bed2e0a7754aeaafe822cde05402672ea1f63dce41e3" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.610424 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6906d8409e5858bfb791bed2e0a7754aeaafe822cde05402672ea1f63dce41e3"} err="failed to get container status \"6906d8409e5858bfb791bed2e0a7754aeaafe822cde05402672ea1f63dce41e3\": rpc error: code = NotFound desc = could not find container \"6906d8409e5858bfb791bed2e0a7754aeaafe822cde05402672ea1f63dce41e3\": container with ID starting with 6906d8409e5858bfb791bed2e0a7754aeaafe822cde05402672ea1f63dce41e3 not found: ID does not exist" Mar 20 07:46:10 crc kubenswrapper[5136]: I0320 07:46:10.405742 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccc42a65-cfdd-4b03-aecb-404be7591cfb" path="/var/lib/kubelet/pods/ccc42a65-cfdd-4b03-aecb-404be7591cfb/volumes" Mar 20 07:46:17 crc kubenswrapper[5136]: I0320 07:46:17.396403 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:46:17 crc kubenswrapper[5136]: E0320 07:46:17.397204 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:46:32 crc kubenswrapper[5136]: I0320 07:46:32.396416 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:46:32 crc kubenswrapper[5136]: E0320 07:46:32.396982 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:46:34 crc kubenswrapper[5136]: I0320 07:46:34.645615 5136 scope.go:117] "RemoveContainer" containerID="252260f3a58979042bf8b21321cd53a2147f00019a6012d4dfcab45147ceb6a9" Mar 20 07:46:44 crc kubenswrapper[5136]: I0320 07:46:44.396467 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:46:44 crc kubenswrapper[5136]: E0320 07:46:44.397443 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:46:58 crc kubenswrapper[5136]: I0320 07:46:58.404182 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:46:58 crc kubenswrapper[5136]: E0320 07:46:58.404981 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:47:13 crc kubenswrapper[5136]: I0320 07:47:13.397475 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:47:13 crc 
kubenswrapper[5136]: E0320 07:47:13.398374 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:47:24 crc kubenswrapper[5136]: I0320 07:47:24.397052 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:47:24 crc kubenswrapper[5136]: E0320 07:47:24.398004 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:47:35 crc kubenswrapper[5136]: I0320 07:47:35.397012 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:47:35 crc kubenswrapper[5136]: E0320 07:47:35.398529 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:47:50 crc kubenswrapper[5136]: I0320 07:47:50.396975 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 
20 07:47:50 crc kubenswrapper[5136]: E0320 07:47:50.397750 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.167657 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566548-bbghf"] Mar 20 07:48:00 crc kubenswrapper[5136]: E0320 07:48:00.168767 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc42a65-cfdd-4b03-aecb-404be7591cfb" containerName="registry-server" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.168790 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc42a65-cfdd-4b03-aecb-404be7591cfb" containerName="registry-server" Mar 20 07:48:00 crc kubenswrapper[5136]: E0320 07:48:00.168836 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d740b018-8653-4631-8138-93e535687c7b" containerName="oc" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.168852 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d740b018-8653-4631-8138-93e535687c7b" containerName="oc" Mar 20 07:48:00 crc kubenswrapper[5136]: E0320 07:48:00.168882 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc42a65-cfdd-4b03-aecb-404be7591cfb" containerName="extract-content" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.168896 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc42a65-cfdd-4b03-aecb-404be7591cfb" containerName="extract-content" Mar 20 07:48:00 crc kubenswrapper[5136]: E0320 07:48:00.168930 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc42a65-cfdd-4b03-aecb-404be7591cfb" 
containerName="extract-utilities" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.168944 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc42a65-cfdd-4b03-aecb-404be7591cfb" containerName="extract-utilities" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.169213 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccc42a65-cfdd-4b03-aecb-404be7591cfb" containerName="registry-server" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.169251 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d740b018-8653-4631-8138-93e535687c7b" containerName="oc" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.172082 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566548-bbghf" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.174742 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.175119 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.175336 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.179107 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566548-bbghf"] Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.183899 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgmzn\" (UniqueName: \"kubernetes.io/projected/eae2b10f-99a8-4ada-a8fb-d674d6e2dc66-kube-api-access-jgmzn\") pod \"auto-csr-approver-29566548-bbghf\" (UID: \"eae2b10f-99a8-4ada-a8fb-d674d6e2dc66\") " pod="openshift-infra/auto-csr-approver-29566548-bbghf" Mar 20 07:48:00 crc kubenswrapper[5136]: 
I0320 07:48:00.285069 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgmzn\" (UniqueName: \"kubernetes.io/projected/eae2b10f-99a8-4ada-a8fb-d674d6e2dc66-kube-api-access-jgmzn\") pod \"auto-csr-approver-29566548-bbghf\" (UID: \"eae2b10f-99a8-4ada-a8fb-d674d6e2dc66\") " pod="openshift-infra/auto-csr-approver-29566548-bbghf" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.307941 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgmzn\" (UniqueName: \"kubernetes.io/projected/eae2b10f-99a8-4ada-a8fb-d674d6e2dc66-kube-api-access-jgmzn\") pod \"auto-csr-approver-29566548-bbghf\" (UID: \"eae2b10f-99a8-4ada-a8fb-d674d6e2dc66\") " pod="openshift-infra/auto-csr-approver-29566548-bbghf" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.500425 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566548-bbghf" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.965845 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566548-bbghf"] Mar 20 07:48:01 crc kubenswrapper[5136]: I0320 07:48:01.471991 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566548-bbghf" event={"ID":"eae2b10f-99a8-4ada-a8fb-d674d6e2dc66","Type":"ContainerStarted","Data":"1433de0b16703d4f4d1a60e645bfe424883afa10f19be31feb9b5b1ff0ed4a28"} Mar 20 07:48:02 crc kubenswrapper[5136]: I0320 07:48:02.479874 5136 generic.go:334] "Generic (PLEG): container finished" podID="eae2b10f-99a8-4ada-a8fb-d674d6e2dc66" containerID="eda4db7731b82a54ef6f8997e413d44c2ceb0549c49bbb5b7671591ccebd691e" exitCode=0 Mar 20 07:48:02 crc kubenswrapper[5136]: I0320 07:48:02.479941 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566548-bbghf" 
event={"ID":"eae2b10f-99a8-4ada-a8fb-d674d6e2dc66","Type":"ContainerDied","Data":"eda4db7731b82a54ef6f8997e413d44c2ceb0549c49bbb5b7671591ccebd691e"} Mar 20 07:48:03 crc kubenswrapper[5136]: I0320 07:48:03.396521 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:48:03 crc kubenswrapper[5136]: E0320 07:48:03.396752 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:48:03 crc kubenswrapper[5136]: I0320 07:48:03.801231 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566548-bbghf" Mar 20 07:48:03 crc kubenswrapper[5136]: I0320 07:48:03.841500 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgmzn\" (UniqueName: \"kubernetes.io/projected/eae2b10f-99a8-4ada-a8fb-d674d6e2dc66-kube-api-access-jgmzn\") pod \"eae2b10f-99a8-4ada-a8fb-d674d6e2dc66\" (UID: \"eae2b10f-99a8-4ada-a8fb-d674d6e2dc66\") " Mar 20 07:48:03 crc kubenswrapper[5136]: I0320 07:48:03.847430 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eae2b10f-99a8-4ada-a8fb-d674d6e2dc66-kube-api-access-jgmzn" (OuterVolumeSpecName: "kube-api-access-jgmzn") pod "eae2b10f-99a8-4ada-a8fb-d674d6e2dc66" (UID: "eae2b10f-99a8-4ada-a8fb-d674d6e2dc66"). InnerVolumeSpecName "kube-api-access-jgmzn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:48:03 crc kubenswrapper[5136]: I0320 07:48:03.943174 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgmzn\" (UniqueName: \"kubernetes.io/projected/eae2b10f-99a8-4ada-a8fb-d674d6e2dc66-kube-api-access-jgmzn\") on node \"crc\" DevicePath \"\"" Mar 20 07:48:04 crc kubenswrapper[5136]: I0320 07:48:04.496939 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566548-bbghf" event={"ID":"eae2b10f-99a8-4ada-a8fb-d674d6e2dc66","Type":"ContainerDied","Data":"1433de0b16703d4f4d1a60e645bfe424883afa10f19be31feb9b5b1ff0ed4a28"} Mar 20 07:48:04 crc kubenswrapper[5136]: I0320 07:48:04.496976 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1433de0b16703d4f4d1a60e645bfe424883afa10f19be31feb9b5b1ff0ed4a28" Mar 20 07:48:04 crc kubenswrapper[5136]: I0320 07:48:04.497019 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566548-bbghf" Mar 20 07:48:04 crc kubenswrapper[5136]: I0320 07:48:04.881154 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566542-pds9m"] Mar 20 07:48:04 crc kubenswrapper[5136]: I0320 07:48:04.887804 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566542-pds9m"] Mar 20 07:48:06 crc kubenswrapper[5136]: I0320 07:48:06.413589 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17d864d8-8238-4e66-b9ac-d03d95596254" path="/var/lib/kubelet/pods/17d864d8-8238-4e66-b9ac-d03d95596254/volumes" Mar 20 07:48:16 crc kubenswrapper[5136]: I0320 07:48:16.397888 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:48:17 crc kubenswrapper[5136]: I0320 07:48:17.597588 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"722fd4b0219dbc1893de5ddfff6bae8f2f9fc01b73ea47300dc2db9cdf75bb55"} Mar 20 07:48:34 crc kubenswrapper[5136]: I0320 07:48:34.746838 5136 scope.go:117] "RemoveContainer" containerID="7d44df1c73e9c1d9108526abbe2353b5337e03d920bac4de2652a37d15133fc6" Mar 20 07:50:00 crc kubenswrapper[5136]: I0320 07:50:00.171065 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566550-nxn8z"] Mar 20 07:50:00 crc kubenswrapper[5136]: E0320 07:50:00.172282 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae2b10f-99a8-4ada-a8fb-d674d6e2dc66" containerName="oc" Mar 20 07:50:00 crc kubenswrapper[5136]: I0320 07:50:00.172306 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="eae2b10f-99a8-4ada-a8fb-d674d6e2dc66" containerName="oc" Mar 20 07:50:00 crc kubenswrapper[5136]: I0320 07:50:00.172648 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="eae2b10f-99a8-4ada-a8fb-d674d6e2dc66" containerName="oc" Mar 20 07:50:00 crc kubenswrapper[5136]: I0320 07:50:00.173443 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566550-nxn8z" Mar 20 07:50:00 crc kubenswrapper[5136]: I0320 07:50:00.176213 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:50:00 crc kubenswrapper[5136]: I0320 07:50:00.176773 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:50:00 crc kubenswrapper[5136]: I0320 07:50:00.177455 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:50:00 crc kubenswrapper[5136]: I0320 07:50:00.182493 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566550-nxn8z"] Mar 20 07:50:00 crc kubenswrapper[5136]: I0320 07:50:00.227556 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmgpk\" (UniqueName: \"kubernetes.io/projected/9e892786-304f-4449-8303-227a30b2af0c-kube-api-access-wmgpk\") pod \"auto-csr-approver-29566550-nxn8z\" (UID: \"9e892786-304f-4449-8303-227a30b2af0c\") " pod="openshift-infra/auto-csr-approver-29566550-nxn8z" Mar 20 07:50:00 crc kubenswrapper[5136]: I0320 07:50:00.328760 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmgpk\" (UniqueName: \"kubernetes.io/projected/9e892786-304f-4449-8303-227a30b2af0c-kube-api-access-wmgpk\") pod \"auto-csr-approver-29566550-nxn8z\" (UID: \"9e892786-304f-4449-8303-227a30b2af0c\") " pod="openshift-infra/auto-csr-approver-29566550-nxn8z" Mar 20 07:50:00 crc kubenswrapper[5136]: I0320 07:50:00.352191 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmgpk\" (UniqueName: \"kubernetes.io/projected/9e892786-304f-4449-8303-227a30b2af0c-kube-api-access-wmgpk\") pod \"auto-csr-approver-29566550-nxn8z\" (UID: \"9e892786-304f-4449-8303-227a30b2af0c\") " 
pod="openshift-infra/auto-csr-approver-29566550-nxn8z" Mar 20 07:50:00 crc kubenswrapper[5136]: I0320 07:50:00.515028 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566550-nxn8z" Mar 20 07:50:01 crc kubenswrapper[5136]: I0320 07:50:00.954705 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566550-nxn8z"] Mar 20 07:50:01 crc kubenswrapper[5136]: I0320 07:50:00.962740 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 07:50:01 crc kubenswrapper[5136]: I0320 07:50:01.850696 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566550-nxn8z" event={"ID":"9e892786-304f-4449-8303-227a30b2af0c","Type":"ContainerStarted","Data":"438cc651f5e5b76703c57ebcad93173aa9662ff0fd2db4df85be5a41bf39a7ef"} Mar 20 07:50:02 crc kubenswrapper[5136]: I0320 07:50:02.861454 5136 generic.go:334] "Generic (PLEG): container finished" podID="9e892786-304f-4449-8303-227a30b2af0c" containerID="fbf51c0f85e48cadf70be318d12d8502f4a21ef24eddd694f9b07eebf9064ae5" exitCode=0 Mar 20 07:50:02 crc kubenswrapper[5136]: I0320 07:50:02.861539 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566550-nxn8z" event={"ID":"9e892786-304f-4449-8303-227a30b2af0c","Type":"ContainerDied","Data":"fbf51c0f85e48cadf70be318d12d8502f4a21ef24eddd694f9b07eebf9064ae5"} Mar 20 07:50:04 crc kubenswrapper[5136]: I0320 07:50:04.171643 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566550-nxn8z" Mar 20 07:50:04 crc kubenswrapper[5136]: I0320 07:50:04.285333 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmgpk\" (UniqueName: \"kubernetes.io/projected/9e892786-304f-4449-8303-227a30b2af0c-kube-api-access-wmgpk\") pod \"9e892786-304f-4449-8303-227a30b2af0c\" (UID: \"9e892786-304f-4449-8303-227a30b2af0c\") " Mar 20 07:50:04 crc kubenswrapper[5136]: I0320 07:50:04.291642 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e892786-304f-4449-8303-227a30b2af0c-kube-api-access-wmgpk" (OuterVolumeSpecName: "kube-api-access-wmgpk") pod "9e892786-304f-4449-8303-227a30b2af0c" (UID: "9e892786-304f-4449-8303-227a30b2af0c"). InnerVolumeSpecName "kube-api-access-wmgpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:50:04 crc kubenswrapper[5136]: I0320 07:50:04.387578 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmgpk\" (UniqueName: \"kubernetes.io/projected/9e892786-304f-4449-8303-227a30b2af0c-kube-api-access-wmgpk\") on node \"crc\" DevicePath \"\"" Mar 20 07:50:04 crc kubenswrapper[5136]: I0320 07:50:04.887298 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566550-nxn8z" event={"ID":"9e892786-304f-4449-8303-227a30b2af0c","Type":"ContainerDied","Data":"438cc651f5e5b76703c57ebcad93173aa9662ff0fd2db4df85be5a41bf39a7ef"} Mar 20 07:50:04 crc kubenswrapper[5136]: I0320 07:50:04.887362 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="438cc651f5e5b76703c57ebcad93173aa9662ff0fd2db4df85be5a41bf39a7ef" Mar 20 07:50:04 crc kubenswrapper[5136]: I0320 07:50:04.887390 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566550-nxn8z" Mar 20 07:50:05 crc kubenswrapper[5136]: I0320 07:50:05.266074 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566544-2kjwj"] Mar 20 07:50:05 crc kubenswrapper[5136]: I0320 07:50:05.274138 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566544-2kjwj"] Mar 20 07:50:06 crc kubenswrapper[5136]: I0320 07:50:06.411936 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26c6802e-62e8-47ba-b964-fde9f92ca8ef" path="/var/lib/kubelet/pods/26c6802e-62e8-47ba-b964-fde9f92ca8ef/volumes" Mar 20 07:50:34 crc kubenswrapper[5136]: I0320 07:50:34.831724 5136 scope.go:117] "RemoveContainer" containerID="340e29815927db9adaf364543d249649b4c4d562d5c4326419747f3242c8e07d" Mar 20 07:50:45 crc kubenswrapper[5136]: I0320 07:50:45.822599 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:50:45 crc kubenswrapper[5136]: I0320 07:50:45.823981 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:51:15 crc kubenswrapper[5136]: I0320 07:51:15.822189 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:51:15 crc kubenswrapper[5136]: 
I0320 07:51:15.822685 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:51:45 crc kubenswrapper[5136]: I0320 07:51:45.822477 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:51:45 crc kubenswrapper[5136]: I0320 07:51:45.823457 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:51:45 crc kubenswrapper[5136]: I0320 07:51:45.823540 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 07:51:45 crc kubenswrapper[5136]: I0320 07:51:45.825257 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"722fd4b0219dbc1893de5ddfff6bae8f2f9fc01b73ea47300dc2db9cdf75bb55"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 07:51:45 crc kubenswrapper[5136]: I0320 07:51:45.825330 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" 
containerName="machine-config-daemon" containerID="cri-o://722fd4b0219dbc1893de5ddfff6bae8f2f9fc01b73ea47300dc2db9cdf75bb55" gracePeriod=600 Mar 20 07:51:46 crc kubenswrapper[5136]: I0320 07:51:46.388418 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="722fd4b0219dbc1893de5ddfff6bae8f2f9fc01b73ea47300dc2db9cdf75bb55" exitCode=0 Mar 20 07:51:46 crc kubenswrapper[5136]: I0320 07:51:46.388486 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"722fd4b0219dbc1893de5ddfff6bae8f2f9fc01b73ea47300dc2db9cdf75bb55"} Mar 20 07:51:46 crc kubenswrapper[5136]: I0320 07:51:46.388693 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e"} Mar 20 07:51:46 crc kubenswrapper[5136]: I0320 07:51:46.388709 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:52:00 crc kubenswrapper[5136]: I0320 07:52:00.159570 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566552-45w99"] Mar 20 07:52:00 crc kubenswrapper[5136]: E0320 07:52:00.160884 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e892786-304f-4449-8303-227a30b2af0c" containerName="oc" Mar 20 07:52:00 crc kubenswrapper[5136]: I0320 07:52:00.160907 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e892786-304f-4449-8303-227a30b2af0c" containerName="oc" Mar 20 07:52:00 crc kubenswrapper[5136]: I0320 07:52:00.161117 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e892786-304f-4449-8303-227a30b2af0c" containerName="oc" Mar 20 07:52:00 
crc kubenswrapper[5136]: I0320 07:52:00.163783 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566552-45w99" Mar 20 07:52:00 crc kubenswrapper[5136]: I0320 07:52:00.167576 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:52:00 crc kubenswrapper[5136]: I0320 07:52:00.167906 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:52:00 crc kubenswrapper[5136]: I0320 07:52:00.168227 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:52:00 crc kubenswrapper[5136]: I0320 07:52:00.171965 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566552-45w99"] Mar 20 07:52:00 crc kubenswrapper[5136]: I0320 07:52:00.245889 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frbpk\" (UniqueName: \"kubernetes.io/projected/680d027e-ec7b-41fa-928c-826f0968c6f2-kube-api-access-frbpk\") pod \"auto-csr-approver-29566552-45w99\" (UID: \"680d027e-ec7b-41fa-928c-826f0968c6f2\") " pod="openshift-infra/auto-csr-approver-29566552-45w99" Mar 20 07:52:00 crc kubenswrapper[5136]: I0320 07:52:00.347524 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frbpk\" (UniqueName: \"kubernetes.io/projected/680d027e-ec7b-41fa-928c-826f0968c6f2-kube-api-access-frbpk\") pod \"auto-csr-approver-29566552-45w99\" (UID: \"680d027e-ec7b-41fa-928c-826f0968c6f2\") " pod="openshift-infra/auto-csr-approver-29566552-45w99" Mar 20 07:52:00 crc kubenswrapper[5136]: I0320 07:52:00.379255 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frbpk\" (UniqueName: \"kubernetes.io/projected/680d027e-ec7b-41fa-928c-826f0968c6f2-kube-api-access-frbpk\") 
pod \"auto-csr-approver-29566552-45w99\" (UID: \"680d027e-ec7b-41fa-928c-826f0968c6f2\") " pod="openshift-infra/auto-csr-approver-29566552-45w99" Mar 20 07:52:00 crc kubenswrapper[5136]: I0320 07:52:00.492238 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566552-45w99" Mar 20 07:52:00 crc kubenswrapper[5136]: I0320 07:52:00.954577 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566552-45w99"] Mar 20 07:52:00 crc kubenswrapper[5136]: W0320 07:52:00.972005 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod680d027e_ec7b_41fa_928c_826f0968c6f2.slice/crio-c717f439bca7d009c7c1f1ea3a9a8dc4869c462184b5bee11ccec048399262c9 WatchSource:0}: Error finding container c717f439bca7d009c7c1f1ea3a9a8dc4869c462184b5bee11ccec048399262c9: Status 404 returned error can't find the container with id c717f439bca7d009c7c1f1ea3a9a8dc4869c462184b5bee11ccec048399262c9 Mar 20 07:52:01 crc kubenswrapper[5136]: I0320 07:52:01.541093 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566552-45w99" event={"ID":"680d027e-ec7b-41fa-928c-826f0968c6f2","Type":"ContainerStarted","Data":"c717f439bca7d009c7c1f1ea3a9a8dc4869c462184b5bee11ccec048399262c9"} Mar 20 07:52:02 crc kubenswrapper[5136]: I0320 07:52:02.554718 5136 generic.go:334] "Generic (PLEG): container finished" podID="680d027e-ec7b-41fa-928c-826f0968c6f2" containerID="850f029af670145399cc93675607b8410dbf5d367cbba9e2397a2a62aff8327a" exitCode=0 Mar 20 07:52:02 crc kubenswrapper[5136]: I0320 07:52:02.554844 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566552-45w99" event={"ID":"680d027e-ec7b-41fa-928c-826f0968c6f2","Type":"ContainerDied","Data":"850f029af670145399cc93675607b8410dbf5d367cbba9e2397a2a62aff8327a"} Mar 20 07:52:03 crc kubenswrapper[5136]: 
I0320 07:52:03.881767 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566552-45w99" Mar 20 07:52:04 crc kubenswrapper[5136]: I0320 07:52:04.002326 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frbpk\" (UniqueName: \"kubernetes.io/projected/680d027e-ec7b-41fa-928c-826f0968c6f2-kube-api-access-frbpk\") pod \"680d027e-ec7b-41fa-928c-826f0968c6f2\" (UID: \"680d027e-ec7b-41fa-928c-826f0968c6f2\") " Mar 20 07:52:04 crc kubenswrapper[5136]: I0320 07:52:04.012507 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/680d027e-ec7b-41fa-928c-826f0968c6f2-kube-api-access-frbpk" (OuterVolumeSpecName: "kube-api-access-frbpk") pod "680d027e-ec7b-41fa-928c-826f0968c6f2" (UID: "680d027e-ec7b-41fa-928c-826f0968c6f2"). InnerVolumeSpecName "kube-api-access-frbpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:52:04 crc kubenswrapper[5136]: I0320 07:52:04.104612 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frbpk\" (UniqueName: \"kubernetes.io/projected/680d027e-ec7b-41fa-928c-826f0968c6f2-kube-api-access-frbpk\") on node \"crc\" DevicePath \"\"" Mar 20 07:52:04 crc kubenswrapper[5136]: I0320 07:52:04.580199 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566552-45w99" event={"ID":"680d027e-ec7b-41fa-928c-826f0968c6f2","Type":"ContainerDied","Data":"c717f439bca7d009c7c1f1ea3a9a8dc4869c462184b5bee11ccec048399262c9"} Mar 20 07:52:04 crc kubenswrapper[5136]: I0320 07:52:04.580292 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c717f439bca7d009c7c1f1ea3a9a8dc4869c462184b5bee11ccec048399262c9" Mar 20 07:52:04 crc kubenswrapper[5136]: I0320 07:52:04.580308 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566552-45w99" Mar 20 07:52:04 crc kubenswrapper[5136]: I0320 07:52:04.973488 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566546-hbdmv"] Mar 20 07:52:04 crc kubenswrapper[5136]: I0320 07:52:04.984665 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566546-hbdmv"] Mar 20 07:52:06 crc kubenswrapper[5136]: I0320 07:52:06.408720 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d740b018-8653-4631-8138-93e535687c7b" path="/var/lib/kubelet/pods/d740b018-8653-4631-8138-93e535687c7b/volumes" Mar 20 07:52:34 crc kubenswrapper[5136]: I0320 07:52:34.930414 5136 scope.go:117] "RemoveContainer" containerID="9bfd391ee5ff09e988d9f0f680d2e722fd7f235ba526ec5418b765f7a572ee8f" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.631117 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dmmv5"] Mar 20 07:53:16 crc kubenswrapper[5136]: E0320 07:53:16.632343 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="680d027e-ec7b-41fa-928c-826f0968c6f2" containerName="oc" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.632372 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="680d027e-ec7b-41fa-928c-826f0968c6f2" containerName="oc" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.632667 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="680d027e-ec7b-41fa-928c-826f0968c6f2" containerName="oc" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.634342 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dmmv5" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.646218 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dmmv5"] Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.680951 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-catalog-content\") pod \"community-operators-dmmv5\" (UID: \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\") " pod="openshift-marketplace/community-operators-dmmv5" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.681014 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-utilities\") pod \"community-operators-dmmv5\" (UID: \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\") " pod="openshift-marketplace/community-operators-dmmv5" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.681081 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92f6b\" (UniqueName: \"kubernetes.io/projected/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-kube-api-access-92f6b\") pod \"community-operators-dmmv5\" (UID: \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\") " pod="openshift-marketplace/community-operators-dmmv5" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.782228 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-catalog-content\") pod \"community-operators-dmmv5\" (UID: \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\") " pod="openshift-marketplace/community-operators-dmmv5" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.782310 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-utilities\") pod \"community-operators-dmmv5\" (UID: \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\") " pod="openshift-marketplace/community-operators-dmmv5" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.782388 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92f6b\" (UniqueName: \"kubernetes.io/projected/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-kube-api-access-92f6b\") pod \"community-operators-dmmv5\" (UID: \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\") " pod="openshift-marketplace/community-operators-dmmv5" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.782768 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-catalog-content\") pod \"community-operators-dmmv5\" (UID: \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\") " pod="openshift-marketplace/community-operators-dmmv5" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.782833 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-utilities\") pod \"community-operators-dmmv5\" (UID: \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\") " pod="openshift-marketplace/community-operators-dmmv5" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.813577 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92f6b\" (UniqueName: \"kubernetes.io/projected/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-kube-api-access-92f6b\") pod \"community-operators-dmmv5\" (UID: \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\") " pod="openshift-marketplace/community-operators-dmmv5" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.962833 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dmmv5" Mar 20 07:53:17 crc kubenswrapper[5136]: I0320 07:53:17.466642 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dmmv5"] Mar 20 07:53:18 crc kubenswrapper[5136]: I0320 07:53:18.174472 5136 generic.go:334] "Generic (PLEG): container finished" podID="bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" containerID="360129e0e8a017b4f0c343c930bc4a4f1e5b8b9f8a16f082a9c17bc91e39b677" exitCode=0 Mar 20 07:53:18 crc kubenswrapper[5136]: I0320 07:53:18.174523 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmmv5" event={"ID":"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b","Type":"ContainerDied","Data":"360129e0e8a017b4f0c343c930bc4a4f1e5b8b9f8a16f082a9c17bc91e39b677"} Mar 20 07:53:18 crc kubenswrapper[5136]: I0320 07:53:18.174729 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmmv5" event={"ID":"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b","Type":"ContainerStarted","Data":"cffe9a0608630e68ebe445b4b73ca250c67588ca57233ed2a5a8f8aeafc8a8ef"} Mar 20 07:53:22 crc kubenswrapper[5136]: E0320 07:53:22.988332 5136 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd7d7add_fc30_4efd_96dc_b253a6fd1b8b.slice/crio-72560082fe8c2073a778c54fa6e7b3019318a9079881095b3a52834527633ecd.scope\": RecentStats: unable to find data in memory cache]" Mar 20 07:53:23 crc kubenswrapper[5136]: I0320 07:53:23.212466 5136 generic.go:334] "Generic (PLEG): container finished" podID="bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" containerID="72560082fe8c2073a778c54fa6e7b3019318a9079881095b3a52834527633ecd" exitCode=0 Mar 20 07:53:23 crc kubenswrapper[5136]: I0320 07:53:23.212513 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmmv5" 
event={"ID":"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b","Type":"ContainerDied","Data":"72560082fe8c2073a778c54fa6e7b3019318a9079881095b3a52834527633ecd"} Mar 20 07:53:24 crc kubenswrapper[5136]: I0320 07:53:24.221361 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmmv5" event={"ID":"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b","Type":"ContainerStarted","Data":"433d8d118a822ff97efcfc1c8de93d44c32a5d6b9bf13a2db4bb7a4aa4753a0b"} Mar 20 07:53:24 crc kubenswrapper[5136]: I0320 07:53:24.246469 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dmmv5" podStartSLOduration=2.673528357 podStartE2EDuration="8.246451667s" podCreationTimestamp="2026-03-20 07:53:16 +0000 UTC" firstStartedPulling="2026-03-20 07:53:18.176676216 +0000 UTC m=+3830.435987367" lastFinishedPulling="2026-03-20 07:53:23.749599536 +0000 UTC m=+3836.008910677" observedRunningTime="2026-03-20 07:53:24.24527915 +0000 UTC m=+3836.504590301" watchObservedRunningTime="2026-03-20 07:53:24.246451667 +0000 UTC m=+3836.505762818" Mar 20 07:53:26 crc kubenswrapper[5136]: I0320 07:53:26.963144 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dmmv5" Mar 20 07:53:26 crc kubenswrapper[5136]: I0320 07:53:26.963442 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dmmv5" Mar 20 07:53:27 crc kubenswrapper[5136]: I0320 07:53:27.006437 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dmmv5" Mar 20 07:53:36 crc kubenswrapper[5136]: I0320 07:53:36.743511 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nnnhv"] Mar 20 07:53:36 crc kubenswrapper[5136]: I0320 07:53:36.745442 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nnnhv" Mar 20 07:53:36 crc kubenswrapper[5136]: I0320 07:53:36.764263 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nnnhv"] Mar 20 07:53:36 crc kubenswrapper[5136]: I0320 07:53:36.838379 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fnf5\" (UniqueName: \"kubernetes.io/projected/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-kube-api-access-7fnf5\") pod \"redhat-marketplace-nnnhv\" (UID: \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\") " pod="openshift-marketplace/redhat-marketplace-nnnhv" Mar 20 07:53:36 crc kubenswrapper[5136]: I0320 07:53:36.838439 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-catalog-content\") pod \"redhat-marketplace-nnnhv\" (UID: \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\") " pod="openshift-marketplace/redhat-marketplace-nnnhv" Mar 20 07:53:36 crc kubenswrapper[5136]: I0320 07:53:36.838461 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-utilities\") pod \"redhat-marketplace-nnnhv\" (UID: \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\") " pod="openshift-marketplace/redhat-marketplace-nnnhv" Mar 20 07:53:36 crc kubenswrapper[5136]: I0320 07:53:36.939576 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fnf5\" (UniqueName: \"kubernetes.io/projected/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-kube-api-access-7fnf5\") pod \"redhat-marketplace-nnnhv\" (UID: \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\") " pod="openshift-marketplace/redhat-marketplace-nnnhv" Mar 20 07:53:36 crc kubenswrapper[5136]: I0320 07:53:36.940021 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-catalog-content\") pod \"redhat-marketplace-nnnhv\" (UID: \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\") " pod="openshift-marketplace/redhat-marketplace-nnnhv" Mar 20 07:53:36 crc kubenswrapper[5136]: I0320 07:53:36.940068 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-utilities\") pod \"redhat-marketplace-nnnhv\" (UID: \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\") " pod="openshift-marketplace/redhat-marketplace-nnnhv" Mar 20 07:53:36 crc kubenswrapper[5136]: I0320 07:53:36.940752 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-catalog-content\") pod \"redhat-marketplace-nnnhv\" (UID: \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\") " pod="openshift-marketplace/redhat-marketplace-nnnhv" Mar 20 07:53:36 crc kubenswrapper[5136]: I0320 07:53:36.940805 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-utilities\") pod \"redhat-marketplace-nnnhv\" (UID: \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\") " pod="openshift-marketplace/redhat-marketplace-nnnhv" Mar 20 07:53:36 crc kubenswrapper[5136]: I0320 07:53:36.964157 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fnf5\" (UniqueName: \"kubernetes.io/projected/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-kube-api-access-7fnf5\") pod \"redhat-marketplace-nnnhv\" (UID: \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\") " pod="openshift-marketplace/redhat-marketplace-nnnhv" Mar 20 07:53:37 crc kubenswrapper[5136]: I0320 07:53:37.023559 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-dmmv5" Mar 20 07:53:37 crc kubenswrapper[5136]: I0320 07:53:37.078560 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nnnhv" Mar 20 07:53:37 crc kubenswrapper[5136]: I0320 07:53:37.503668 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nnnhv"] Mar 20 07:53:38 crc kubenswrapper[5136]: I0320 07:53:38.313869 5136 generic.go:334] "Generic (PLEG): container finished" podID="89142574-8ae9-43b8-b0d1-9d6f6ede9e56" containerID="d0441dcdca78dbcbcda92c889ca601e6e295851fa3ba79cb28e7d66595c0942c" exitCode=0 Mar 20 07:53:38 crc kubenswrapper[5136]: I0320 07:53:38.313973 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnnhv" event={"ID":"89142574-8ae9-43b8-b0d1-9d6f6ede9e56","Type":"ContainerDied","Data":"d0441dcdca78dbcbcda92c889ca601e6e295851fa3ba79cb28e7d66595c0942c"} Mar 20 07:53:38 crc kubenswrapper[5136]: I0320 07:53:38.314162 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnnhv" event={"ID":"89142574-8ae9-43b8-b0d1-9d6f6ede9e56","Type":"ContainerStarted","Data":"b2ec866fc23c1118a0f35606e64cb40fc45b5f6c1417cf766a02b469d9ead578"} Mar 20 07:53:39 crc kubenswrapper[5136]: I0320 07:53:39.321699 5136 generic.go:334] "Generic (PLEG): container finished" podID="89142574-8ae9-43b8-b0d1-9d6f6ede9e56" containerID="6c9682d7ed82e39830b078477fbfae2964caa0c11c3dcecb2022b793f4cec1fe" exitCode=0 Mar 20 07:53:39 crc kubenswrapper[5136]: I0320 07:53:39.321735 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnnhv" event={"ID":"89142574-8ae9-43b8-b0d1-9d6f6ede9e56","Type":"ContainerDied","Data":"6c9682d7ed82e39830b078477fbfae2964caa0c11c3dcecb2022b793f4cec1fe"} Mar 20 07:53:41 crc kubenswrapper[5136]: I0320 07:53:41.340287 5136 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-nnnhv" event={"ID":"89142574-8ae9-43b8-b0d1-9d6f6ede9e56","Type":"ContainerStarted","Data":"5117a943701c0e9fd8b0866ceafbc1da72b8292a356c891c1a0051c9e5df9cea"} Mar 20 07:53:41 crc kubenswrapper[5136]: I0320 07:53:41.357450 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nnnhv" podStartSLOduration=3.9463824709999997 podStartE2EDuration="5.357434608s" podCreationTimestamp="2026-03-20 07:53:36 +0000 UTC" firstStartedPulling="2026-03-20 07:53:38.315562773 +0000 UTC m=+3850.574873924" lastFinishedPulling="2026-03-20 07:53:39.72661491 +0000 UTC m=+3851.985926061" observedRunningTime="2026-03-20 07:53:41.356419397 +0000 UTC m=+3853.615730558" watchObservedRunningTime="2026-03-20 07:53:41.357434608 +0000 UTC m=+3853.616745749" Mar 20 07:53:41 crc kubenswrapper[5136]: I0320 07:53:41.962133 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dmmv5"] Mar 20 07:53:42 crc kubenswrapper[5136]: I0320 07:53:42.740451 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qfgkr"] Mar 20 07:53:42 crc kubenswrapper[5136]: I0320 07:53:42.740816 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qfgkr" podUID="e1d2d341-1694-4f55-860a-46b11bac80c8" containerName="registry-server" containerID="cri-o://613d087bf39841470cd03f88985bdf919817769301d013c32de892e6d64fc871" gracePeriod=2 Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.173899 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qfgkr" Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.229638 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6x6v\" (UniqueName: \"kubernetes.io/projected/e1d2d341-1694-4f55-860a-46b11bac80c8-kube-api-access-r6x6v\") pod \"e1d2d341-1694-4f55-860a-46b11bac80c8\" (UID: \"e1d2d341-1694-4f55-860a-46b11bac80c8\") " Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.229715 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1d2d341-1694-4f55-860a-46b11bac80c8-catalog-content\") pod \"e1d2d341-1694-4f55-860a-46b11bac80c8\" (UID: \"e1d2d341-1694-4f55-860a-46b11bac80c8\") " Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.229790 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1d2d341-1694-4f55-860a-46b11bac80c8-utilities\") pod \"e1d2d341-1694-4f55-860a-46b11bac80c8\" (UID: \"e1d2d341-1694-4f55-860a-46b11bac80c8\") " Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.230469 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1d2d341-1694-4f55-860a-46b11bac80c8-utilities" (OuterVolumeSpecName: "utilities") pod "e1d2d341-1694-4f55-860a-46b11bac80c8" (UID: "e1d2d341-1694-4f55-860a-46b11bac80c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.245994 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2d341-1694-4f55-860a-46b11bac80c8-kube-api-access-r6x6v" (OuterVolumeSpecName: "kube-api-access-r6x6v") pod "e1d2d341-1694-4f55-860a-46b11bac80c8" (UID: "e1d2d341-1694-4f55-860a-46b11bac80c8"). InnerVolumeSpecName "kube-api-access-r6x6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.317479 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1d2d341-1694-4f55-860a-46b11bac80c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1d2d341-1694-4f55-860a-46b11bac80c8" (UID: "e1d2d341-1694-4f55-860a-46b11bac80c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.331202 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1d2d341-1694-4f55-860a-46b11bac80c8-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.331239 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6x6v\" (UniqueName: \"kubernetes.io/projected/e1d2d341-1694-4f55-860a-46b11bac80c8-kube-api-access-r6x6v\") on node \"crc\" DevicePath \"\"" Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.331255 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1d2d341-1694-4f55-860a-46b11bac80c8-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.355526 5136 generic.go:334] "Generic (PLEG): container finished" podID="e1d2d341-1694-4f55-860a-46b11bac80c8" containerID="613d087bf39841470cd03f88985bdf919817769301d013c32de892e6d64fc871" exitCode=0 Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.355564 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qfgkr" event={"ID":"e1d2d341-1694-4f55-860a-46b11bac80c8","Type":"ContainerDied","Data":"613d087bf39841470cd03f88985bdf919817769301d013c32de892e6d64fc871"} Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.355587 5136 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-qfgkr" event={"ID":"e1d2d341-1694-4f55-860a-46b11bac80c8","Type":"ContainerDied","Data":"a5974c459c2386be53440ce7c343a53cc05081c5b95afdcf1296d6de5d8a6e98"} Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.355603 5136 scope.go:117] "RemoveContainer" containerID="613d087bf39841470cd03f88985bdf919817769301d013c32de892e6d64fc871" Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.355702 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qfgkr" Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.390466 5136 scope.go:117] "RemoveContainer" containerID="6aa6be1edd96adead610b068e164a4ecd1bf0b9561c441981a9d8ad159f57697" Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.404859 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qfgkr"] Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.411210 5136 scope.go:117] "RemoveContainer" containerID="6c16257981aad8a2f2ba3ee029498da7aa41927021b136d319282cd85c6a814c" Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.412320 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qfgkr"] Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.433503 5136 scope.go:117] "RemoveContainer" containerID="613d087bf39841470cd03f88985bdf919817769301d013c32de892e6d64fc871" Mar 20 07:53:43 crc kubenswrapper[5136]: E0320 07:53:43.435065 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"613d087bf39841470cd03f88985bdf919817769301d013c32de892e6d64fc871\": container with ID starting with 613d087bf39841470cd03f88985bdf919817769301d013c32de892e6d64fc871 not found: ID does not exist" containerID="613d087bf39841470cd03f88985bdf919817769301d013c32de892e6d64fc871" Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 
07:53:43.435105 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"613d087bf39841470cd03f88985bdf919817769301d013c32de892e6d64fc871"} err="failed to get container status \"613d087bf39841470cd03f88985bdf919817769301d013c32de892e6d64fc871\": rpc error: code = NotFound desc = could not find container \"613d087bf39841470cd03f88985bdf919817769301d013c32de892e6d64fc871\": container with ID starting with 613d087bf39841470cd03f88985bdf919817769301d013c32de892e6d64fc871 not found: ID does not exist" Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.435136 5136 scope.go:117] "RemoveContainer" containerID="6aa6be1edd96adead610b068e164a4ecd1bf0b9561c441981a9d8ad159f57697" Mar 20 07:53:43 crc kubenswrapper[5136]: E0320 07:53:43.436303 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6aa6be1edd96adead610b068e164a4ecd1bf0b9561c441981a9d8ad159f57697\": container with ID starting with 6aa6be1edd96adead610b068e164a4ecd1bf0b9561c441981a9d8ad159f57697 not found: ID does not exist" containerID="6aa6be1edd96adead610b068e164a4ecd1bf0b9561c441981a9d8ad159f57697" Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.436344 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aa6be1edd96adead610b068e164a4ecd1bf0b9561c441981a9d8ad159f57697"} err="failed to get container status \"6aa6be1edd96adead610b068e164a4ecd1bf0b9561c441981a9d8ad159f57697\": rpc error: code = NotFound desc = could not find container \"6aa6be1edd96adead610b068e164a4ecd1bf0b9561c441981a9d8ad159f57697\": container with ID starting with 6aa6be1edd96adead610b068e164a4ecd1bf0b9561c441981a9d8ad159f57697 not found: ID does not exist" Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.436370 5136 scope.go:117] "RemoveContainer" containerID="6c16257981aad8a2f2ba3ee029498da7aa41927021b136d319282cd85c6a814c" Mar 20 07:53:43 crc 
kubenswrapper[5136]: E0320 07:53:43.436673 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c16257981aad8a2f2ba3ee029498da7aa41927021b136d319282cd85c6a814c\": container with ID starting with 6c16257981aad8a2f2ba3ee029498da7aa41927021b136d319282cd85c6a814c not found: ID does not exist" containerID="6c16257981aad8a2f2ba3ee029498da7aa41927021b136d319282cd85c6a814c" Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.436694 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c16257981aad8a2f2ba3ee029498da7aa41927021b136d319282cd85c6a814c"} err="failed to get container status \"6c16257981aad8a2f2ba3ee029498da7aa41927021b136d319282cd85c6a814c\": rpc error: code = NotFound desc = could not find container \"6c16257981aad8a2f2ba3ee029498da7aa41927021b136d319282cd85c6a814c\": container with ID starting with 6c16257981aad8a2f2ba3ee029498da7aa41927021b136d319282cd85c6a814c not found: ID does not exist" Mar 20 07:53:44 crc kubenswrapper[5136]: I0320 07:53:44.404357 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2d341-1694-4f55-860a-46b11bac80c8" path="/var/lib/kubelet/pods/e1d2d341-1694-4f55-860a-46b11bac80c8/volumes" Mar 20 07:53:47 crc kubenswrapper[5136]: I0320 07:53:47.078738 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nnnhv" Mar 20 07:53:47 crc kubenswrapper[5136]: I0320 07:53:47.079264 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nnnhv" Mar 20 07:53:47 crc kubenswrapper[5136]: I0320 07:53:47.139143 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nnnhv" Mar 20 07:53:47 crc kubenswrapper[5136]: I0320 07:53:47.452718 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-nnnhv" Mar 20 07:53:49 crc kubenswrapper[5136]: I0320 07:53:49.391707 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nnnhv"] Mar 20 07:53:49 crc kubenswrapper[5136]: I0320 07:53:49.407439 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nnnhv" podUID="89142574-8ae9-43b8-b0d1-9d6f6ede9e56" containerName="registry-server" containerID="cri-o://5117a943701c0e9fd8b0866ceafbc1da72b8292a356c891c1a0051c9e5df9cea" gracePeriod=2 Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.337666 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nnnhv" Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.396750 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-catalog-content\") pod \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\" (UID: \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\") " Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.396960 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-utilities\") pod \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\" (UID: \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\") " Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.397006 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fnf5\" (UniqueName: \"kubernetes.io/projected/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-kube-api-access-7fnf5\") pod \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\" (UID: \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\") " Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.398174 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-utilities" (OuterVolumeSpecName: "utilities") pod "89142574-8ae9-43b8-b0d1-9d6f6ede9e56" (UID: "89142574-8ae9-43b8-b0d1-9d6f6ede9e56"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.405631 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-kube-api-access-7fnf5" (OuterVolumeSpecName: "kube-api-access-7fnf5") pod "89142574-8ae9-43b8-b0d1-9d6f6ede9e56" (UID: "89142574-8ae9-43b8-b0d1-9d6f6ede9e56"). InnerVolumeSpecName "kube-api-access-7fnf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.414382 5136 generic.go:334] "Generic (PLEG): container finished" podID="89142574-8ae9-43b8-b0d1-9d6f6ede9e56" containerID="5117a943701c0e9fd8b0866ceafbc1da72b8292a356c891c1a0051c9e5df9cea" exitCode=0 Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.414474 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nnnhv" Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.436509 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnnhv" event={"ID":"89142574-8ae9-43b8-b0d1-9d6f6ede9e56","Type":"ContainerDied","Data":"5117a943701c0e9fd8b0866ceafbc1da72b8292a356c891c1a0051c9e5df9cea"} Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.436581 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnnhv" event={"ID":"89142574-8ae9-43b8-b0d1-9d6f6ede9e56","Type":"ContainerDied","Data":"b2ec866fc23c1118a0f35606e64cb40fc45b5f6c1417cf766a02b469d9ead578"} Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.436625 5136 scope.go:117] "RemoveContainer" containerID="5117a943701c0e9fd8b0866ceafbc1da72b8292a356c891c1a0051c9e5df9cea" Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.451426 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89142574-8ae9-43b8-b0d1-9d6f6ede9e56" (UID: "89142574-8ae9-43b8-b0d1-9d6f6ede9e56"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.459909 5136 scope.go:117] "RemoveContainer" containerID="6c9682d7ed82e39830b078477fbfae2964caa0c11c3dcecb2022b793f4cec1fe" Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.481242 5136 scope.go:117] "RemoveContainer" containerID="d0441dcdca78dbcbcda92c889ca601e6e295851fa3ba79cb28e7d66595c0942c" Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.498943 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.498994 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.499014 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fnf5\" (UniqueName: \"kubernetes.io/projected/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-kube-api-access-7fnf5\") on node \"crc\" DevicePath \"\"" Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.509763 5136 scope.go:117] "RemoveContainer" containerID="5117a943701c0e9fd8b0866ceafbc1da72b8292a356c891c1a0051c9e5df9cea" Mar 20 07:53:50 crc kubenswrapper[5136]: E0320 07:53:50.510681 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5117a943701c0e9fd8b0866ceafbc1da72b8292a356c891c1a0051c9e5df9cea\": container with ID starting with 5117a943701c0e9fd8b0866ceafbc1da72b8292a356c891c1a0051c9e5df9cea not found: ID does not exist" containerID="5117a943701c0e9fd8b0866ceafbc1da72b8292a356c891c1a0051c9e5df9cea" Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.510753 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5117a943701c0e9fd8b0866ceafbc1da72b8292a356c891c1a0051c9e5df9cea"} err="failed to get container status \"5117a943701c0e9fd8b0866ceafbc1da72b8292a356c891c1a0051c9e5df9cea\": rpc error: code = NotFound desc = could not find container \"5117a943701c0e9fd8b0866ceafbc1da72b8292a356c891c1a0051c9e5df9cea\": container with ID starting with 5117a943701c0e9fd8b0866ceafbc1da72b8292a356c891c1a0051c9e5df9cea not found: ID does not exist" Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.510799 5136 scope.go:117] "RemoveContainer" containerID="6c9682d7ed82e39830b078477fbfae2964caa0c11c3dcecb2022b793f4cec1fe" Mar 20 07:53:50 crc kubenswrapper[5136]: E0320 07:53:50.511351 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c9682d7ed82e39830b078477fbfae2964caa0c11c3dcecb2022b793f4cec1fe\": container with ID starting with 6c9682d7ed82e39830b078477fbfae2964caa0c11c3dcecb2022b793f4cec1fe not found: ID does not exist" containerID="6c9682d7ed82e39830b078477fbfae2964caa0c11c3dcecb2022b793f4cec1fe" Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.511396 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c9682d7ed82e39830b078477fbfae2964caa0c11c3dcecb2022b793f4cec1fe"} err="failed to get container status \"6c9682d7ed82e39830b078477fbfae2964caa0c11c3dcecb2022b793f4cec1fe\": rpc error: code = NotFound desc = could not find container \"6c9682d7ed82e39830b078477fbfae2964caa0c11c3dcecb2022b793f4cec1fe\": container with ID starting with 6c9682d7ed82e39830b078477fbfae2964caa0c11c3dcecb2022b793f4cec1fe not found: ID does not exist" Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.511426 5136 scope.go:117] "RemoveContainer" containerID="d0441dcdca78dbcbcda92c889ca601e6e295851fa3ba79cb28e7d66595c0942c" Mar 20 07:53:50 crc kubenswrapper[5136]: E0320 07:53:50.511719 5136 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d0441dcdca78dbcbcda92c889ca601e6e295851fa3ba79cb28e7d66595c0942c\": container with ID starting with d0441dcdca78dbcbcda92c889ca601e6e295851fa3ba79cb28e7d66595c0942c not found: ID does not exist" containerID="d0441dcdca78dbcbcda92c889ca601e6e295851fa3ba79cb28e7d66595c0942c" Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.511755 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0441dcdca78dbcbcda92c889ca601e6e295851fa3ba79cb28e7d66595c0942c"} err="failed to get container status \"d0441dcdca78dbcbcda92c889ca601e6e295851fa3ba79cb28e7d66595c0942c\": rpc error: code = NotFound desc = could not find container \"d0441dcdca78dbcbcda92c889ca601e6e295851fa3ba79cb28e7d66595c0942c\": container with ID starting with d0441dcdca78dbcbcda92c889ca601e6e295851fa3ba79cb28e7d66595c0942c not found: ID does not exist" Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.750713 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nnnhv"] Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.756035 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nnnhv"] Mar 20 07:53:52 crc kubenswrapper[5136]: I0320 07:53:52.415995 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89142574-8ae9-43b8-b0d1-9d6f6ede9e56" path="/var/lib/kubelet/pods/89142574-8ae9-43b8-b0d1-9d6f6ede9e56/volumes" Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.156681 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566554-rhdhc"] Mar 20 07:54:00 crc kubenswrapper[5136]: E0320 07:54:00.157642 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1d2d341-1694-4f55-860a-46b11bac80c8" containerName="registry-server" Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.157659 5136 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="e1d2d341-1694-4f55-860a-46b11bac80c8" containerName="registry-server" Mar 20 07:54:00 crc kubenswrapper[5136]: E0320 07:54:00.157676 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1d2d341-1694-4f55-860a-46b11bac80c8" containerName="extract-content" Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.157686 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1d2d341-1694-4f55-860a-46b11bac80c8" containerName="extract-content" Mar 20 07:54:00 crc kubenswrapper[5136]: E0320 07:54:00.157701 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1d2d341-1694-4f55-860a-46b11bac80c8" containerName="extract-utilities" Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.157711 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1d2d341-1694-4f55-860a-46b11bac80c8" containerName="extract-utilities" Mar 20 07:54:00 crc kubenswrapper[5136]: E0320 07:54:00.157724 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89142574-8ae9-43b8-b0d1-9d6f6ede9e56" containerName="registry-server" Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.157733 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="89142574-8ae9-43b8-b0d1-9d6f6ede9e56" containerName="registry-server" Mar 20 07:54:00 crc kubenswrapper[5136]: E0320 07:54:00.157749 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89142574-8ae9-43b8-b0d1-9d6f6ede9e56" containerName="extract-utilities" Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.157756 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="89142574-8ae9-43b8-b0d1-9d6f6ede9e56" containerName="extract-utilities" Mar 20 07:54:00 crc kubenswrapper[5136]: E0320 07:54:00.157777 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89142574-8ae9-43b8-b0d1-9d6f6ede9e56" containerName="extract-content" Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.157785 5136 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="89142574-8ae9-43b8-b0d1-9d6f6ede9e56" containerName="extract-content" Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.157964 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1d2d341-1694-4f55-860a-46b11bac80c8" containerName="registry-server" Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.157988 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="89142574-8ae9-43b8-b0d1-9d6f6ede9e56" containerName="registry-server" Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.158492 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566554-rhdhc" Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.161456 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.162294 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.162507 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.165341 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566554-rhdhc"] Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.238914 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhj6j\" (UniqueName: \"kubernetes.io/projected/b14c729c-040e-40a8-90bd-6310cf18d489-kube-api-access-xhj6j\") pod \"auto-csr-approver-29566554-rhdhc\" (UID: \"b14c729c-040e-40a8-90bd-6310cf18d489\") " pod="openshift-infra/auto-csr-approver-29566554-rhdhc" Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.340867 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xhj6j\" (UniqueName: \"kubernetes.io/projected/b14c729c-040e-40a8-90bd-6310cf18d489-kube-api-access-xhj6j\") pod \"auto-csr-approver-29566554-rhdhc\" (UID: \"b14c729c-040e-40a8-90bd-6310cf18d489\") " pod="openshift-infra/auto-csr-approver-29566554-rhdhc" Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.372471 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhj6j\" (UniqueName: \"kubernetes.io/projected/b14c729c-040e-40a8-90bd-6310cf18d489-kube-api-access-xhj6j\") pod \"auto-csr-approver-29566554-rhdhc\" (UID: \"b14c729c-040e-40a8-90bd-6310cf18d489\") " pod="openshift-infra/auto-csr-approver-29566554-rhdhc" Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.480297 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566554-rhdhc" Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.978856 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566554-rhdhc"] Mar 20 07:54:01 crc kubenswrapper[5136]: I0320 07:54:01.515775 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566554-rhdhc" event={"ID":"b14c729c-040e-40a8-90bd-6310cf18d489","Type":"ContainerStarted","Data":"d73ed75b4c4128e790ebd50d7f1ab0977dae93e0ef82580373c14c2d1ca8c84f"} Mar 20 07:54:02 crc kubenswrapper[5136]: I0320 07:54:02.526400 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566554-rhdhc" event={"ID":"b14c729c-040e-40a8-90bd-6310cf18d489","Type":"ContainerStarted","Data":"2bdd42fe67ed84f38314c250e3cb1b39e3c5ab87ea5a8e75695c9bc28f0ae60d"} Mar 20 07:54:02 crc kubenswrapper[5136]: I0320 07:54:02.541227 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566554-rhdhc" podStartSLOduration=1.37205477 podStartE2EDuration="2.541208693s" podCreationTimestamp="2026-03-20 07:54:00 
+0000 UTC" firstStartedPulling="2026-03-20 07:54:00.976321669 +0000 UTC m=+3873.235632840" lastFinishedPulling="2026-03-20 07:54:02.145475572 +0000 UTC m=+3874.404786763" observedRunningTime="2026-03-20 07:54:02.536499958 +0000 UTC m=+3874.795811129" watchObservedRunningTime="2026-03-20 07:54:02.541208693 +0000 UTC m=+3874.800519844" Mar 20 07:54:03 crc kubenswrapper[5136]: I0320 07:54:03.536763 5136 generic.go:334] "Generic (PLEG): container finished" podID="b14c729c-040e-40a8-90bd-6310cf18d489" containerID="2bdd42fe67ed84f38314c250e3cb1b39e3c5ab87ea5a8e75695c9bc28f0ae60d" exitCode=0 Mar 20 07:54:03 crc kubenswrapper[5136]: I0320 07:54:03.536844 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566554-rhdhc" event={"ID":"b14c729c-040e-40a8-90bd-6310cf18d489","Type":"ContainerDied","Data":"2bdd42fe67ed84f38314c250e3cb1b39e3c5ab87ea5a8e75695c9bc28f0ae60d"} Mar 20 07:54:04 crc kubenswrapper[5136]: I0320 07:54:04.863737 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566554-rhdhc" Mar 20 07:54:05 crc kubenswrapper[5136]: I0320 07:54:05.003619 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhj6j\" (UniqueName: \"kubernetes.io/projected/b14c729c-040e-40a8-90bd-6310cf18d489-kube-api-access-xhj6j\") pod \"b14c729c-040e-40a8-90bd-6310cf18d489\" (UID: \"b14c729c-040e-40a8-90bd-6310cf18d489\") " Mar 20 07:54:05 crc kubenswrapper[5136]: I0320 07:54:05.010273 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b14c729c-040e-40a8-90bd-6310cf18d489-kube-api-access-xhj6j" (OuterVolumeSpecName: "kube-api-access-xhj6j") pod "b14c729c-040e-40a8-90bd-6310cf18d489" (UID: "b14c729c-040e-40a8-90bd-6310cf18d489"). InnerVolumeSpecName "kube-api-access-xhj6j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:54:05 crc kubenswrapper[5136]: I0320 07:54:05.105571 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhj6j\" (UniqueName: \"kubernetes.io/projected/b14c729c-040e-40a8-90bd-6310cf18d489-kube-api-access-xhj6j\") on node \"crc\" DevicePath \"\"" Mar 20 07:54:05 crc kubenswrapper[5136]: I0320 07:54:05.555979 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566554-rhdhc" event={"ID":"b14c729c-040e-40a8-90bd-6310cf18d489","Type":"ContainerDied","Data":"d73ed75b4c4128e790ebd50d7f1ab0977dae93e0ef82580373c14c2d1ca8c84f"} Mar 20 07:54:05 crc kubenswrapper[5136]: I0320 07:54:05.556021 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d73ed75b4c4128e790ebd50d7f1ab0977dae93e0ef82580373c14c2d1ca8c84f" Mar 20 07:54:05 crc kubenswrapper[5136]: I0320 07:54:05.556082 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566554-rhdhc" Mar 20 07:54:05 crc kubenswrapper[5136]: I0320 07:54:05.635904 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566548-bbghf"] Mar 20 07:54:05 crc kubenswrapper[5136]: I0320 07:54:05.646453 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566548-bbghf"] Mar 20 07:54:06 crc kubenswrapper[5136]: I0320 07:54:06.413966 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eae2b10f-99a8-4ada-a8fb-d674d6e2dc66" path="/var/lib/kubelet/pods/eae2b10f-99a8-4ada-a8fb-d674d6e2dc66/volumes" Mar 20 07:54:15 crc kubenswrapper[5136]: I0320 07:54:15.821673 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 20 07:54:15 crc kubenswrapper[5136]: I0320 07:54:15.822319 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:54:35 crc kubenswrapper[5136]: I0320 07:54:35.082553 5136 scope.go:117] "RemoveContainer" containerID="eda4db7731b82a54ef6f8997e413d44c2ceb0549c49bbb5b7671591ccebd691e" Mar 20 07:54:45 crc kubenswrapper[5136]: I0320 07:54:45.821949 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:54:45 crc kubenswrapper[5136]: I0320 07:54:45.822456 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:55:15 crc kubenswrapper[5136]: I0320 07:55:15.821569 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:55:15 crc kubenswrapper[5136]: I0320 07:55:15.822876 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:55:15 crc kubenswrapper[5136]: I0320 07:55:15.822989 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 07:55:15 crc kubenswrapper[5136]: I0320 07:55:15.824084 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 07:55:15 crc kubenswrapper[5136]: I0320 07:55:15.824580 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" gracePeriod=600 Mar 20 07:55:15 crc kubenswrapper[5136]: E0320 07:55:15.959341 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:55:16 crc kubenswrapper[5136]: I0320 07:55:16.445210 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" exitCode=0 Mar 20 07:55:16 crc kubenswrapper[5136]: I0320 07:55:16.445274 5136 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e"} Mar 20 07:55:16 crc kubenswrapper[5136]: I0320 07:55:16.445326 5136 scope.go:117] "RemoveContainer" containerID="722fd4b0219dbc1893de5ddfff6bae8f2f9fc01b73ea47300dc2db9cdf75bb55" Mar 20 07:55:16 crc kubenswrapper[5136]: I0320 07:55:16.446096 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:55:16 crc kubenswrapper[5136]: E0320 07:55:16.446366 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:55:29 crc kubenswrapper[5136]: I0320 07:55:29.398043 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:55:29 crc kubenswrapper[5136]: E0320 07:55:29.399178 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:55:42 crc kubenswrapper[5136]: I0320 07:55:42.396778 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:55:42 crc kubenswrapper[5136]: E0320 
07:55:42.397528 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:55:54 crc kubenswrapper[5136]: I0320 07:55:54.396602 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:55:54 crc kubenswrapper[5136]: E0320 07:55:54.397452 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.153363 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566556-9vb6m"] Mar 20 07:56:00 crc kubenswrapper[5136]: E0320 07:56:00.154127 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b14c729c-040e-40a8-90bd-6310cf18d489" containerName="oc" Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.154147 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b14c729c-040e-40a8-90bd-6310cf18d489" containerName="oc" Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.154401 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="b14c729c-040e-40a8-90bd-6310cf18d489" containerName="oc" Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.155040 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566556-9vb6m" Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.166579 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566556-9vb6m"] Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.201734 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.201810 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.201759 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.202567 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n2ng\" (UniqueName: \"kubernetes.io/projected/a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9-kube-api-access-4n2ng\") pod \"auto-csr-approver-29566556-9vb6m\" (UID: \"a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9\") " pod="openshift-infra/auto-csr-approver-29566556-9vb6m" Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.303795 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n2ng\" (UniqueName: \"kubernetes.io/projected/a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9-kube-api-access-4n2ng\") pod \"auto-csr-approver-29566556-9vb6m\" (UID: \"a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9\") " pod="openshift-infra/auto-csr-approver-29566556-9vb6m" Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.329628 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n2ng\" (UniqueName: \"kubernetes.io/projected/a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9-kube-api-access-4n2ng\") pod \"auto-csr-approver-29566556-9vb6m\" (UID: \"a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9\") " 
pod="openshift-infra/auto-csr-approver-29566556-9vb6m" Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.523734 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566556-9vb6m" Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.951329 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566556-9vb6m"] Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.953532 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 07:56:01 crc kubenswrapper[5136]: I0320 07:56:01.844860 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566556-9vb6m" event={"ID":"a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9","Type":"ContainerStarted","Data":"e0e6e27ddb12271dd3a1159302afff5c7ca7ef161d27a66de3658607b795fcc0"} Mar 20 07:56:02 crc kubenswrapper[5136]: I0320 07:56:02.855041 5136 generic.go:334] "Generic (PLEG): container finished" podID="a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9" containerID="63ba059f75c6d4d3450d3ac5b012caecfb450fe5911e60bdfcbba855ebc6ef49" exitCode=0 Mar 20 07:56:02 crc kubenswrapper[5136]: I0320 07:56:02.855098 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566556-9vb6m" event={"ID":"a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9","Type":"ContainerDied","Data":"63ba059f75c6d4d3450d3ac5b012caecfb450fe5911e60bdfcbba855ebc6ef49"} Mar 20 07:56:04 crc kubenswrapper[5136]: I0320 07:56:04.253121 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566556-9vb6m" Mar 20 07:56:04 crc kubenswrapper[5136]: I0320 07:56:04.358233 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n2ng\" (UniqueName: \"kubernetes.io/projected/a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9-kube-api-access-4n2ng\") pod \"a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9\" (UID: \"a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9\") " Mar 20 07:56:04 crc kubenswrapper[5136]: I0320 07:56:04.365770 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9-kube-api-access-4n2ng" (OuterVolumeSpecName: "kube-api-access-4n2ng") pod "a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9" (UID: "a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9"). InnerVolumeSpecName "kube-api-access-4n2ng". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:56:04 crc kubenswrapper[5136]: I0320 07:56:04.459726 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n2ng\" (UniqueName: \"kubernetes.io/projected/a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9-kube-api-access-4n2ng\") on node \"crc\" DevicePath \"\"" Mar 20 07:56:04 crc kubenswrapper[5136]: I0320 07:56:04.879486 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566556-9vb6m" event={"ID":"a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9","Type":"ContainerDied","Data":"e0e6e27ddb12271dd3a1159302afff5c7ca7ef161d27a66de3658607b795fcc0"} Mar 20 07:56:04 crc kubenswrapper[5136]: I0320 07:56:04.879533 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0e6e27ddb12271dd3a1159302afff5c7ca7ef161d27a66de3658607b795fcc0" Mar 20 07:56:04 crc kubenswrapper[5136]: I0320 07:56:04.879564 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566556-9vb6m" Mar 20 07:56:05 crc kubenswrapper[5136]: I0320 07:56:05.320183 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566550-nxn8z"] Mar 20 07:56:05 crc kubenswrapper[5136]: I0320 07:56:05.328797 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566550-nxn8z"] Mar 20 07:56:06 crc kubenswrapper[5136]: I0320 07:56:06.407003 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e892786-304f-4449-8303-227a30b2af0c" path="/var/lib/kubelet/pods/9e892786-304f-4449-8303-227a30b2af0c/volumes" Mar 20 07:56:07 crc kubenswrapper[5136]: I0320 07:56:07.397412 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:56:07 crc kubenswrapper[5136]: E0320 07:56:07.397853 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:56:21 crc kubenswrapper[5136]: I0320 07:56:21.397102 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:56:21 crc kubenswrapper[5136]: E0320 07:56:21.397881 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" 
podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:56:34 crc kubenswrapper[5136]: I0320 07:56:34.396681 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:56:34 crc kubenswrapper[5136]: E0320 07:56:34.399727 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:56:35 crc kubenswrapper[5136]: I0320 07:56:35.193481 5136 scope.go:117] "RemoveContainer" containerID="fbf51c0f85e48cadf70be318d12d8502f4a21ef24eddd694f9b07eebf9064ae5" Mar 20 07:56:48 crc kubenswrapper[5136]: I0320 07:56:48.401443 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:56:48 crc kubenswrapper[5136]: E0320 07:56:48.402385 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:56:59 crc kubenswrapper[5136]: I0320 07:56:59.396769 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:56:59 crc kubenswrapper[5136]: E0320 07:56:59.397390 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.163150 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k9mh2"] Mar 20 07:57:01 crc kubenswrapper[5136]: E0320 07:57:01.163641 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9" containerName="oc" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.163662 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9" containerName="oc" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.163913 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9" containerName="oc" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.165464 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.180679 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9mh2"] Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.258829 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngdh9\" (UniqueName: \"kubernetes.io/projected/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-kube-api-access-ngdh9\") pod \"redhat-operators-k9mh2\" (UID: \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\") " pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.258880 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-utilities\") pod \"redhat-operators-k9mh2\" (UID: \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\") " pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.258916 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-catalog-content\") pod \"redhat-operators-k9mh2\" (UID: \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\") " pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.359942 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngdh9\" (UniqueName: \"kubernetes.io/projected/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-kube-api-access-ngdh9\") pod \"redhat-operators-k9mh2\" (UID: \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\") " pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.360381 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-utilities\") pod \"redhat-operators-k9mh2\" (UID: \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\") " pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.360568 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-catalog-content\") pod \"redhat-operators-k9mh2\" (UID: \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\") " pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.360929 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-utilities\") pod \"redhat-operators-k9mh2\" (UID: \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\") " pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.360985 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-catalog-content\") pod \"redhat-operators-k9mh2\" (UID: \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\") " pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.383566 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngdh9\" (UniqueName: \"kubernetes.io/projected/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-kube-api-access-ngdh9\") pod \"redhat-operators-k9mh2\" (UID: \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\") " pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.486139 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.913231 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9mh2"] Mar 20 07:57:02 crc kubenswrapper[5136]: I0320 07:57:02.343901 5136 generic.go:334] "Generic (PLEG): container finished" podID="baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" containerID="255db5f84bafe47d60d92f1f89264e14e0bff8d4d2fe29b9b5570626608637bb" exitCode=0 Mar 20 07:57:02 crc kubenswrapper[5136]: I0320 07:57:02.343970 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9mh2" event={"ID":"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec","Type":"ContainerDied","Data":"255db5f84bafe47d60d92f1f89264e14e0bff8d4d2fe29b9b5570626608637bb"} Mar 20 07:57:02 crc kubenswrapper[5136]: I0320 07:57:02.344015 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9mh2" event={"ID":"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec","Type":"ContainerStarted","Data":"babf061ccf17144d55f5b246b2717171d77d7c7349e6bdae8fdd7bb7671665b9"} Mar 20 07:57:03 crc kubenswrapper[5136]: I0320 07:57:03.361653 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9mh2" event={"ID":"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec","Type":"ContainerStarted","Data":"2c0ae4b61a8bf59c6aba839058cc8ca90c0e5cf521186d012fe522ac3ae7b9fe"} Mar 20 07:57:04 crc kubenswrapper[5136]: I0320 07:57:04.369673 5136 generic.go:334] "Generic (PLEG): container finished" podID="baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" containerID="2c0ae4b61a8bf59c6aba839058cc8ca90c0e5cf521186d012fe522ac3ae7b9fe" exitCode=0 Mar 20 07:57:04 crc kubenswrapper[5136]: I0320 07:57:04.369725 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9mh2" 
event={"ID":"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec","Type":"ContainerDied","Data":"2c0ae4b61a8bf59c6aba839058cc8ca90c0e5cf521186d012fe522ac3ae7b9fe"} Mar 20 07:57:05 crc kubenswrapper[5136]: I0320 07:57:05.378239 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9mh2" event={"ID":"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec","Type":"ContainerStarted","Data":"a100a445c5930c668abfe386184b0582de007629076620c9c6a676afe9aeb3d3"} Mar 20 07:57:05 crc kubenswrapper[5136]: I0320 07:57:05.397672 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k9mh2" podStartSLOduration=1.893868595 podStartE2EDuration="4.397653119s" podCreationTimestamp="2026-03-20 07:57:01 +0000 UTC" firstStartedPulling="2026-03-20 07:57:02.345575428 +0000 UTC m=+4054.604886579" lastFinishedPulling="2026-03-20 07:57:04.849359952 +0000 UTC m=+4057.108671103" observedRunningTime="2026-03-20 07:57:05.396184634 +0000 UTC m=+4057.655495795" watchObservedRunningTime="2026-03-20 07:57:05.397653119 +0000 UTC m=+4057.656964270" Mar 20 07:57:11 crc kubenswrapper[5136]: I0320 07:57:11.486575 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:11 crc kubenswrapper[5136]: I0320 07:57:11.487073 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:12 crc kubenswrapper[5136]: I0320 07:57:12.396738 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:57:12 crc kubenswrapper[5136]: E0320 07:57:12.397195 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:57:12 crc kubenswrapper[5136]: I0320 07:57:12.540664 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k9mh2" podUID="baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" containerName="registry-server" probeResult="failure" output=< Mar 20 07:57:12 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 07:57:12 crc kubenswrapper[5136]: > Mar 20 07:57:21 crc kubenswrapper[5136]: I0320 07:57:21.544380 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:21 crc kubenswrapper[5136]: I0320 07:57:21.636806 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:21 crc kubenswrapper[5136]: I0320 07:57:21.795117 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k9mh2"] Mar 20 07:57:23 crc kubenswrapper[5136]: I0320 07:57:23.397184 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:57:23 crc kubenswrapper[5136]: E0320 07:57:23.398643 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:57:23 crc kubenswrapper[5136]: I0320 07:57:23.557040 5136 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-k9mh2" podUID="baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" containerName="registry-server" containerID="cri-o://a100a445c5930c668abfe386184b0582de007629076620c9c6a676afe9aeb3d3" gracePeriod=2 Mar 20 07:57:23 crc kubenswrapper[5136]: I0320 07:57:23.960223 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.100521 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-utilities\") pod \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\" (UID: \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\") " Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.101029 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-catalog-content\") pod \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\" (UID: \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\") " Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.101097 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngdh9\" (UniqueName: \"kubernetes.io/projected/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-kube-api-access-ngdh9\") pod \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\" (UID: \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\") " Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.101580 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-utilities" (OuterVolumeSpecName: "utilities") pod "baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" (UID: "baeeebe9-6018-4b91-8b5d-a94e3b6e8cec"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.106299 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-kube-api-access-ngdh9" (OuterVolumeSpecName: "kube-api-access-ngdh9") pod "baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" (UID: "baeeebe9-6018-4b91-8b5d-a94e3b6e8cec"). InnerVolumeSpecName "kube-api-access-ngdh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.202508 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.202538 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngdh9\" (UniqueName: \"kubernetes.io/projected/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-kube-api-access-ngdh9\") on node \"crc\" DevicePath \"\"" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.252564 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" (UID: "baeeebe9-6018-4b91-8b5d-a94e3b6e8cec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.303772 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.563632 5136 generic.go:334] "Generic (PLEG): container finished" podID="baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" containerID="a100a445c5930c668abfe386184b0582de007629076620c9c6a676afe9aeb3d3" exitCode=0 Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.563668 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9mh2" event={"ID":"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec","Type":"ContainerDied","Data":"a100a445c5930c668abfe386184b0582de007629076620c9c6a676afe9aeb3d3"} Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.563684 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.563701 5136 scope.go:117] "RemoveContainer" containerID="a100a445c5930c668abfe386184b0582de007629076620c9c6a676afe9aeb3d3" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.563691 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9mh2" event={"ID":"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec","Type":"ContainerDied","Data":"babf061ccf17144d55f5b246b2717171d77d7c7349e6bdae8fdd7bb7671665b9"} Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.588244 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k9mh2"] Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.594502 5136 scope.go:117] "RemoveContainer" containerID="2c0ae4b61a8bf59c6aba839058cc8ca90c0e5cf521186d012fe522ac3ae7b9fe" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.595331 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k9mh2"] Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.628409 5136 scope.go:117] "RemoveContainer" containerID="255db5f84bafe47d60d92f1f89264e14e0bff8d4d2fe29b9b5570626608637bb" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.645870 5136 scope.go:117] "RemoveContainer" containerID="a100a445c5930c668abfe386184b0582de007629076620c9c6a676afe9aeb3d3" Mar 20 07:57:24 crc kubenswrapper[5136]: E0320 07:57:24.646325 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a100a445c5930c668abfe386184b0582de007629076620c9c6a676afe9aeb3d3\": container with ID starting with a100a445c5930c668abfe386184b0582de007629076620c9c6a676afe9aeb3d3 not found: ID does not exist" containerID="a100a445c5930c668abfe386184b0582de007629076620c9c6a676afe9aeb3d3" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.646358 5136 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a100a445c5930c668abfe386184b0582de007629076620c9c6a676afe9aeb3d3"} err="failed to get container status \"a100a445c5930c668abfe386184b0582de007629076620c9c6a676afe9aeb3d3\": rpc error: code = NotFound desc = could not find container \"a100a445c5930c668abfe386184b0582de007629076620c9c6a676afe9aeb3d3\": container with ID starting with a100a445c5930c668abfe386184b0582de007629076620c9c6a676afe9aeb3d3 not found: ID does not exist" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.646386 5136 scope.go:117] "RemoveContainer" containerID="2c0ae4b61a8bf59c6aba839058cc8ca90c0e5cf521186d012fe522ac3ae7b9fe" Mar 20 07:57:24 crc kubenswrapper[5136]: E0320 07:57:24.646645 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c0ae4b61a8bf59c6aba839058cc8ca90c0e5cf521186d012fe522ac3ae7b9fe\": container with ID starting with 2c0ae4b61a8bf59c6aba839058cc8ca90c0e5cf521186d012fe522ac3ae7b9fe not found: ID does not exist" containerID="2c0ae4b61a8bf59c6aba839058cc8ca90c0e5cf521186d012fe522ac3ae7b9fe" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.646671 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c0ae4b61a8bf59c6aba839058cc8ca90c0e5cf521186d012fe522ac3ae7b9fe"} err="failed to get container status \"2c0ae4b61a8bf59c6aba839058cc8ca90c0e5cf521186d012fe522ac3ae7b9fe\": rpc error: code = NotFound desc = could not find container \"2c0ae4b61a8bf59c6aba839058cc8ca90c0e5cf521186d012fe522ac3ae7b9fe\": container with ID starting with 2c0ae4b61a8bf59c6aba839058cc8ca90c0e5cf521186d012fe522ac3ae7b9fe not found: ID does not exist" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.646689 5136 scope.go:117] "RemoveContainer" containerID="255db5f84bafe47d60d92f1f89264e14e0bff8d4d2fe29b9b5570626608637bb" Mar 20 07:57:24 crc kubenswrapper[5136]: E0320 
07:57:24.646923 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"255db5f84bafe47d60d92f1f89264e14e0bff8d4d2fe29b9b5570626608637bb\": container with ID starting with 255db5f84bafe47d60d92f1f89264e14e0bff8d4d2fe29b9b5570626608637bb not found: ID does not exist" containerID="255db5f84bafe47d60d92f1f89264e14e0bff8d4d2fe29b9b5570626608637bb" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.646950 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"255db5f84bafe47d60d92f1f89264e14e0bff8d4d2fe29b9b5570626608637bb"} err="failed to get container status \"255db5f84bafe47d60d92f1f89264e14e0bff8d4d2fe29b9b5570626608637bb\": rpc error: code = NotFound desc = could not find container \"255db5f84bafe47d60d92f1f89264e14e0bff8d4d2fe29b9b5570626608637bb\": container with ID starting with 255db5f84bafe47d60d92f1f89264e14e0bff8d4d2fe29b9b5570626608637bb not found: ID does not exist" Mar 20 07:57:26 crc kubenswrapper[5136]: I0320 07:57:26.405347 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" path="/var/lib/kubelet/pods/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec/volumes" Mar 20 07:57:34 crc kubenswrapper[5136]: I0320 07:57:34.397572 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:57:34 crc kubenswrapper[5136]: E0320 07:57:34.398466 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:57:49 crc kubenswrapper[5136]: I0320 07:57:49.396898 
5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:57:49 crc kubenswrapper[5136]: E0320 07:57:49.397564 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.161178 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566558-28xg4"] Mar 20 07:58:00 crc kubenswrapper[5136]: E0320 07:58:00.162469 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" containerName="extract-utilities" Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.162490 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" containerName="extract-utilities" Mar 20 07:58:00 crc kubenswrapper[5136]: E0320 07:58:00.162514 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" containerName="extract-content" Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.162523 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" containerName="extract-content" Mar 20 07:58:00 crc kubenswrapper[5136]: E0320 07:58:00.162557 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" containerName="registry-server" Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.162565 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" containerName="registry-server" Mar 20 07:58:00 crc 
kubenswrapper[5136]: I0320 07:58:00.162763 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" containerName="registry-server" Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.163496 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566558-28xg4" Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.167475 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.168079 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.168142 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.174469 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566558-28xg4"] Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.278462 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5466q\" (UniqueName: \"kubernetes.io/projected/86a36a1a-3cb0-4827-94dc-d0f12aaf385f-kube-api-access-5466q\") pod \"auto-csr-approver-29566558-28xg4\" (UID: \"86a36a1a-3cb0-4827-94dc-d0f12aaf385f\") " pod="openshift-infra/auto-csr-approver-29566558-28xg4" Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.379601 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5466q\" (UniqueName: \"kubernetes.io/projected/86a36a1a-3cb0-4827-94dc-d0f12aaf385f-kube-api-access-5466q\") pod \"auto-csr-approver-29566558-28xg4\" (UID: \"86a36a1a-3cb0-4827-94dc-d0f12aaf385f\") " pod="openshift-infra/auto-csr-approver-29566558-28xg4" Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.410925 
5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5466q\" (UniqueName: \"kubernetes.io/projected/86a36a1a-3cb0-4827-94dc-d0f12aaf385f-kube-api-access-5466q\") pod \"auto-csr-approver-29566558-28xg4\" (UID: \"86a36a1a-3cb0-4827-94dc-d0f12aaf385f\") " pod="openshift-infra/auto-csr-approver-29566558-28xg4" Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.499668 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566558-28xg4" Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.727863 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566558-28xg4"] Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.831889 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566558-28xg4" event={"ID":"86a36a1a-3cb0-4827-94dc-d0f12aaf385f","Type":"ContainerStarted","Data":"efd0c500f15a2b8eca0de3b8c0c4864bdbb59bca070f0d8489e35dbb0b9291fa"} Mar 20 07:58:02 crc kubenswrapper[5136]: I0320 07:58:02.396773 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:58:02 crc kubenswrapper[5136]: E0320 07:58:02.397178 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:58:02 crc kubenswrapper[5136]: I0320 07:58:02.848551 5136 generic.go:334] "Generic (PLEG): container finished" podID="86a36a1a-3cb0-4827-94dc-d0f12aaf385f" containerID="10b2eeb474ede75a881a1e488088999be16ed59aa15136e1ec1d51ce1d945aec" exitCode=0 Mar 20 07:58:02 crc 
kubenswrapper[5136]: I0320 07:58:02.848668 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566558-28xg4" event={"ID":"86a36a1a-3cb0-4827-94dc-d0f12aaf385f","Type":"ContainerDied","Data":"10b2eeb474ede75a881a1e488088999be16ed59aa15136e1ec1d51ce1d945aec"} Mar 20 07:58:04 crc kubenswrapper[5136]: I0320 07:58:04.261442 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566558-28xg4" Mar 20 07:58:04 crc kubenswrapper[5136]: I0320 07:58:04.445242 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5466q\" (UniqueName: \"kubernetes.io/projected/86a36a1a-3cb0-4827-94dc-d0f12aaf385f-kube-api-access-5466q\") pod \"86a36a1a-3cb0-4827-94dc-d0f12aaf385f\" (UID: \"86a36a1a-3cb0-4827-94dc-d0f12aaf385f\") " Mar 20 07:58:04 crc kubenswrapper[5136]: I0320 07:58:04.452151 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86a36a1a-3cb0-4827-94dc-d0f12aaf385f-kube-api-access-5466q" (OuterVolumeSpecName: "kube-api-access-5466q") pod "86a36a1a-3cb0-4827-94dc-d0f12aaf385f" (UID: "86a36a1a-3cb0-4827-94dc-d0f12aaf385f"). InnerVolumeSpecName "kube-api-access-5466q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:58:04 crc kubenswrapper[5136]: I0320 07:58:04.546996 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5466q\" (UniqueName: \"kubernetes.io/projected/86a36a1a-3cb0-4827-94dc-d0f12aaf385f-kube-api-access-5466q\") on node \"crc\" DevicePath \"\"" Mar 20 07:58:04 crc kubenswrapper[5136]: I0320 07:58:04.867404 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566558-28xg4" event={"ID":"86a36a1a-3cb0-4827-94dc-d0f12aaf385f","Type":"ContainerDied","Data":"efd0c500f15a2b8eca0de3b8c0c4864bdbb59bca070f0d8489e35dbb0b9291fa"} Mar 20 07:58:04 crc kubenswrapper[5136]: I0320 07:58:04.867463 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efd0c500f15a2b8eca0de3b8c0c4864bdbb59bca070f0d8489e35dbb0b9291fa" Mar 20 07:58:04 crc kubenswrapper[5136]: I0320 07:58:04.867470 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566558-28xg4" Mar 20 07:58:05 crc kubenswrapper[5136]: I0320 07:58:05.338182 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566552-45w99"] Mar 20 07:58:05 crc kubenswrapper[5136]: I0320 07:58:05.343050 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566552-45w99"] Mar 20 07:58:06 crc kubenswrapper[5136]: I0320 07:58:06.410445 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="680d027e-ec7b-41fa-928c-826f0968c6f2" path="/var/lib/kubelet/pods/680d027e-ec7b-41fa-928c-826f0968c6f2/volumes" Mar 20 07:58:14 crc kubenswrapper[5136]: I0320 07:58:14.397072 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:58:14 crc kubenswrapper[5136]: E0320 07:58:14.398039 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:58:29 crc kubenswrapper[5136]: I0320 07:58:29.397108 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:58:29 crc kubenswrapper[5136]: E0320 07:58:29.397922 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:58:35 crc kubenswrapper[5136]: I0320 07:58:35.302273 5136 scope.go:117] "RemoveContainer" containerID="850f029af670145399cc93675607b8410dbf5d367cbba9e2397a2a62aff8327a" Mar 20 07:58:43 crc kubenswrapper[5136]: I0320 07:58:43.396387 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:58:43 crc kubenswrapper[5136]: E0320 07:58:43.397295 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:58:57 crc kubenswrapper[5136]: I0320 07:58:57.397518 5136 scope.go:117] "RemoveContainer" 
containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:58:57 crc kubenswrapper[5136]: E0320 07:58:57.398638 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:59:08 crc kubenswrapper[5136]: I0320 07:59:08.396930 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:59:08 crc kubenswrapper[5136]: E0320 07:59:08.398051 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:59:22 crc kubenswrapper[5136]: I0320 07:59:22.396963 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:59:22 crc kubenswrapper[5136]: E0320 07:59:22.398061 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:59:36 crc kubenswrapper[5136]: I0320 07:59:36.396308 5136 scope.go:117] 
"RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:59:36 crc kubenswrapper[5136]: E0320 07:59:36.397007 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:59:51 crc kubenswrapper[5136]: I0320 07:59:51.397090 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:59:51 crc kubenswrapper[5136]: E0320 07:59:51.398095 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.146766 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566560-p6kgg"] Mar 20 08:00:00 crc kubenswrapper[5136]: E0320 08:00:00.148939 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86a36a1a-3cb0-4827-94dc-d0f12aaf385f" containerName="oc" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.149073 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="86a36a1a-3cb0-4827-94dc-d0f12aaf385f" containerName="oc" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.149340 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="86a36a1a-3cb0-4827-94dc-d0f12aaf385f" containerName="oc" Mar 
20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.150040 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566560-p6kgg" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.152051 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.152451 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.153481 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.173491 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl"] Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.174614 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.176190 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.176392 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.184556 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566560-p6kgg"] Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.188495 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl"] Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.253937 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp4fs\" (UniqueName: \"kubernetes.io/projected/dafdbb11-e22c-4545-8678-7757ef7e8605-kube-api-access-rp4fs\") pod \"auto-csr-approver-29566560-p6kgg\" (UID: \"dafdbb11-e22c-4545-8678-7757ef7e8605\") " pod="openshift-infra/auto-csr-approver-29566560-p6kgg" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.355662 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwsk9\" (UniqueName: \"kubernetes.io/projected/90ad33e9-cb6b-450c-9703-8d6e379f3075-kube-api-access-xwsk9\") pod \"collect-profiles-29566560-8gmtl\" (UID: \"90ad33e9-cb6b-450c-9703-8d6e379f3075\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.355731 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp4fs\" (UniqueName: 
\"kubernetes.io/projected/dafdbb11-e22c-4545-8678-7757ef7e8605-kube-api-access-rp4fs\") pod \"auto-csr-approver-29566560-p6kgg\" (UID: \"dafdbb11-e22c-4545-8678-7757ef7e8605\") " pod="openshift-infra/auto-csr-approver-29566560-p6kgg" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.355951 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90ad33e9-cb6b-450c-9703-8d6e379f3075-secret-volume\") pod \"collect-profiles-29566560-8gmtl\" (UID: \"90ad33e9-cb6b-450c-9703-8d6e379f3075\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.356051 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90ad33e9-cb6b-450c-9703-8d6e379f3075-config-volume\") pod \"collect-profiles-29566560-8gmtl\" (UID: \"90ad33e9-cb6b-450c-9703-8d6e379f3075\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.377676 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp4fs\" (UniqueName: \"kubernetes.io/projected/dafdbb11-e22c-4545-8678-7757ef7e8605-kube-api-access-rp4fs\") pod \"auto-csr-approver-29566560-p6kgg\" (UID: \"dafdbb11-e22c-4545-8678-7757ef7e8605\") " pod="openshift-infra/auto-csr-approver-29566560-p6kgg" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.457462 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90ad33e9-cb6b-450c-9703-8d6e379f3075-secret-volume\") pod \"collect-profiles-29566560-8gmtl\" (UID: \"90ad33e9-cb6b-450c-9703-8d6e379f3075\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.457531 
5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90ad33e9-cb6b-450c-9703-8d6e379f3075-config-volume\") pod \"collect-profiles-29566560-8gmtl\" (UID: \"90ad33e9-cb6b-450c-9703-8d6e379f3075\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.457588 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwsk9\" (UniqueName: \"kubernetes.io/projected/90ad33e9-cb6b-450c-9703-8d6e379f3075-kube-api-access-xwsk9\") pod \"collect-profiles-29566560-8gmtl\" (UID: \"90ad33e9-cb6b-450c-9703-8d6e379f3075\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.458803 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90ad33e9-cb6b-450c-9703-8d6e379f3075-config-volume\") pod \"collect-profiles-29566560-8gmtl\" (UID: \"90ad33e9-cb6b-450c-9703-8d6e379f3075\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.462364 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90ad33e9-cb6b-450c-9703-8d6e379f3075-secret-volume\") pod \"collect-profiles-29566560-8gmtl\" (UID: \"90ad33e9-cb6b-450c-9703-8d6e379f3075\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.474040 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwsk9\" (UniqueName: \"kubernetes.io/projected/90ad33e9-cb6b-450c-9703-8d6e379f3075-kube-api-access-xwsk9\") pod \"collect-profiles-29566560-8gmtl\" (UID: \"90ad33e9-cb6b-450c-9703-8d6e379f3075\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.514045 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566560-p6kgg" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.522023 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.918128 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl"] Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.972031 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566560-p6kgg"] Mar 20 08:00:01 crc kubenswrapper[5136]: I0320 08:00:01.835074 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566560-p6kgg" event={"ID":"dafdbb11-e22c-4545-8678-7757ef7e8605","Type":"ContainerStarted","Data":"835dcd63591d6d0b44066723750deb0b5ca26b1e8f791f70756132ab60105cdb"} Mar 20 08:00:01 crc kubenswrapper[5136]: I0320 08:00:01.837398 5136 generic.go:334] "Generic (PLEG): container finished" podID="90ad33e9-cb6b-450c-9703-8d6e379f3075" containerID="a9399ede282cd1d4b161abddeaa1193070be8003a67d2c8907749c2c5dadab78" exitCode=0 Mar 20 08:00:01 crc kubenswrapper[5136]: I0320 08:00:01.837503 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl" event={"ID":"90ad33e9-cb6b-450c-9703-8d6e379f3075","Type":"ContainerDied","Data":"a9399ede282cd1d4b161abddeaa1193070be8003a67d2c8907749c2c5dadab78"} Mar 20 08:00:01 crc kubenswrapper[5136]: I0320 08:00:01.837589 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl" 
event={"ID":"90ad33e9-cb6b-450c-9703-8d6e379f3075","Type":"ContainerStarted","Data":"f8e824a7bd5fbe483a1d56f307bdb3aaf7cb4a10c107a47ddbdc568bcad73fa5"} Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.138355 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl" Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.303842 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90ad33e9-cb6b-450c-9703-8d6e379f3075-secret-volume\") pod \"90ad33e9-cb6b-450c-9703-8d6e379f3075\" (UID: \"90ad33e9-cb6b-450c-9703-8d6e379f3075\") " Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.303887 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwsk9\" (UniqueName: \"kubernetes.io/projected/90ad33e9-cb6b-450c-9703-8d6e379f3075-kube-api-access-xwsk9\") pod \"90ad33e9-cb6b-450c-9703-8d6e379f3075\" (UID: \"90ad33e9-cb6b-450c-9703-8d6e379f3075\") " Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.303916 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90ad33e9-cb6b-450c-9703-8d6e379f3075-config-volume\") pod \"90ad33e9-cb6b-450c-9703-8d6e379f3075\" (UID: \"90ad33e9-cb6b-450c-9703-8d6e379f3075\") " Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.304717 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90ad33e9-cb6b-450c-9703-8d6e379f3075-config-volume" (OuterVolumeSpecName: "config-volume") pod "90ad33e9-cb6b-450c-9703-8d6e379f3075" (UID: "90ad33e9-cb6b-450c-9703-8d6e379f3075"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.309474 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90ad33e9-cb6b-450c-9703-8d6e379f3075-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "90ad33e9-cb6b-450c-9703-8d6e379f3075" (UID: "90ad33e9-cb6b-450c-9703-8d6e379f3075"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.311304 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90ad33e9-cb6b-450c-9703-8d6e379f3075-kube-api-access-xwsk9" (OuterVolumeSpecName: "kube-api-access-xwsk9") pod "90ad33e9-cb6b-450c-9703-8d6e379f3075" (UID: "90ad33e9-cb6b-450c-9703-8d6e379f3075"). InnerVolumeSpecName "kube-api-access-xwsk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.396693 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 08:00:03 crc kubenswrapper[5136]: E0320 08:00:03.396921 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.406431 5136 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90ad33e9-cb6b-450c-9703-8d6e379f3075-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.406475 5136 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-xwsk9\" (UniqueName: \"kubernetes.io/projected/90ad33e9-cb6b-450c-9703-8d6e379f3075-kube-api-access-xwsk9\") on node \"crc\" DevicePath \"\"" Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.406487 5136 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90ad33e9-cb6b-450c-9703-8d6e379f3075-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.853463 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl" event={"ID":"90ad33e9-cb6b-450c-9703-8d6e379f3075","Type":"ContainerDied","Data":"f8e824a7bd5fbe483a1d56f307bdb3aaf7cb4a10c107a47ddbdc568bcad73fa5"} Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.853510 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8e824a7bd5fbe483a1d56f307bdb3aaf7cb4a10c107a47ddbdc568bcad73fa5" Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.853541 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl" Mar 20 08:00:04 crc kubenswrapper[5136]: I0320 08:00:04.224426 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q"] Mar 20 08:00:04 crc kubenswrapper[5136]: I0320 08:00:04.231249 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q"] Mar 20 08:00:04 crc kubenswrapper[5136]: I0320 08:00:04.407534 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f40568b-2bbc-4d1e-b089-6e08e1eede4b" path="/var/lib/kubelet/pods/6f40568b-2bbc-4d1e-b089-6e08e1eede4b/volumes" Mar 20 08:00:08 crc kubenswrapper[5136]: I0320 08:00:08.887711 5136 generic.go:334] "Generic (PLEG): container finished" podID="dafdbb11-e22c-4545-8678-7757ef7e8605" containerID="12a5c143763826b6ba302aa9399c3eae56ceb49f8dbb078183073cdf280ba6a4" exitCode=0 Mar 20 08:00:08 crc kubenswrapper[5136]: I0320 08:00:08.887771 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566560-p6kgg" event={"ID":"dafdbb11-e22c-4545-8678-7757ef7e8605","Type":"ContainerDied","Data":"12a5c143763826b6ba302aa9399c3eae56ceb49f8dbb078183073cdf280ba6a4"} Mar 20 08:00:10 crc kubenswrapper[5136]: I0320 08:00:10.176305 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566560-p6kgg" Mar 20 08:00:10 crc kubenswrapper[5136]: I0320 08:00:10.299666 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp4fs\" (UniqueName: \"kubernetes.io/projected/dafdbb11-e22c-4545-8678-7757ef7e8605-kube-api-access-rp4fs\") pod \"dafdbb11-e22c-4545-8678-7757ef7e8605\" (UID: \"dafdbb11-e22c-4545-8678-7757ef7e8605\") " Mar 20 08:00:10 crc kubenswrapper[5136]: I0320 08:00:10.305761 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dafdbb11-e22c-4545-8678-7757ef7e8605-kube-api-access-rp4fs" (OuterVolumeSpecName: "kube-api-access-rp4fs") pod "dafdbb11-e22c-4545-8678-7757ef7e8605" (UID: "dafdbb11-e22c-4545-8678-7757ef7e8605"). InnerVolumeSpecName "kube-api-access-rp4fs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:00:10 crc kubenswrapper[5136]: I0320 08:00:10.402084 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp4fs\" (UniqueName: \"kubernetes.io/projected/dafdbb11-e22c-4545-8678-7757ef7e8605-kube-api-access-rp4fs\") on node \"crc\" DevicePath \"\"" Mar 20 08:00:10 crc kubenswrapper[5136]: I0320 08:00:10.908579 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566560-p6kgg" event={"ID":"dafdbb11-e22c-4545-8678-7757ef7e8605","Type":"ContainerDied","Data":"835dcd63591d6d0b44066723750deb0b5ca26b1e8f791f70756132ab60105cdb"} Mar 20 08:00:10 crc kubenswrapper[5136]: I0320 08:00:10.908636 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="835dcd63591d6d0b44066723750deb0b5ca26b1e8f791f70756132ab60105cdb" Mar 20 08:00:10 crc kubenswrapper[5136]: I0320 08:00:10.908792 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566560-p6kgg" Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.046517 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p9x88"] Mar 20 08:00:11 crc kubenswrapper[5136]: E0320 08:00:11.046961 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafdbb11-e22c-4545-8678-7757ef7e8605" containerName="oc" Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.046983 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafdbb11-e22c-4545-8678-7757ef7e8605" containerName="oc" Mar 20 08:00:11 crc kubenswrapper[5136]: E0320 08:00:11.049112 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90ad33e9-cb6b-450c-9703-8d6e379f3075" containerName="collect-profiles" Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.049144 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ad33e9-cb6b-450c-9703-8d6e379f3075" containerName="collect-profiles" Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.049422 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="90ad33e9-cb6b-450c-9703-8d6e379f3075" containerName="collect-profiles" Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.049450 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dafdbb11-e22c-4545-8678-7757ef7e8605" containerName="oc" Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.050695 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p9x88" Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.059989 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p9x88"] Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.212242 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb833208-918b-487d-925f-73b87fca3d3e-catalog-content\") pod \"certified-operators-p9x88\" (UID: \"bb833208-918b-487d-925f-73b87fca3d3e\") " pod="openshift-marketplace/certified-operators-p9x88" Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.212318 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb833208-918b-487d-925f-73b87fca3d3e-utilities\") pod \"certified-operators-p9x88\" (UID: \"bb833208-918b-487d-925f-73b87fca3d3e\") " pod="openshift-marketplace/certified-operators-p9x88" Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.212380 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4h8w\" (UniqueName: \"kubernetes.io/projected/bb833208-918b-487d-925f-73b87fca3d3e-kube-api-access-m4h8w\") pod \"certified-operators-p9x88\" (UID: \"bb833208-918b-487d-925f-73b87fca3d3e\") " pod="openshift-marketplace/certified-operators-p9x88" Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.238955 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566554-rhdhc"] Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.244199 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566554-rhdhc"] Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.313624 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-m4h8w\" (UniqueName: \"kubernetes.io/projected/bb833208-918b-487d-925f-73b87fca3d3e-kube-api-access-m4h8w\") pod \"certified-operators-p9x88\" (UID: \"bb833208-918b-487d-925f-73b87fca3d3e\") " pod="openshift-marketplace/certified-operators-p9x88" Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.313693 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb833208-918b-487d-925f-73b87fca3d3e-catalog-content\") pod \"certified-operators-p9x88\" (UID: \"bb833208-918b-487d-925f-73b87fca3d3e\") " pod="openshift-marketplace/certified-operators-p9x88" Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.313741 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb833208-918b-487d-925f-73b87fca3d3e-utilities\") pod \"certified-operators-p9x88\" (UID: \"bb833208-918b-487d-925f-73b87fca3d3e\") " pod="openshift-marketplace/certified-operators-p9x88" Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.314284 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb833208-918b-487d-925f-73b87fca3d3e-utilities\") pod \"certified-operators-p9x88\" (UID: \"bb833208-918b-487d-925f-73b87fca3d3e\") " pod="openshift-marketplace/certified-operators-p9x88" Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.314336 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb833208-918b-487d-925f-73b87fca3d3e-catalog-content\") pod \"certified-operators-p9x88\" (UID: \"bb833208-918b-487d-925f-73b87fca3d3e\") " pod="openshift-marketplace/certified-operators-p9x88" Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.330703 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4h8w\" (UniqueName: 
\"kubernetes.io/projected/bb833208-918b-487d-925f-73b87fca3d3e-kube-api-access-m4h8w\") pod \"certified-operators-p9x88\" (UID: \"bb833208-918b-487d-925f-73b87fca3d3e\") " pod="openshift-marketplace/certified-operators-p9x88" Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.371450 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p9x88" Mar 20 08:00:12 crc kubenswrapper[5136]: I0320 08:00:12.141540 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p9x88"] Mar 20 08:00:12 crc kubenswrapper[5136]: I0320 08:00:12.404740 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b14c729c-040e-40a8-90bd-6310cf18d489" path="/var/lib/kubelet/pods/b14c729c-040e-40a8-90bd-6310cf18d489/volumes" Mar 20 08:00:12 crc kubenswrapper[5136]: I0320 08:00:12.925460 5136 generic.go:334] "Generic (PLEG): container finished" podID="bb833208-918b-487d-925f-73b87fca3d3e" containerID="33e3b05e16201694865b412cb91484dbae7bafcab623f5cfe6792844fdb038dd" exitCode=0 Mar 20 08:00:12 crc kubenswrapper[5136]: I0320 08:00:12.925525 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9x88" event={"ID":"bb833208-918b-487d-925f-73b87fca3d3e","Type":"ContainerDied","Data":"33e3b05e16201694865b412cb91484dbae7bafcab623f5cfe6792844fdb038dd"} Mar 20 08:00:12 crc kubenswrapper[5136]: I0320 08:00:12.925565 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9x88" event={"ID":"bb833208-918b-487d-925f-73b87fca3d3e","Type":"ContainerStarted","Data":"bc724bad1e1e588917febe48ff97f1945a7640eb711953cf54a34630b4b5196b"} Mar 20 08:00:13 crc kubenswrapper[5136]: I0320 08:00:13.933203 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9x88" 
event={"ID":"bb833208-918b-487d-925f-73b87fca3d3e","Type":"ContainerStarted","Data":"9ed5747f15e4b3d95017f7afe35ded2154bbb52e7751d4fe5bb6ce65b32452c0"} Mar 20 08:00:14 crc kubenswrapper[5136]: I0320 08:00:14.942223 5136 generic.go:334] "Generic (PLEG): container finished" podID="bb833208-918b-487d-925f-73b87fca3d3e" containerID="9ed5747f15e4b3d95017f7afe35ded2154bbb52e7751d4fe5bb6ce65b32452c0" exitCode=0 Mar 20 08:00:14 crc kubenswrapper[5136]: I0320 08:00:14.942311 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9x88" event={"ID":"bb833208-918b-487d-925f-73b87fca3d3e","Type":"ContainerDied","Data":"9ed5747f15e4b3d95017f7afe35ded2154bbb52e7751d4fe5bb6ce65b32452c0"} Mar 20 08:00:15 crc kubenswrapper[5136]: I0320 08:00:15.396626 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 08:00:15 crc kubenswrapper[5136]: E0320 08:00:15.397148 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:00:15 crc kubenswrapper[5136]: I0320 08:00:15.952399 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9x88" event={"ID":"bb833208-918b-487d-925f-73b87fca3d3e","Type":"ContainerStarted","Data":"c576f8fb2b805a9e783735e7e64d718754594a1371c305ff23521222dc580312"} Mar 20 08:00:15 crc kubenswrapper[5136]: I0320 08:00:15.981641 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p9x88" podStartSLOduration=2.586404253 podStartE2EDuration="4.981618637s" 
podCreationTimestamp="2026-03-20 08:00:11 +0000 UTC" firstStartedPulling="2026-03-20 08:00:12.927373154 +0000 UTC m=+4245.186684305" lastFinishedPulling="2026-03-20 08:00:15.322587538 +0000 UTC m=+4247.581898689" observedRunningTime="2026-03-20 08:00:15.974425075 +0000 UTC m=+4248.233736216" watchObservedRunningTime="2026-03-20 08:00:15.981618637 +0000 UTC m=+4248.240929808" Mar 20 08:00:21 crc kubenswrapper[5136]: I0320 08:00:21.372689 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p9x88" Mar 20 08:00:21 crc kubenswrapper[5136]: I0320 08:00:21.373437 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p9x88" Mar 20 08:00:21 crc kubenswrapper[5136]: I0320 08:00:21.445215 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p9x88" Mar 20 08:00:22 crc kubenswrapper[5136]: I0320 08:00:22.078145 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p9x88" Mar 20 08:00:22 crc kubenswrapper[5136]: I0320 08:00:22.436100 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p9x88"] Mar 20 08:00:24 crc kubenswrapper[5136]: I0320 08:00:24.024982 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p9x88" podUID="bb833208-918b-487d-925f-73b87fca3d3e" containerName="registry-server" containerID="cri-o://c576f8fb2b805a9e783735e7e64d718754594a1371c305ff23521222dc580312" gracePeriod=2 Mar 20 08:00:24 crc kubenswrapper[5136]: I0320 08:00:24.494576 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p9x88" Mar 20 08:00:24 crc kubenswrapper[5136]: I0320 08:00:24.606489 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb833208-918b-487d-925f-73b87fca3d3e-catalog-content\") pod \"bb833208-918b-487d-925f-73b87fca3d3e\" (UID: \"bb833208-918b-487d-925f-73b87fca3d3e\") " Mar 20 08:00:24 crc kubenswrapper[5136]: I0320 08:00:24.606552 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4h8w\" (UniqueName: \"kubernetes.io/projected/bb833208-918b-487d-925f-73b87fca3d3e-kube-api-access-m4h8w\") pod \"bb833208-918b-487d-925f-73b87fca3d3e\" (UID: \"bb833208-918b-487d-925f-73b87fca3d3e\") " Mar 20 08:00:24 crc kubenswrapper[5136]: I0320 08:00:24.606697 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb833208-918b-487d-925f-73b87fca3d3e-utilities\") pod \"bb833208-918b-487d-925f-73b87fca3d3e\" (UID: \"bb833208-918b-487d-925f-73b87fca3d3e\") " Mar 20 08:00:24 crc kubenswrapper[5136]: I0320 08:00:24.608282 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb833208-918b-487d-925f-73b87fca3d3e-utilities" (OuterVolumeSpecName: "utilities") pod "bb833208-918b-487d-925f-73b87fca3d3e" (UID: "bb833208-918b-487d-925f-73b87fca3d3e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:00:24 crc kubenswrapper[5136]: I0320 08:00:24.612140 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb833208-918b-487d-925f-73b87fca3d3e-kube-api-access-m4h8w" (OuterVolumeSpecName: "kube-api-access-m4h8w") pod "bb833208-918b-487d-925f-73b87fca3d3e" (UID: "bb833208-918b-487d-925f-73b87fca3d3e"). InnerVolumeSpecName "kube-api-access-m4h8w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:00:24 crc kubenswrapper[5136]: I0320 08:00:24.692297 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb833208-918b-487d-925f-73b87fca3d3e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb833208-918b-487d-925f-73b87fca3d3e" (UID: "bb833208-918b-487d-925f-73b87fca3d3e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:00:24 crc kubenswrapper[5136]: I0320 08:00:24.707936 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb833208-918b-487d-925f-73b87fca3d3e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:00:24 crc kubenswrapper[5136]: I0320 08:00:24.707970 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4h8w\" (UniqueName: \"kubernetes.io/projected/bb833208-918b-487d-925f-73b87fca3d3e-kube-api-access-m4h8w\") on node \"crc\" DevicePath \"\"" Mar 20 08:00:24 crc kubenswrapper[5136]: I0320 08:00:24.707981 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb833208-918b-487d-925f-73b87fca3d3e-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.041334 5136 generic.go:334] "Generic (PLEG): container finished" podID="bb833208-918b-487d-925f-73b87fca3d3e" containerID="c576f8fb2b805a9e783735e7e64d718754594a1371c305ff23521222dc580312" exitCode=0 Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.041434 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9x88" event={"ID":"bb833208-918b-487d-925f-73b87fca3d3e","Type":"ContainerDied","Data":"c576f8fb2b805a9e783735e7e64d718754594a1371c305ff23521222dc580312"} Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.043601 5136 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-p9x88" event={"ID":"bb833208-918b-487d-925f-73b87fca3d3e","Type":"ContainerDied","Data":"bc724bad1e1e588917febe48ff97f1945a7640eb711953cf54a34630b4b5196b"} Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.043647 5136 scope.go:117] "RemoveContainer" containerID="c576f8fb2b805a9e783735e7e64d718754594a1371c305ff23521222dc580312" Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.041557 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p9x88" Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.088213 5136 scope.go:117] "RemoveContainer" containerID="9ed5747f15e4b3d95017f7afe35ded2154bbb52e7751d4fe5bb6ce65b32452c0" Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.088802 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p9x88"] Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.099141 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p9x88"] Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.121647 5136 scope.go:117] "RemoveContainer" containerID="33e3b05e16201694865b412cb91484dbae7bafcab623f5cfe6792844fdb038dd" Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.154229 5136 scope.go:117] "RemoveContainer" containerID="c576f8fb2b805a9e783735e7e64d718754594a1371c305ff23521222dc580312" Mar 20 08:00:25 crc kubenswrapper[5136]: E0320 08:00:25.154902 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c576f8fb2b805a9e783735e7e64d718754594a1371c305ff23521222dc580312\": container with ID starting with c576f8fb2b805a9e783735e7e64d718754594a1371c305ff23521222dc580312 not found: ID does not exist" containerID="c576f8fb2b805a9e783735e7e64d718754594a1371c305ff23521222dc580312" Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 
08:00:25.154951 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c576f8fb2b805a9e783735e7e64d718754594a1371c305ff23521222dc580312"} err="failed to get container status \"c576f8fb2b805a9e783735e7e64d718754594a1371c305ff23521222dc580312\": rpc error: code = NotFound desc = could not find container \"c576f8fb2b805a9e783735e7e64d718754594a1371c305ff23521222dc580312\": container with ID starting with c576f8fb2b805a9e783735e7e64d718754594a1371c305ff23521222dc580312 not found: ID does not exist" Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.154976 5136 scope.go:117] "RemoveContainer" containerID="9ed5747f15e4b3d95017f7afe35ded2154bbb52e7751d4fe5bb6ce65b32452c0" Mar 20 08:00:25 crc kubenswrapper[5136]: E0320 08:00:25.155408 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ed5747f15e4b3d95017f7afe35ded2154bbb52e7751d4fe5bb6ce65b32452c0\": container with ID starting with 9ed5747f15e4b3d95017f7afe35ded2154bbb52e7751d4fe5bb6ce65b32452c0 not found: ID does not exist" containerID="9ed5747f15e4b3d95017f7afe35ded2154bbb52e7751d4fe5bb6ce65b32452c0" Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.155588 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ed5747f15e4b3d95017f7afe35ded2154bbb52e7751d4fe5bb6ce65b32452c0"} err="failed to get container status \"9ed5747f15e4b3d95017f7afe35ded2154bbb52e7751d4fe5bb6ce65b32452c0\": rpc error: code = NotFound desc = could not find container \"9ed5747f15e4b3d95017f7afe35ded2154bbb52e7751d4fe5bb6ce65b32452c0\": container with ID starting with 9ed5747f15e4b3d95017f7afe35ded2154bbb52e7751d4fe5bb6ce65b32452c0 not found: ID does not exist" Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.155773 5136 scope.go:117] "RemoveContainer" containerID="33e3b05e16201694865b412cb91484dbae7bafcab623f5cfe6792844fdb038dd" Mar 20 08:00:25 crc 
kubenswrapper[5136]: E0320 08:00:25.156320 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33e3b05e16201694865b412cb91484dbae7bafcab623f5cfe6792844fdb038dd\": container with ID starting with 33e3b05e16201694865b412cb91484dbae7bafcab623f5cfe6792844fdb038dd not found: ID does not exist" containerID="33e3b05e16201694865b412cb91484dbae7bafcab623f5cfe6792844fdb038dd" Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.156377 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33e3b05e16201694865b412cb91484dbae7bafcab623f5cfe6792844fdb038dd"} err="failed to get container status \"33e3b05e16201694865b412cb91484dbae7bafcab623f5cfe6792844fdb038dd\": rpc error: code = NotFound desc = could not find container \"33e3b05e16201694865b412cb91484dbae7bafcab623f5cfe6792844fdb038dd\": container with ID starting with 33e3b05e16201694865b412cb91484dbae7bafcab623f5cfe6792844fdb038dd not found: ID does not exist" Mar 20 08:00:26 crc kubenswrapper[5136]: I0320 08:00:26.409807 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb833208-918b-487d-925f-73b87fca3d3e" path="/var/lib/kubelet/pods/bb833208-918b-487d-925f-73b87fca3d3e/volumes" Mar 20 08:00:27 crc kubenswrapper[5136]: I0320 08:00:27.397626 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 08:00:28 crc kubenswrapper[5136]: I0320 08:00:28.075884 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"5ac6f79e4add2f58b46aeafaf79304def26e975edac0dc7ec4b6cf1f00afef71"} Mar 20 08:00:35 crc kubenswrapper[5136]: I0320 08:00:35.408959 5136 scope.go:117] "RemoveContainer" containerID="2bdd42fe67ed84f38314c250e3cb1b39e3c5ab87ea5a8e75695c9bc28f0ae60d" Mar 20 08:00:35 
crc kubenswrapper[5136]: I0320 08:00:35.460349 5136 scope.go:117] "RemoveContainer" containerID="9f24a13849a44546b978a1e086eb14881e8d529298f6ffe2023d8ef7f1bdc4c6" Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.166863 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566562-r69l2"] Mar 20 08:02:00 crc kubenswrapper[5136]: E0320 08:02:00.170948 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb833208-918b-487d-925f-73b87fca3d3e" containerName="extract-utilities" Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.170987 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb833208-918b-487d-925f-73b87fca3d3e" containerName="extract-utilities" Mar 20 08:02:00 crc kubenswrapper[5136]: E0320 08:02:00.171046 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb833208-918b-487d-925f-73b87fca3d3e" containerName="registry-server" Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.171067 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb833208-918b-487d-925f-73b87fca3d3e" containerName="registry-server" Mar 20 08:02:00 crc kubenswrapper[5136]: E0320 08:02:00.171121 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb833208-918b-487d-925f-73b87fca3d3e" containerName="extract-content" Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.171140 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb833208-918b-487d-925f-73b87fca3d3e" containerName="extract-content" Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.171481 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb833208-918b-487d-925f-73b87fca3d3e" containerName="registry-server" Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.172510 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566562-r69l2" Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.174727 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566562-r69l2"] Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.175413 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.175516 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.178311 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.333665 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4pz6\" (UniqueName: \"kubernetes.io/projected/f60a5f62-51f3-48fa-b718-e55da57c2647-kube-api-access-d4pz6\") pod \"auto-csr-approver-29566562-r69l2\" (UID: \"f60a5f62-51f3-48fa-b718-e55da57c2647\") " pod="openshift-infra/auto-csr-approver-29566562-r69l2" Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.435643 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4pz6\" (UniqueName: \"kubernetes.io/projected/f60a5f62-51f3-48fa-b718-e55da57c2647-kube-api-access-d4pz6\") pod \"auto-csr-approver-29566562-r69l2\" (UID: \"f60a5f62-51f3-48fa-b718-e55da57c2647\") " pod="openshift-infra/auto-csr-approver-29566562-r69l2" Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.469207 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4pz6\" (UniqueName: \"kubernetes.io/projected/f60a5f62-51f3-48fa-b718-e55da57c2647-kube-api-access-d4pz6\") pod \"auto-csr-approver-29566562-r69l2\" (UID: \"f60a5f62-51f3-48fa-b718-e55da57c2647\") " 
pod="openshift-infra/auto-csr-approver-29566562-r69l2" Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.496691 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566562-r69l2" Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.943140 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566562-r69l2"] Mar 20 08:02:00 crc kubenswrapper[5136]: W0320 08:02:00.952006 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf60a5f62_51f3_48fa_b718_e55da57c2647.slice/crio-45b55944ae835cd02e21453db0cacefd735d38d31fe445287855cb2699111d78 WatchSource:0}: Error finding container 45b55944ae835cd02e21453db0cacefd735d38d31fe445287855cb2699111d78: Status 404 returned error can't find the container with id 45b55944ae835cd02e21453db0cacefd735d38d31fe445287855cb2699111d78 Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.956323 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 08:02:01 crc kubenswrapper[5136]: I0320 08:02:01.921665 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566562-r69l2" event={"ID":"f60a5f62-51f3-48fa-b718-e55da57c2647","Type":"ContainerStarted","Data":"45b55944ae835cd02e21453db0cacefd735d38d31fe445287855cb2699111d78"} Mar 20 08:02:02 crc kubenswrapper[5136]: I0320 08:02:02.929341 5136 generic.go:334] "Generic (PLEG): container finished" podID="f60a5f62-51f3-48fa-b718-e55da57c2647" containerID="58b402a854cc55b74a5a39d5f73121fda2fdd8d8cefb4c57e5aa94f8a9a79d4e" exitCode=0 Mar 20 08:02:02 crc kubenswrapper[5136]: I0320 08:02:02.929425 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566562-r69l2" 
event={"ID":"f60a5f62-51f3-48fa-b718-e55da57c2647","Type":"ContainerDied","Data":"58b402a854cc55b74a5a39d5f73121fda2fdd8d8cefb4c57e5aa94f8a9a79d4e"} Mar 20 08:02:04 crc kubenswrapper[5136]: I0320 08:02:04.285251 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566562-r69l2" Mar 20 08:02:04 crc kubenswrapper[5136]: I0320 08:02:04.401944 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4pz6\" (UniqueName: \"kubernetes.io/projected/f60a5f62-51f3-48fa-b718-e55da57c2647-kube-api-access-d4pz6\") pod \"f60a5f62-51f3-48fa-b718-e55da57c2647\" (UID: \"f60a5f62-51f3-48fa-b718-e55da57c2647\") " Mar 20 08:02:04 crc kubenswrapper[5136]: I0320 08:02:04.412223 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f60a5f62-51f3-48fa-b718-e55da57c2647-kube-api-access-d4pz6" (OuterVolumeSpecName: "kube-api-access-d4pz6") pod "f60a5f62-51f3-48fa-b718-e55da57c2647" (UID: "f60a5f62-51f3-48fa-b718-e55da57c2647"). InnerVolumeSpecName "kube-api-access-d4pz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:02:04 crc kubenswrapper[5136]: I0320 08:02:04.505711 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4pz6\" (UniqueName: \"kubernetes.io/projected/f60a5f62-51f3-48fa-b718-e55da57c2647-kube-api-access-d4pz6\") on node \"crc\" DevicePath \"\"" Mar 20 08:02:04 crc kubenswrapper[5136]: I0320 08:02:04.952836 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566562-r69l2" event={"ID":"f60a5f62-51f3-48fa-b718-e55da57c2647","Type":"ContainerDied","Data":"45b55944ae835cd02e21453db0cacefd735d38d31fe445287855cb2699111d78"} Mar 20 08:02:04 crc kubenswrapper[5136]: I0320 08:02:04.952873 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45b55944ae835cd02e21453db0cacefd735d38d31fe445287855cb2699111d78" Mar 20 08:02:04 crc kubenswrapper[5136]: I0320 08:02:04.952946 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566562-r69l2" Mar 20 08:02:05 crc kubenswrapper[5136]: I0320 08:02:05.364108 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566556-9vb6m"] Mar 20 08:02:05 crc kubenswrapper[5136]: I0320 08:02:05.372257 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566556-9vb6m"] Mar 20 08:02:06 crc kubenswrapper[5136]: I0320 08:02:06.413493 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9" path="/var/lib/kubelet/pods/a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9/volumes" Mar 20 08:02:35 crc kubenswrapper[5136]: I0320 08:02:35.559455 5136 scope.go:117] "RemoveContainer" containerID="63ba059f75c6d4d3450d3ac5b012caecfb450fe5911e60bdfcbba855ebc6ef49" Mar 20 08:02:45 crc kubenswrapper[5136]: I0320 08:02:45.822230 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:02:45 crc kubenswrapper[5136]: I0320 08:02:45.822942 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:03:15 crc kubenswrapper[5136]: I0320 08:03:15.822619 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:03:15 crc kubenswrapper[5136]: I0320 08:03:15.823551 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:03:45 crc kubenswrapper[5136]: I0320 08:03:45.822395 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:03:45 crc kubenswrapper[5136]: I0320 08:03:45.823187 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:03:45 crc kubenswrapper[5136]: I0320 08:03:45.823260 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 08:03:45 crc kubenswrapper[5136]: I0320 08:03:45.824349 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ac6f79e4add2f58b46aeafaf79304def26e975edac0dc7ec4b6cf1f00afef71"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 08:03:45 crc kubenswrapper[5136]: I0320 08:03:45.824451 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://5ac6f79e4add2f58b46aeafaf79304def26e975edac0dc7ec4b6cf1f00afef71" gracePeriod=600 Mar 20 08:03:46 crc kubenswrapper[5136]: I0320 08:03:46.826385 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="5ac6f79e4add2f58b46aeafaf79304def26e975edac0dc7ec4b6cf1f00afef71" exitCode=0 Mar 20 08:03:46 crc kubenswrapper[5136]: I0320 08:03:46.826459 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"5ac6f79e4add2f58b46aeafaf79304def26e975edac0dc7ec4b6cf1f00afef71"} Mar 20 08:03:46 crc kubenswrapper[5136]: I0320 08:03:46.826969 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" 
event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"} Mar 20 08:03:46 crc kubenswrapper[5136]: I0320 08:03:46.826990 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 08:03:50 crc kubenswrapper[5136]: I0320 08:03:50.923674 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9x9kk"] Mar 20 08:03:50 crc kubenswrapper[5136]: E0320 08:03:50.924504 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60a5f62-51f3-48fa-b718-e55da57c2647" containerName="oc" Mar 20 08:03:50 crc kubenswrapper[5136]: I0320 08:03:50.924519 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60a5f62-51f3-48fa-b718-e55da57c2647" containerName="oc" Mar 20 08:03:50 crc kubenswrapper[5136]: I0320 08:03:50.924749 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f60a5f62-51f3-48fa-b718-e55da57c2647" containerName="oc" Mar 20 08:03:50 crc kubenswrapper[5136]: I0320 08:03:50.925976 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:03:50 crc kubenswrapper[5136]: I0320 08:03:50.951629 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9x9kk"] Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.009919 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-utilities\") pod \"community-operators-9x9kk\" (UID: \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\") " pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.009959 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-catalog-content\") pod \"community-operators-9x9kk\" (UID: \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\") " pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.010089 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc945\" (UniqueName: \"kubernetes.io/projected/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-kube-api-access-dc945\") pod \"community-operators-9x9kk\" (UID: \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\") " pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.111045 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-utilities\") pod \"community-operators-9x9kk\" (UID: \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\") " pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.111086 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-catalog-content\") pod \"community-operators-9x9kk\" (UID: \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\") " pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.111120 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc945\" (UniqueName: \"kubernetes.io/projected/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-kube-api-access-dc945\") pod \"community-operators-9x9kk\" (UID: \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\") " pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.111698 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-catalog-content\") pod \"community-operators-9x9kk\" (UID: \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\") " pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.111762 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-utilities\") pod \"community-operators-9x9kk\" (UID: \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\") " pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.141677 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc945\" (UniqueName: \"kubernetes.io/projected/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-kube-api-access-dc945\") pod \"community-operators-9x9kk\" (UID: \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\") " pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.244793 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.522945 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9x9kk"] Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.866884 5136 generic.go:334] "Generic (PLEG): container finished" podID="c7cd59f8-3dfb-45b7-884b-eb0a7670011c" containerID="99c97a942ea9cf1301c4c8a6a841bdc5757faffdc4fa5859ba64cb1a66d1be04" exitCode=0 Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.866936 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x9kk" event={"ID":"c7cd59f8-3dfb-45b7-884b-eb0a7670011c","Type":"ContainerDied","Data":"99c97a942ea9cf1301c4c8a6a841bdc5757faffdc4fa5859ba64cb1a66d1be04"} Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.867154 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x9kk" event={"ID":"c7cd59f8-3dfb-45b7-884b-eb0a7670011c","Type":"ContainerStarted","Data":"6216dc20188501420e067c5dab2af6a0f3f6dc51d1d86c18f9beca6c561c1c90"} Mar 20 08:03:53 crc kubenswrapper[5136]: I0320 08:03:53.884824 5136 generic.go:334] "Generic (PLEG): container finished" podID="c7cd59f8-3dfb-45b7-884b-eb0a7670011c" containerID="72edf18f51e728a3b79d9223dff780bfb2f79ef0aba838ee4dcb50ad40f645d9" exitCode=0 Mar 20 08:03:53 crc kubenswrapper[5136]: I0320 08:03:53.884893 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x9kk" event={"ID":"c7cd59f8-3dfb-45b7-884b-eb0a7670011c","Type":"ContainerDied","Data":"72edf18f51e728a3b79d9223dff780bfb2f79ef0aba838ee4dcb50ad40f645d9"} Mar 20 08:03:54 crc kubenswrapper[5136]: I0320 08:03:54.894738 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x9kk" 
event={"ID":"c7cd59f8-3dfb-45b7-884b-eb0a7670011c","Type":"ContainerStarted","Data":"a71c4d92ee8295add98c8af5e44a282bbb882f1a53f8bbedc3b6b91128e98ee1"} Mar 20 08:03:54 crc kubenswrapper[5136]: I0320 08:03:54.920844 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9x9kk" podStartSLOduration=2.5020369159999998 podStartE2EDuration="4.920806018s" podCreationTimestamp="2026-03-20 08:03:50 +0000 UTC" firstStartedPulling="2026-03-20 08:03:51.868259518 +0000 UTC m=+4464.127570699" lastFinishedPulling="2026-03-20 08:03:54.28702862 +0000 UTC m=+4466.546339801" observedRunningTime="2026-03-20 08:03:54.912752439 +0000 UTC m=+4467.172063610" watchObservedRunningTime="2026-03-20 08:03:54.920806018 +0000 UTC m=+4467.180117169" Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.040542 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vf4gp"] Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.043692 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.059553 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf4gp"] Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.105419 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qdjt\" (UniqueName: \"kubernetes.io/projected/7312d03e-31ae-4c8a-95f6-23325b107124-kube-api-access-9qdjt\") pod \"redhat-marketplace-vf4gp\" (UID: \"7312d03e-31ae-4c8a-95f6-23325b107124\") " pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.105559 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7312d03e-31ae-4c8a-95f6-23325b107124-catalog-content\") pod \"redhat-marketplace-vf4gp\" (UID: \"7312d03e-31ae-4c8a-95f6-23325b107124\") " pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.105697 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7312d03e-31ae-4c8a-95f6-23325b107124-utilities\") pod \"redhat-marketplace-vf4gp\" (UID: \"7312d03e-31ae-4c8a-95f6-23325b107124\") " pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.206974 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7312d03e-31ae-4c8a-95f6-23325b107124-catalog-content\") pod \"redhat-marketplace-vf4gp\" (UID: \"7312d03e-31ae-4c8a-95f6-23325b107124\") " pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.207059 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7312d03e-31ae-4c8a-95f6-23325b107124-utilities\") pod \"redhat-marketplace-vf4gp\" (UID: \"7312d03e-31ae-4c8a-95f6-23325b107124\") " pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.207146 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qdjt\" (UniqueName: \"kubernetes.io/projected/7312d03e-31ae-4c8a-95f6-23325b107124-kube-api-access-9qdjt\") pod \"redhat-marketplace-vf4gp\" (UID: \"7312d03e-31ae-4c8a-95f6-23325b107124\") " pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.208631 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7312d03e-31ae-4c8a-95f6-23325b107124-catalog-content\") pod \"redhat-marketplace-vf4gp\" (UID: \"7312d03e-31ae-4c8a-95f6-23325b107124\") " pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.208776 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7312d03e-31ae-4c8a-95f6-23325b107124-utilities\") pod \"redhat-marketplace-vf4gp\" (UID: \"7312d03e-31ae-4c8a-95f6-23325b107124\") " pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.232745 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qdjt\" (UniqueName: \"kubernetes.io/projected/7312d03e-31ae-4c8a-95f6-23325b107124-kube-api-access-9qdjt\") pod \"redhat-marketplace-vf4gp\" (UID: \"7312d03e-31ae-4c8a-95f6-23325b107124\") " pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.380746 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.841767 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf4gp"] Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.916086 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf4gp" event={"ID":"7312d03e-31ae-4c8a-95f6-23325b107124","Type":"ContainerStarted","Data":"410338ce57cd4886948cba2241cb257aebf7cd2afe0af4e84e8032565e4f8b0a"} Mar 20 08:03:58 crc kubenswrapper[5136]: I0320 08:03:58.927539 5136 generic.go:334] "Generic (PLEG): container finished" podID="7312d03e-31ae-4c8a-95f6-23325b107124" containerID="a3363b606e2f5613c8f8d29a0b1c9efe9df8d551a9b389199ac4fb0ae37a4837" exitCode=0 Mar 20 08:03:58 crc kubenswrapper[5136]: I0320 08:03:58.927612 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf4gp" event={"ID":"7312d03e-31ae-4c8a-95f6-23325b107124","Type":"ContainerDied","Data":"a3363b606e2f5613c8f8d29a0b1c9efe9df8d551a9b389199ac4fb0ae37a4837"} Mar 20 08:03:59 crc kubenswrapper[5136]: I0320 08:03:59.939804 5136 generic.go:334] "Generic (PLEG): container finished" podID="7312d03e-31ae-4c8a-95f6-23325b107124" containerID="761109f0c38a7ab24fca79b9e228f7017b4e64f03b974a6898dc7958e7d7a787" exitCode=0 Mar 20 08:03:59 crc kubenswrapper[5136]: I0320 08:03:59.939938 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf4gp" event={"ID":"7312d03e-31ae-4c8a-95f6-23325b107124","Type":"ContainerDied","Data":"761109f0c38a7ab24fca79b9e228f7017b4e64f03b974a6898dc7958e7d7a787"} Mar 20 08:04:00 crc kubenswrapper[5136]: I0320 08:04:00.149259 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566564-wps8c"] Mar 20 08:04:00 crc kubenswrapper[5136]: I0320 08:04:00.151159 5136 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566564-wps8c" Mar 20 08:04:00 crc kubenswrapper[5136]: I0320 08:04:00.156748 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:04:00 crc kubenswrapper[5136]: I0320 08:04:00.156936 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566564-wps8c"] Mar 20 08:04:00 crc kubenswrapper[5136]: I0320 08:04:00.156987 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:04:00 crc kubenswrapper[5136]: I0320 08:04:00.157770 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:04:00 crc kubenswrapper[5136]: I0320 08:04:00.256038 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb46t\" (UniqueName: \"kubernetes.io/projected/dd0441d6-4822-4c0e-b72c-b33d59e4a81b-kube-api-access-hb46t\") pod \"auto-csr-approver-29566564-wps8c\" (UID: \"dd0441d6-4822-4c0e-b72c-b33d59e4a81b\") " pod="openshift-infra/auto-csr-approver-29566564-wps8c" Mar 20 08:04:00 crc kubenswrapper[5136]: I0320 08:04:00.357441 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb46t\" (UniqueName: \"kubernetes.io/projected/dd0441d6-4822-4c0e-b72c-b33d59e4a81b-kube-api-access-hb46t\") pod \"auto-csr-approver-29566564-wps8c\" (UID: \"dd0441d6-4822-4c0e-b72c-b33d59e4a81b\") " pod="openshift-infra/auto-csr-approver-29566564-wps8c" Mar 20 08:04:00 crc kubenswrapper[5136]: I0320 08:04:00.388262 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb46t\" (UniqueName: \"kubernetes.io/projected/dd0441d6-4822-4c0e-b72c-b33d59e4a81b-kube-api-access-hb46t\") pod \"auto-csr-approver-29566564-wps8c\" (UID: \"dd0441d6-4822-4c0e-b72c-b33d59e4a81b\") 
" pod="openshift-infra/auto-csr-approver-29566564-wps8c" Mar 20 08:04:00 crc kubenswrapper[5136]: I0320 08:04:00.486875 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566564-wps8c" Mar 20 08:04:00 crc kubenswrapper[5136]: I0320 08:04:00.949372 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf4gp" event={"ID":"7312d03e-31ae-4c8a-95f6-23325b107124","Type":"ContainerStarted","Data":"03b0c1769d68977e9de32a9a3cdbca73bef1f14fbe272a3e4c4fe5a5d7d72578"} Mar 20 08:04:00 crc kubenswrapper[5136]: I0320 08:04:00.978694 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vf4gp" podStartSLOduration=2.519842918 podStartE2EDuration="3.978675761s" podCreationTimestamp="2026-03-20 08:03:57 +0000 UTC" firstStartedPulling="2026-03-20 08:03:58.930234462 +0000 UTC m=+4471.189545653" lastFinishedPulling="2026-03-20 08:04:00.389067305 +0000 UTC m=+4472.648378496" observedRunningTime="2026-03-20 08:04:00.970670793 +0000 UTC m=+4473.229981954" watchObservedRunningTime="2026-03-20 08:04:00.978675761 +0000 UTC m=+4473.237986912" Mar 20 08:04:01 crc kubenswrapper[5136]: I0320 08:04:01.007935 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566564-wps8c"] Mar 20 08:04:01 crc kubenswrapper[5136]: I0320 08:04:01.245168 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:04:01 crc kubenswrapper[5136]: I0320 08:04:01.245227 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:04:01 crc kubenswrapper[5136]: I0320 08:04:01.298899 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:04:01 crc kubenswrapper[5136]: I0320 
08:04:01.957520 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566564-wps8c" event={"ID":"dd0441d6-4822-4c0e-b72c-b33d59e4a81b","Type":"ContainerStarted","Data":"3a00424e61e7f40fc003f3dd7a302d720672be345dbaa81b236aed4563cf3e2d"} Mar 20 08:04:02 crc kubenswrapper[5136]: I0320 08:04:02.119080 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:04:02 crc kubenswrapper[5136]: I0320 08:04:02.967752 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd0441d6-4822-4c0e-b72c-b33d59e4a81b" containerID="a8db32019a3eb6483d295a328515205a1810920d8bfa5e500df3dffc05d44642" exitCode=0 Mar 20 08:04:02 crc kubenswrapper[5136]: I0320 08:04:02.967891 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566564-wps8c" event={"ID":"dd0441d6-4822-4c0e-b72c-b33d59e4a81b","Type":"ContainerDied","Data":"a8db32019a3eb6483d295a328515205a1810920d8bfa5e500df3dffc05d44642"} Mar 20 08:04:03 crc kubenswrapper[5136]: I0320 08:04:03.453380 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9x9kk"] Mar 20 08:04:03 crc kubenswrapper[5136]: I0320 08:04:03.990427 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9x9kk" podUID="c7cd59f8-3dfb-45b7-884b-eb0a7670011c" containerName="registry-server" containerID="cri-o://a71c4d92ee8295add98c8af5e44a282bbb882f1a53f8bbedc3b6b91128e98ee1" gracePeriod=2 Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.312067 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566564-wps8c" Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.429579 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.440253 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb46t\" (UniqueName: \"kubernetes.io/projected/dd0441d6-4822-4c0e-b72c-b33d59e4a81b-kube-api-access-hb46t\") pod \"dd0441d6-4822-4c0e-b72c-b33d59e4a81b\" (UID: \"dd0441d6-4822-4c0e-b72c-b33d59e4a81b\") " Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.487647 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd0441d6-4822-4c0e-b72c-b33d59e4a81b-kube-api-access-hb46t" (OuterVolumeSpecName: "kube-api-access-hb46t") pod "dd0441d6-4822-4c0e-b72c-b33d59e4a81b" (UID: "dd0441d6-4822-4c0e-b72c-b33d59e4a81b"). InnerVolumeSpecName "kube-api-access-hb46t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.541465 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-catalog-content\") pod \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\" (UID: \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\") " Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.541544 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc945\" (UniqueName: \"kubernetes.io/projected/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-kube-api-access-dc945\") pod \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\" (UID: \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\") " Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.541574 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-utilities\") pod \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\" (UID: \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\") " Mar 20 08:04:04 crc 
kubenswrapper[5136]: I0320 08:04:04.541928 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb46t\" (UniqueName: \"kubernetes.io/projected/dd0441d6-4822-4c0e-b72c-b33d59e4a81b-kube-api-access-hb46t\") on node \"crc\" DevicePath \"\"" Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.542910 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-utilities" (OuterVolumeSpecName: "utilities") pod "c7cd59f8-3dfb-45b7-884b-eb0a7670011c" (UID: "c7cd59f8-3dfb-45b7-884b-eb0a7670011c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.545920 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-kube-api-access-dc945" (OuterVolumeSpecName: "kube-api-access-dc945") pod "c7cd59f8-3dfb-45b7-884b-eb0a7670011c" (UID: "c7cd59f8-3dfb-45b7-884b-eb0a7670011c"). InnerVolumeSpecName "kube-api-access-dc945". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.598161 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7cd59f8-3dfb-45b7-884b-eb0a7670011c" (UID: "c7cd59f8-3dfb-45b7-884b-eb0a7670011c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.642873 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc945\" (UniqueName: \"kubernetes.io/projected/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-kube-api-access-dc945\") on node \"crc\" DevicePath \"\"" Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.642918 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.642931 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.999071 5136 generic.go:334] "Generic (PLEG): container finished" podID="c7cd59f8-3dfb-45b7-884b-eb0a7670011c" containerID="a71c4d92ee8295add98c8af5e44a282bbb882f1a53f8bbedc3b6b91128e98ee1" exitCode=0 Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.999125 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.999139 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x9kk" event={"ID":"c7cd59f8-3dfb-45b7-884b-eb0a7670011c","Type":"ContainerDied","Data":"a71c4d92ee8295add98c8af5e44a282bbb882f1a53f8bbedc3b6b91128e98ee1"} Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:04.999193 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x9kk" event={"ID":"c7cd59f8-3dfb-45b7-884b-eb0a7670011c","Type":"ContainerDied","Data":"6216dc20188501420e067c5dab2af6a0f3f6dc51d1d86c18f9beca6c561c1c90"} Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:04.999221 5136 scope.go:117] "RemoveContainer" containerID="a71c4d92ee8295add98c8af5e44a282bbb882f1a53f8bbedc3b6b91128e98ee1" Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.003116 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566564-wps8c" event={"ID":"dd0441d6-4822-4c0e-b72c-b33d59e4a81b","Type":"ContainerDied","Data":"3a00424e61e7f40fc003f3dd7a302d720672be345dbaa81b236aed4563cf3e2d"} Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.003157 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a00424e61e7f40fc003f3dd7a302d720672be345dbaa81b236aed4563cf3e2d" Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.003170 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566564-wps8c" Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.022711 5136 scope.go:117] "RemoveContainer" containerID="72edf18f51e728a3b79d9223dff780bfb2f79ef0aba838ee4dcb50ad40f645d9" Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.048847 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9x9kk"] Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.057611 5136 scope.go:117] "RemoveContainer" containerID="99c97a942ea9cf1301c4c8a6a841bdc5757faffdc4fa5859ba64cb1a66d1be04" Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.059723 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9x9kk"] Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.085743 5136 scope.go:117] "RemoveContainer" containerID="a71c4d92ee8295add98c8af5e44a282bbb882f1a53f8bbedc3b6b91128e98ee1" Mar 20 08:04:05 crc kubenswrapper[5136]: E0320 08:04:05.086247 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a71c4d92ee8295add98c8af5e44a282bbb882f1a53f8bbedc3b6b91128e98ee1\": container with ID starting with a71c4d92ee8295add98c8af5e44a282bbb882f1a53f8bbedc3b6b91128e98ee1 not found: ID does not exist" containerID="a71c4d92ee8295add98c8af5e44a282bbb882f1a53f8bbedc3b6b91128e98ee1" Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.086279 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a71c4d92ee8295add98c8af5e44a282bbb882f1a53f8bbedc3b6b91128e98ee1"} err="failed to get container status \"a71c4d92ee8295add98c8af5e44a282bbb882f1a53f8bbedc3b6b91128e98ee1\": rpc error: code = NotFound desc = could not find container \"a71c4d92ee8295add98c8af5e44a282bbb882f1a53f8bbedc3b6b91128e98ee1\": container with ID starting with a71c4d92ee8295add98c8af5e44a282bbb882f1a53f8bbedc3b6b91128e98ee1 not 
found: ID does not exist" Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.086298 5136 scope.go:117] "RemoveContainer" containerID="72edf18f51e728a3b79d9223dff780bfb2f79ef0aba838ee4dcb50ad40f645d9" Mar 20 08:04:05 crc kubenswrapper[5136]: E0320 08:04:05.086851 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72edf18f51e728a3b79d9223dff780bfb2f79ef0aba838ee4dcb50ad40f645d9\": container with ID starting with 72edf18f51e728a3b79d9223dff780bfb2f79ef0aba838ee4dcb50ad40f645d9 not found: ID does not exist" containerID="72edf18f51e728a3b79d9223dff780bfb2f79ef0aba838ee4dcb50ad40f645d9" Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.086910 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72edf18f51e728a3b79d9223dff780bfb2f79ef0aba838ee4dcb50ad40f645d9"} err="failed to get container status \"72edf18f51e728a3b79d9223dff780bfb2f79ef0aba838ee4dcb50ad40f645d9\": rpc error: code = NotFound desc = could not find container \"72edf18f51e728a3b79d9223dff780bfb2f79ef0aba838ee4dcb50ad40f645d9\": container with ID starting with 72edf18f51e728a3b79d9223dff780bfb2f79ef0aba838ee4dcb50ad40f645d9 not found: ID does not exist" Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.086949 5136 scope.go:117] "RemoveContainer" containerID="99c97a942ea9cf1301c4c8a6a841bdc5757faffdc4fa5859ba64cb1a66d1be04" Mar 20 08:04:05 crc kubenswrapper[5136]: E0320 08:04:05.087241 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99c97a942ea9cf1301c4c8a6a841bdc5757faffdc4fa5859ba64cb1a66d1be04\": container with ID starting with 99c97a942ea9cf1301c4c8a6a841bdc5757faffdc4fa5859ba64cb1a66d1be04 not found: ID does not exist" containerID="99c97a942ea9cf1301c4c8a6a841bdc5757faffdc4fa5859ba64cb1a66d1be04" Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.087312 5136 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99c97a942ea9cf1301c4c8a6a841bdc5757faffdc4fa5859ba64cb1a66d1be04"} err="failed to get container status \"99c97a942ea9cf1301c4c8a6a841bdc5757faffdc4fa5859ba64cb1a66d1be04\": rpc error: code = NotFound desc = could not find container \"99c97a942ea9cf1301c4c8a6a841bdc5757faffdc4fa5859ba64cb1a66d1be04\": container with ID starting with 99c97a942ea9cf1301c4c8a6a841bdc5757faffdc4fa5859ba64cb1a66d1be04 not found: ID does not exist" Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.386517 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566558-28xg4"] Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.391782 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566558-28xg4"] Mar 20 08:04:06 crc kubenswrapper[5136]: I0320 08:04:06.414761 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86a36a1a-3cb0-4827-94dc-d0f12aaf385f" path="/var/lib/kubelet/pods/86a36a1a-3cb0-4827-94dc-d0f12aaf385f/volumes" Mar 20 08:04:06 crc kubenswrapper[5136]: I0320 08:04:06.415834 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7cd59f8-3dfb-45b7-884b-eb0a7670011c" path="/var/lib/kubelet/pods/c7cd59f8-3dfb-45b7-884b-eb0a7670011c/volumes" Mar 20 08:04:07 crc kubenswrapper[5136]: I0320 08:04:07.381544 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:04:07 crc kubenswrapper[5136]: I0320 08:04:07.381956 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:04:07 crc kubenswrapper[5136]: I0320 08:04:07.428491 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:04:08 crc kubenswrapper[5136]: I0320 
08:04:08.080770 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:04:08 crc kubenswrapper[5136]: I0320 08:04:08.854750 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf4gp"] Mar 20 08:04:10 crc kubenswrapper[5136]: I0320 08:04:10.035912 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vf4gp" podUID="7312d03e-31ae-4c8a-95f6-23325b107124" containerName="registry-server" containerID="cri-o://03b0c1769d68977e9de32a9a3cdbca73bef1f14fbe272a3e4c4fe5a5d7d72578" gracePeriod=2 Mar 20 08:04:10 crc kubenswrapper[5136]: I0320 08:04:10.444538 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:04:10 crc kubenswrapper[5136]: I0320 08:04:10.520901 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7312d03e-31ae-4c8a-95f6-23325b107124-catalog-content\") pod \"7312d03e-31ae-4c8a-95f6-23325b107124\" (UID: \"7312d03e-31ae-4c8a-95f6-23325b107124\") " Mar 20 08:04:10 crc kubenswrapper[5136]: I0320 08:04:10.521050 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qdjt\" (UniqueName: \"kubernetes.io/projected/7312d03e-31ae-4c8a-95f6-23325b107124-kube-api-access-9qdjt\") pod \"7312d03e-31ae-4c8a-95f6-23325b107124\" (UID: \"7312d03e-31ae-4c8a-95f6-23325b107124\") " Mar 20 08:04:10 crc kubenswrapper[5136]: I0320 08:04:10.521156 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7312d03e-31ae-4c8a-95f6-23325b107124-utilities\") pod \"7312d03e-31ae-4c8a-95f6-23325b107124\" (UID: \"7312d03e-31ae-4c8a-95f6-23325b107124\") " Mar 20 08:04:10 crc kubenswrapper[5136]: I0320 
08:04:10.522497 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7312d03e-31ae-4c8a-95f6-23325b107124-utilities" (OuterVolumeSpecName: "utilities") pod "7312d03e-31ae-4c8a-95f6-23325b107124" (UID: "7312d03e-31ae-4c8a-95f6-23325b107124"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:04:10 crc kubenswrapper[5136]: I0320 08:04:10.530952 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7312d03e-31ae-4c8a-95f6-23325b107124-kube-api-access-9qdjt" (OuterVolumeSpecName: "kube-api-access-9qdjt") pod "7312d03e-31ae-4c8a-95f6-23325b107124" (UID: "7312d03e-31ae-4c8a-95f6-23325b107124"). InnerVolumeSpecName "kube-api-access-9qdjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:04:10 crc kubenswrapper[5136]: I0320 08:04:10.557773 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7312d03e-31ae-4c8a-95f6-23325b107124-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7312d03e-31ae-4c8a-95f6-23325b107124" (UID: "7312d03e-31ae-4c8a-95f6-23325b107124"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:04:10 crc kubenswrapper[5136]: I0320 08:04:10.623152 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7312d03e-31ae-4c8a-95f6-23325b107124-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:04:10 crc kubenswrapper[5136]: I0320 08:04:10.623186 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7312d03e-31ae-4c8a-95f6-23325b107124-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:04:10 crc kubenswrapper[5136]: I0320 08:04:10.623197 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qdjt\" (UniqueName: \"kubernetes.io/projected/7312d03e-31ae-4c8a-95f6-23325b107124-kube-api-access-9qdjt\") on node \"crc\" DevicePath \"\"" Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.051230 5136 generic.go:334] "Generic (PLEG): container finished" podID="7312d03e-31ae-4c8a-95f6-23325b107124" containerID="03b0c1769d68977e9de32a9a3cdbca73bef1f14fbe272a3e4c4fe5a5d7d72578" exitCode=0 Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.051280 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf4gp" event={"ID":"7312d03e-31ae-4c8a-95f6-23325b107124","Type":"ContainerDied","Data":"03b0c1769d68977e9de32a9a3cdbca73bef1f14fbe272a3e4c4fe5a5d7d72578"} Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.051313 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.051346 5136 scope.go:117] "RemoveContainer" containerID="03b0c1769d68977e9de32a9a3cdbca73bef1f14fbe272a3e4c4fe5a5d7d72578" Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.051332 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf4gp" event={"ID":"7312d03e-31ae-4c8a-95f6-23325b107124","Type":"ContainerDied","Data":"410338ce57cd4886948cba2241cb257aebf7cd2afe0af4e84e8032565e4f8b0a"} Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.086219 5136 scope.go:117] "RemoveContainer" containerID="761109f0c38a7ab24fca79b9e228f7017b4e64f03b974a6898dc7958e7d7a787" Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.109005 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf4gp"] Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.117197 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf4gp"] Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.132798 5136 scope.go:117] "RemoveContainer" containerID="a3363b606e2f5613c8f8d29a0b1c9efe9df8d551a9b389199ac4fb0ae37a4837" Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.169482 5136 scope.go:117] "RemoveContainer" containerID="03b0c1769d68977e9de32a9a3cdbca73bef1f14fbe272a3e4c4fe5a5d7d72578" Mar 20 08:04:11 crc kubenswrapper[5136]: E0320 08:04:11.170042 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03b0c1769d68977e9de32a9a3cdbca73bef1f14fbe272a3e4c4fe5a5d7d72578\": container with ID starting with 03b0c1769d68977e9de32a9a3cdbca73bef1f14fbe272a3e4c4fe5a5d7d72578 not found: ID does not exist" containerID="03b0c1769d68977e9de32a9a3cdbca73bef1f14fbe272a3e4c4fe5a5d7d72578" Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.170086 5136 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03b0c1769d68977e9de32a9a3cdbca73bef1f14fbe272a3e4c4fe5a5d7d72578"} err="failed to get container status \"03b0c1769d68977e9de32a9a3cdbca73bef1f14fbe272a3e4c4fe5a5d7d72578\": rpc error: code = NotFound desc = could not find container \"03b0c1769d68977e9de32a9a3cdbca73bef1f14fbe272a3e4c4fe5a5d7d72578\": container with ID starting with 03b0c1769d68977e9de32a9a3cdbca73bef1f14fbe272a3e4c4fe5a5d7d72578 not found: ID does not exist" Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.170133 5136 scope.go:117] "RemoveContainer" containerID="761109f0c38a7ab24fca79b9e228f7017b4e64f03b974a6898dc7958e7d7a787" Mar 20 08:04:11 crc kubenswrapper[5136]: E0320 08:04:11.170572 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"761109f0c38a7ab24fca79b9e228f7017b4e64f03b974a6898dc7958e7d7a787\": container with ID starting with 761109f0c38a7ab24fca79b9e228f7017b4e64f03b974a6898dc7958e7d7a787 not found: ID does not exist" containerID="761109f0c38a7ab24fca79b9e228f7017b4e64f03b974a6898dc7958e7d7a787" Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.170610 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"761109f0c38a7ab24fca79b9e228f7017b4e64f03b974a6898dc7958e7d7a787"} err="failed to get container status \"761109f0c38a7ab24fca79b9e228f7017b4e64f03b974a6898dc7958e7d7a787\": rpc error: code = NotFound desc = could not find container \"761109f0c38a7ab24fca79b9e228f7017b4e64f03b974a6898dc7958e7d7a787\": container with ID starting with 761109f0c38a7ab24fca79b9e228f7017b4e64f03b974a6898dc7958e7d7a787 not found: ID does not exist" Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.170624 5136 scope.go:117] "RemoveContainer" containerID="a3363b606e2f5613c8f8d29a0b1c9efe9df8d551a9b389199ac4fb0ae37a4837" Mar 20 08:04:11 crc kubenswrapper[5136]: E0320 
08:04:11.170952 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3363b606e2f5613c8f8d29a0b1c9efe9df8d551a9b389199ac4fb0ae37a4837\": container with ID starting with a3363b606e2f5613c8f8d29a0b1c9efe9df8d551a9b389199ac4fb0ae37a4837 not found: ID does not exist" containerID="a3363b606e2f5613c8f8d29a0b1c9efe9df8d551a9b389199ac4fb0ae37a4837" Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.171007 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3363b606e2f5613c8f8d29a0b1c9efe9df8d551a9b389199ac4fb0ae37a4837"} err="failed to get container status \"a3363b606e2f5613c8f8d29a0b1c9efe9df8d551a9b389199ac4fb0ae37a4837\": rpc error: code = NotFound desc = could not find container \"a3363b606e2f5613c8f8d29a0b1c9efe9df8d551a9b389199ac4fb0ae37a4837\": container with ID starting with a3363b606e2f5613c8f8d29a0b1c9efe9df8d551a9b389199ac4fb0ae37a4837 not found: ID does not exist" Mar 20 08:04:12 crc kubenswrapper[5136]: I0320 08:04:12.411252 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7312d03e-31ae-4c8a-95f6-23325b107124" path="/var/lib/kubelet/pods/7312d03e-31ae-4c8a-95f6-23325b107124/volumes" Mar 20 08:04:35 crc kubenswrapper[5136]: I0320 08:04:35.766076 5136 scope.go:117] "RemoveContainer" containerID="10b2eeb474ede75a881a1e488088999be16ed59aa15136e1ec1d51ce1d945aec" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.180415 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566566-tj6mv"] Mar 20 08:06:00 crc kubenswrapper[5136]: E0320 08:06:00.181713 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7cd59f8-3dfb-45b7-884b-eb0a7670011c" containerName="extract-utilities" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.181745 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7cd59f8-3dfb-45b7-884b-eb0a7670011c" 
containerName="extract-utilities" Mar 20 08:06:00 crc kubenswrapper[5136]: E0320 08:06:00.181778 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7cd59f8-3dfb-45b7-884b-eb0a7670011c" containerName="registry-server" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.181796 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7cd59f8-3dfb-45b7-884b-eb0a7670011c" containerName="registry-server" Mar 20 08:06:00 crc kubenswrapper[5136]: E0320 08:06:00.181881 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd0441d6-4822-4c0e-b72c-b33d59e4a81b" containerName="oc" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.181902 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd0441d6-4822-4c0e-b72c-b33d59e4a81b" containerName="oc" Mar 20 08:06:00 crc kubenswrapper[5136]: E0320 08:06:00.181923 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7312d03e-31ae-4c8a-95f6-23325b107124" containerName="extract-utilities" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.181941 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7312d03e-31ae-4c8a-95f6-23325b107124" containerName="extract-utilities" Mar 20 08:06:00 crc kubenswrapper[5136]: E0320 08:06:00.181972 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7312d03e-31ae-4c8a-95f6-23325b107124" containerName="extract-content" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.181988 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7312d03e-31ae-4c8a-95f6-23325b107124" containerName="extract-content" Mar 20 08:06:00 crc kubenswrapper[5136]: E0320 08:06:00.182031 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7cd59f8-3dfb-45b7-884b-eb0a7670011c" containerName="extract-content" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.182047 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7cd59f8-3dfb-45b7-884b-eb0a7670011c" containerName="extract-content" 
Mar 20 08:06:00 crc kubenswrapper[5136]: E0320 08:06:00.182073 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7312d03e-31ae-4c8a-95f6-23325b107124" containerName="registry-server" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.182092 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7312d03e-31ae-4c8a-95f6-23325b107124" containerName="registry-server" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.182457 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7312d03e-31ae-4c8a-95f6-23325b107124" containerName="registry-server" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.182494 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd0441d6-4822-4c0e-b72c-b33d59e4a81b" containerName="oc" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.182527 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7cd59f8-3dfb-45b7-884b-eb0a7670011c" containerName="registry-server" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.183572 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566566-tj6mv" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.191554 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.192288 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.192667 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.224695 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566566-tj6mv"] Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.251529 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdbbj\" (UniqueName: \"kubernetes.io/projected/1171863e-bf58-4961-a881-403e291cc93a-kube-api-access-fdbbj\") pod \"auto-csr-approver-29566566-tj6mv\" (UID: \"1171863e-bf58-4961-a881-403e291cc93a\") " pod="openshift-infra/auto-csr-approver-29566566-tj6mv" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.353323 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdbbj\" (UniqueName: \"kubernetes.io/projected/1171863e-bf58-4961-a881-403e291cc93a-kube-api-access-fdbbj\") pod \"auto-csr-approver-29566566-tj6mv\" (UID: \"1171863e-bf58-4961-a881-403e291cc93a\") " pod="openshift-infra/auto-csr-approver-29566566-tj6mv" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.381230 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdbbj\" (UniqueName: \"kubernetes.io/projected/1171863e-bf58-4961-a881-403e291cc93a-kube-api-access-fdbbj\") pod \"auto-csr-approver-29566566-tj6mv\" (UID: \"1171863e-bf58-4961-a881-403e291cc93a\") " 
pod="openshift-infra/auto-csr-approver-29566566-tj6mv" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.536797 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566566-tj6mv" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.959152 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566566-tj6mv"] Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.984013 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566566-tj6mv" event={"ID":"1171863e-bf58-4961-a881-403e291cc93a","Type":"ContainerStarted","Data":"bba6e3b4e06639d22811ed7b4a4ba11c4a1f53e2c440a6933e344c68966fb961"} Mar 20 08:06:03 crc kubenswrapper[5136]: I0320 08:06:03.000931 5136 generic.go:334] "Generic (PLEG): container finished" podID="1171863e-bf58-4961-a881-403e291cc93a" containerID="b1deef80cd1c3d1582469b2cb38e1b1a394ed2e7b6171fb2539451da0bf3a162" exitCode=0 Mar 20 08:06:03 crc kubenswrapper[5136]: I0320 08:06:03.000991 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566566-tj6mv" event={"ID":"1171863e-bf58-4961-a881-403e291cc93a","Type":"ContainerDied","Data":"b1deef80cd1c3d1582469b2cb38e1b1a394ed2e7b6171fb2539451da0bf3a162"} Mar 20 08:06:04 crc kubenswrapper[5136]: I0320 08:06:04.347212 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566566-tj6mv" Mar 20 08:06:04 crc kubenswrapper[5136]: I0320 08:06:04.409161 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdbbj\" (UniqueName: \"kubernetes.io/projected/1171863e-bf58-4961-a881-403e291cc93a-kube-api-access-fdbbj\") pod \"1171863e-bf58-4961-a881-403e291cc93a\" (UID: \"1171863e-bf58-4961-a881-403e291cc93a\") " Mar 20 08:06:04 crc kubenswrapper[5136]: I0320 08:06:04.415408 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1171863e-bf58-4961-a881-403e291cc93a-kube-api-access-fdbbj" (OuterVolumeSpecName: "kube-api-access-fdbbj") pod "1171863e-bf58-4961-a881-403e291cc93a" (UID: "1171863e-bf58-4961-a881-403e291cc93a"). InnerVolumeSpecName "kube-api-access-fdbbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:06:04 crc kubenswrapper[5136]: I0320 08:06:04.513782 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdbbj\" (UniqueName: \"kubernetes.io/projected/1171863e-bf58-4961-a881-403e291cc93a-kube-api-access-fdbbj\") on node \"crc\" DevicePath \"\"" Mar 20 08:06:05 crc kubenswrapper[5136]: I0320 08:06:05.029113 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566566-tj6mv" event={"ID":"1171863e-bf58-4961-a881-403e291cc93a","Type":"ContainerDied","Data":"bba6e3b4e06639d22811ed7b4a4ba11c4a1f53e2c440a6933e344c68966fb961"} Mar 20 08:06:05 crc kubenswrapper[5136]: I0320 08:06:05.029165 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bba6e3b4e06639d22811ed7b4a4ba11c4a1f53e2c440a6933e344c68966fb961" Mar 20 08:06:05 crc kubenswrapper[5136]: I0320 08:06:05.029214 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566566-tj6mv" Mar 20 08:06:05 crc kubenswrapper[5136]: I0320 08:06:05.432915 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566560-p6kgg"] Mar 20 08:06:05 crc kubenswrapper[5136]: I0320 08:06:05.437405 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566560-p6kgg"] Mar 20 08:06:06 crc kubenswrapper[5136]: I0320 08:06:06.407162 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dafdbb11-e22c-4545-8678-7757ef7e8605" path="/var/lib/kubelet/pods/dafdbb11-e22c-4545-8678-7757ef7e8605/volumes" Mar 20 08:06:15 crc kubenswrapper[5136]: I0320 08:06:15.822598 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:06:15 crc kubenswrapper[5136]: I0320 08:06:15.823096 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:06:35 crc kubenswrapper[5136]: I0320 08:06:35.905537 5136 scope.go:117] "RemoveContainer" containerID="12a5c143763826b6ba302aa9399c3eae56ceb49f8dbb078183073cdf280ba6a4" Mar 20 08:06:45 crc kubenswrapper[5136]: I0320 08:06:45.822680 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:06:45 crc kubenswrapper[5136]: 
I0320 08:06:45.823287 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:07:15 crc kubenswrapper[5136]: I0320 08:07:15.822477 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:07:15 crc kubenswrapper[5136]: I0320 08:07:15.823149 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:07:15 crc kubenswrapper[5136]: I0320 08:07:15.823207 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 08:07:15 crc kubenswrapper[5136]: I0320 08:07:15.823990 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 08:07:15 crc kubenswrapper[5136]: I0320 08:07:15.824071 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" 
containerName="machine-config-daemon" containerID="cri-o://58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" gracePeriod=600 Mar 20 08:07:15 crc kubenswrapper[5136]: E0320 08:07:15.951555 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:07:16 crc kubenswrapper[5136]: I0320 08:07:16.605268 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" exitCode=0 Mar 20 08:07:16 crc kubenswrapper[5136]: I0320 08:07:16.605336 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"} Mar 20 08:07:16 crc kubenswrapper[5136]: I0320 08:07:16.605387 5136 scope.go:117] "RemoveContainer" containerID="5ac6f79e4add2f58b46aeafaf79304def26e975edac0dc7ec4b6cf1f00afef71" Mar 20 08:07:16 crc kubenswrapper[5136]: I0320 08:07:16.606210 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" Mar 20 08:07:16 crc kubenswrapper[5136]: E0320 08:07:16.606846 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:07:30 crc kubenswrapper[5136]: I0320 08:07:30.397322 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" Mar 20 08:07:30 crc kubenswrapper[5136]: E0320 08:07:30.398384 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:07:43 crc kubenswrapper[5136]: I0320 08:07:43.397160 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" Mar 20 08:07:43 crc kubenswrapper[5136]: E0320 08:07:43.398164 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.305218 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4hwz2"] Mar 20 08:07:51 crc kubenswrapper[5136]: E0320 08:07:51.306370 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1171863e-bf58-4961-a881-403e291cc93a" containerName="oc" Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.306394 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="1171863e-bf58-4961-a881-403e291cc93a" containerName="oc" Mar 
20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.306660 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="1171863e-bf58-4961-a881-403e291cc93a" containerName="oc" Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.309696 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4hwz2" Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.322899 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4hwz2"] Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.402593 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjfjm\" (UniqueName: \"kubernetes.io/projected/35c277d6-a1a4-484a-bf8a-cb58210afedd-kube-api-access-fjfjm\") pod \"redhat-operators-4hwz2\" (UID: \"35c277d6-a1a4-484a-bf8a-cb58210afedd\") " pod="openshift-marketplace/redhat-operators-4hwz2" Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.402650 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35c277d6-a1a4-484a-bf8a-cb58210afedd-utilities\") pod \"redhat-operators-4hwz2\" (UID: \"35c277d6-a1a4-484a-bf8a-cb58210afedd\") " pod="openshift-marketplace/redhat-operators-4hwz2" Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.402712 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35c277d6-a1a4-484a-bf8a-cb58210afedd-catalog-content\") pod \"redhat-operators-4hwz2\" (UID: \"35c277d6-a1a4-484a-bf8a-cb58210afedd\") " pod="openshift-marketplace/redhat-operators-4hwz2" Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.504240 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjfjm\" (UniqueName: 
\"kubernetes.io/projected/35c277d6-a1a4-484a-bf8a-cb58210afedd-kube-api-access-fjfjm\") pod \"redhat-operators-4hwz2\" (UID: \"35c277d6-a1a4-484a-bf8a-cb58210afedd\") " pod="openshift-marketplace/redhat-operators-4hwz2" Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.504293 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35c277d6-a1a4-484a-bf8a-cb58210afedd-utilities\") pod \"redhat-operators-4hwz2\" (UID: \"35c277d6-a1a4-484a-bf8a-cb58210afedd\") " pod="openshift-marketplace/redhat-operators-4hwz2" Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.504358 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35c277d6-a1a4-484a-bf8a-cb58210afedd-catalog-content\") pod \"redhat-operators-4hwz2\" (UID: \"35c277d6-a1a4-484a-bf8a-cb58210afedd\") " pod="openshift-marketplace/redhat-operators-4hwz2" Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.505091 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35c277d6-a1a4-484a-bf8a-cb58210afedd-catalog-content\") pod \"redhat-operators-4hwz2\" (UID: \"35c277d6-a1a4-484a-bf8a-cb58210afedd\") " pod="openshift-marketplace/redhat-operators-4hwz2" Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.505116 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35c277d6-a1a4-484a-bf8a-cb58210afedd-utilities\") pod \"redhat-operators-4hwz2\" (UID: \"35c277d6-a1a4-484a-bf8a-cb58210afedd\") " pod="openshift-marketplace/redhat-operators-4hwz2" Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.529449 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjfjm\" (UniqueName: 
\"kubernetes.io/projected/35c277d6-a1a4-484a-bf8a-cb58210afedd-kube-api-access-fjfjm\") pod \"redhat-operators-4hwz2\" (UID: \"35c277d6-a1a4-484a-bf8a-cb58210afedd\") " pod="openshift-marketplace/redhat-operators-4hwz2" Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.633501 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4hwz2" Mar 20 08:07:52 crc kubenswrapper[5136]: I0320 08:07:52.070723 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4hwz2"] Mar 20 08:07:52 crc kubenswrapper[5136]: I0320 08:07:52.895559 5136 generic.go:334] "Generic (PLEG): container finished" podID="35c277d6-a1a4-484a-bf8a-cb58210afedd" containerID="6de912e3d14dade4eaa8e15869a4fc295a324df503206b3982fb65b3bacb4cc5" exitCode=0 Mar 20 08:07:52 crc kubenswrapper[5136]: I0320 08:07:52.895706 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwz2" event={"ID":"35c277d6-a1a4-484a-bf8a-cb58210afedd","Type":"ContainerDied","Data":"6de912e3d14dade4eaa8e15869a4fc295a324df503206b3982fb65b3bacb4cc5"} Mar 20 08:07:52 crc kubenswrapper[5136]: I0320 08:07:52.895868 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwz2" event={"ID":"35c277d6-a1a4-484a-bf8a-cb58210afedd","Type":"ContainerStarted","Data":"81a9f974105c9ce0fad94de5e00134979d625fe19fad5a54c8ea767113a113e2"} Mar 20 08:07:52 crc kubenswrapper[5136]: I0320 08:07:52.897342 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 08:07:53 crc kubenswrapper[5136]: I0320 08:07:53.912245 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwz2" event={"ID":"35c277d6-a1a4-484a-bf8a-cb58210afedd","Type":"ContainerStarted","Data":"efe5f42b95f06ead1876a723806758f60f07a5cfd58bb541248d1216fc5a608e"} Mar 20 08:07:54 crc 
kubenswrapper[5136]: I0320 08:07:54.397717 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" Mar 20 08:07:54 crc kubenswrapper[5136]: E0320 08:07:54.398469 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:07:54 crc kubenswrapper[5136]: I0320 08:07:54.924495 5136 generic.go:334] "Generic (PLEG): container finished" podID="35c277d6-a1a4-484a-bf8a-cb58210afedd" containerID="efe5f42b95f06ead1876a723806758f60f07a5cfd58bb541248d1216fc5a608e" exitCode=0 Mar 20 08:07:54 crc kubenswrapper[5136]: I0320 08:07:54.924566 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwz2" event={"ID":"35c277d6-a1a4-484a-bf8a-cb58210afedd","Type":"ContainerDied","Data":"efe5f42b95f06ead1876a723806758f60f07a5cfd58bb541248d1216fc5a608e"} Mar 20 08:07:55 crc kubenswrapper[5136]: I0320 08:07:55.936786 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwz2" event={"ID":"35c277d6-a1a4-484a-bf8a-cb58210afedd","Type":"ContainerStarted","Data":"d6f5d847f74784183e439999223e867f4c86e5729a27c84b84985a22ad781906"} Mar 20 08:07:55 crc kubenswrapper[5136]: I0320 08:07:55.966503 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4hwz2" podStartSLOduration=2.523671052 podStartE2EDuration="4.966477956s" podCreationTimestamp="2026-03-20 08:07:51 +0000 UTC" firstStartedPulling="2026-03-20 08:07:52.897084986 +0000 UTC m=+4705.156396137" lastFinishedPulling="2026-03-20 08:07:55.33989188 +0000 UTC 
m=+4707.599203041" observedRunningTime="2026-03-20 08:07:55.959321395 +0000 UTC m=+4708.218632546" watchObservedRunningTime="2026-03-20 08:07:55.966477956 +0000 UTC m=+4708.225789127" Mar 20 08:08:00 crc kubenswrapper[5136]: I0320 08:08:00.142151 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566568-t2xwb"] Mar 20 08:08:00 crc kubenswrapper[5136]: I0320 08:08:00.143491 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566568-t2xwb" Mar 20 08:08:00 crc kubenswrapper[5136]: I0320 08:08:00.146735 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:08:00 crc kubenswrapper[5136]: I0320 08:08:00.146998 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:08:00 crc kubenswrapper[5136]: I0320 08:08:00.147214 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:08:00 crc kubenswrapper[5136]: I0320 08:08:00.162443 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566568-t2xwb"] Mar 20 08:08:00 crc kubenswrapper[5136]: I0320 08:08:00.335112 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lvhc\" (UniqueName: \"kubernetes.io/projected/b174d612-6f70-49f1-a024-93c2a9bd0824-kube-api-access-7lvhc\") pod \"auto-csr-approver-29566568-t2xwb\" (UID: \"b174d612-6f70-49f1-a024-93c2a9bd0824\") " pod="openshift-infra/auto-csr-approver-29566568-t2xwb" Mar 20 08:08:00 crc kubenswrapper[5136]: I0320 08:08:00.436503 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lvhc\" (UniqueName: \"kubernetes.io/projected/b174d612-6f70-49f1-a024-93c2a9bd0824-kube-api-access-7lvhc\") pod 
\"auto-csr-approver-29566568-t2xwb\" (UID: \"b174d612-6f70-49f1-a024-93c2a9bd0824\") " pod="openshift-infra/auto-csr-approver-29566568-t2xwb" Mar 20 08:08:00 crc kubenswrapper[5136]: I0320 08:08:00.469134 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lvhc\" (UniqueName: \"kubernetes.io/projected/b174d612-6f70-49f1-a024-93c2a9bd0824-kube-api-access-7lvhc\") pod \"auto-csr-approver-29566568-t2xwb\" (UID: \"b174d612-6f70-49f1-a024-93c2a9bd0824\") " pod="openshift-infra/auto-csr-approver-29566568-t2xwb" Mar 20 08:08:00 crc kubenswrapper[5136]: I0320 08:08:00.763965 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566568-t2xwb" Mar 20 08:08:01 crc kubenswrapper[5136]: I0320 08:08:01.015126 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566568-t2xwb"] Mar 20 08:08:01 crc kubenswrapper[5136]: I0320 08:08:01.634234 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4hwz2" Mar 20 08:08:01 crc kubenswrapper[5136]: I0320 08:08:01.634302 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4hwz2" Mar 20 08:08:01 crc kubenswrapper[5136]: I0320 08:08:01.989114 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566568-t2xwb" event={"ID":"b174d612-6f70-49f1-a024-93c2a9bd0824","Type":"ContainerStarted","Data":"6841d8670809f2ed51d755167f8c9ddff5654a631dfbe640e35c1fbc6a14d522"} Mar 20 08:08:02 crc kubenswrapper[5136]: I0320 08:08:02.686226 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4hwz2" podUID="35c277d6-a1a4-484a-bf8a-cb58210afedd" containerName="registry-server" probeResult="failure" output=< Mar 20 08:08:02 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 
08:08:02 crc kubenswrapper[5136]: > Mar 20 08:08:03 crc kubenswrapper[5136]: I0320 08:08:03.001976 5136 generic.go:334] "Generic (PLEG): container finished" podID="b174d612-6f70-49f1-a024-93c2a9bd0824" containerID="cfc59d82836f1e5aa8be6bb29641caa9e94e4841e523822550b31308b0957aae" exitCode=0 Mar 20 08:08:03 crc kubenswrapper[5136]: I0320 08:08:03.002081 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566568-t2xwb" event={"ID":"b174d612-6f70-49f1-a024-93c2a9bd0824","Type":"ContainerDied","Data":"cfc59d82836f1e5aa8be6bb29641caa9e94e4841e523822550b31308b0957aae"} Mar 20 08:08:04 crc kubenswrapper[5136]: I0320 08:08:04.326983 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566568-t2xwb" Mar 20 08:08:04 crc kubenswrapper[5136]: I0320 08:08:04.495260 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lvhc\" (UniqueName: \"kubernetes.io/projected/b174d612-6f70-49f1-a024-93c2a9bd0824-kube-api-access-7lvhc\") pod \"b174d612-6f70-49f1-a024-93c2a9bd0824\" (UID: \"b174d612-6f70-49f1-a024-93c2a9bd0824\") " Mar 20 08:08:04 crc kubenswrapper[5136]: I0320 08:08:04.501449 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b174d612-6f70-49f1-a024-93c2a9bd0824-kube-api-access-7lvhc" (OuterVolumeSpecName: "kube-api-access-7lvhc") pod "b174d612-6f70-49f1-a024-93c2a9bd0824" (UID: "b174d612-6f70-49f1-a024-93c2a9bd0824"). InnerVolumeSpecName "kube-api-access-7lvhc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:08:04 crc kubenswrapper[5136]: I0320 08:08:04.597673 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lvhc\" (UniqueName: \"kubernetes.io/projected/b174d612-6f70-49f1-a024-93c2a9bd0824-kube-api-access-7lvhc\") on node \"crc\" DevicePath \"\"" Mar 20 08:08:05 crc kubenswrapper[5136]: I0320 08:08:05.020138 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566568-t2xwb" event={"ID":"b174d612-6f70-49f1-a024-93c2a9bd0824","Type":"ContainerDied","Data":"6841d8670809f2ed51d755167f8c9ddff5654a631dfbe640e35c1fbc6a14d522"} Mar 20 08:08:05 crc kubenswrapper[5136]: I0320 08:08:05.020572 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6841d8670809f2ed51d755167f8c9ddff5654a631dfbe640e35c1fbc6a14d522" Mar 20 08:08:05 crc kubenswrapper[5136]: I0320 08:08:05.020175 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566568-t2xwb" Mar 20 08:08:05 crc kubenswrapper[5136]: I0320 08:08:05.408401 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566562-r69l2"] Mar 20 08:08:05 crc kubenswrapper[5136]: I0320 08:08:05.417763 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566562-r69l2"] Mar 20 08:08:06 crc kubenswrapper[5136]: I0320 08:08:06.411172 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f60a5f62-51f3-48fa-b718-e55da57c2647" path="/var/lib/kubelet/pods/f60a5f62-51f3-48fa-b718-e55da57c2647/volumes" Mar 20 08:08:09 crc kubenswrapper[5136]: I0320 08:08:09.397234 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" Mar 20 08:08:09 crc kubenswrapper[5136]: E0320 08:08:09.398139 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:08:11 crc kubenswrapper[5136]: I0320 08:08:11.698526 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4hwz2" Mar 20 08:08:11 crc kubenswrapper[5136]: I0320 08:08:11.748544 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4hwz2" Mar 20 08:08:11 crc kubenswrapper[5136]: I0320 08:08:11.953402 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4hwz2"] Mar 20 08:08:13 crc kubenswrapper[5136]: I0320 08:08:13.075672 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4hwz2" podUID="35c277d6-a1a4-484a-bf8a-cb58210afedd" containerName="registry-server" containerID="cri-o://d6f5d847f74784183e439999223e867f4c86e5729a27c84b84985a22ad781906" gracePeriod=2 Mar 20 08:08:13 crc kubenswrapper[5136]: I0320 08:08:13.752712 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4hwz2" Mar 20 08:08:13 crc kubenswrapper[5136]: I0320 08:08:13.941764 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35c277d6-a1a4-484a-bf8a-cb58210afedd-utilities\") pod \"35c277d6-a1a4-484a-bf8a-cb58210afedd\" (UID: \"35c277d6-a1a4-484a-bf8a-cb58210afedd\") " Mar 20 08:08:13 crc kubenswrapper[5136]: I0320 08:08:13.941947 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35c277d6-a1a4-484a-bf8a-cb58210afedd-catalog-content\") pod \"35c277d6-a1a4-484a-bf8a-cb58210afedd\" (UID: \"35c277d6-a1a4-484a-bf8a-cb58210afedd\") " Mar 20 08:08:13 crc kubenswrapper[5136]: I0320 08:08:13.941984 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjfjm\" (UniqueName: \"kubernetes.io/projected/35c277d6-a1a4-484a-bf8a-cb58210afedd-kube-api-access-fjfjm\") pod \"35c277d6-a1a4-484a-bf8a-cb58210afedd\" (UID: \"35c277d6-a1a4-484a-bf8a-cb58210afedd\") " Mar 20 08:08:13 crc kubenswrapper[5136]: I0320 08:08:13.944431 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35c277d6-a1a4-484a-bf8a-cb58210afedd-utilities" (OuterVolumeSpecName: "utilities") pod "35c277d6-a1a4-484a-bf8a-cb58210afedd" (UID: "35c277d6-a1a4-484a-bf8a-cb58210afedd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:08:13 crc kubenswrapper[5136]: I0320 08:08:13.956198 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35c277d6-a1a4-484a-bf8a-cb58210afedd-kube-api-access-fjfjm" (OuterVolumeSpecName: "kube-api-access-fjfjm") pod "35c277d6-a1a4-484a-bf8a-cb58210afedd" (UID: "35c277d6-a1a4-484a-bf8a-cb58210afedd"). InnerVolumeSpecName "kube-api-access-fjfjm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.044110 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjfjm\" (UniqueName: \"kubernetes.io/projected/35c277d6-a1a4-484a-bf8a-cb58210afedd-kube-api-access-fjfjm\") on node \"crc\" DevicePath \"\"" Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.044151 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35c277d6-a1a4-484a-bf8a-cb58210afedd-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.086503 5136 generic.go:334] "Generic (PLEG): container finished" podID="35c277d6-a1a4-484a-bf8a-cb58210afedd" containerID="d6f5d847f74784183e439999223e867f4c86e5729a27c84b84985a22ad781906" exitCode=0 Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.086561 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4hwz2" Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.086602 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwz2" event={"ID":"35c277d6-a1a4-484a-bf8a-cb58210afedd","Type":"ContainerDied","Data":"d6f5d847f74784183e439999223e867f4c86e5729a27c84b84985a22ad781906"} Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.086665 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwz2" event={"ID":"35c277d6-a1a4-484a-bf8a-cb58210afedd","Type":"ContainerDied","Data":"81a9f974105c9ce0fad94de5e00134979d625fe19fad5a54c8ea767113a113e2"} Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.086694 5136 scope.go:117] "RemoveContainer" containerID="d6f5d847f74784183e439999223e867f4c86e5729a27c84b84985a22ad781906" Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.111660 5136 scope.go:117] "RemoveContainer" 
containerID="efe5f42b95f06ead1876a723806758f60f07a5cfd58bb541248d1216fc5a608e" Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.138690 5136 scope.go:117] "RemoveContainer" containerID="6de912e3d14dade4eaa8e15869a4fc295a324df503206b3982fb65b3bacb4cc5" Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.142857 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35c277d6-a1a4-484a-bf8a-cb58210afedd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35c277d6-a1a4-484a-bf8a-cb58210afedd" (UID: "35c277d6-a1a4-484a-bf8a-cb58210afedd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.145337 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35c277d6-a1a4-484a-bf8a-cb58210afedd-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.164775 5136 scope.go:117] "RemoveContainer" containerID="d6f5d847f74784183e439999223e867f4c86e5729a27c84b84985a22ad781906" Mar 20 08:08:14 crc kubenswrapper[5136]: E0320 08:08:14.165237 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6f5d847f74784183e439999223e867f4c86e5729a27c84b84985a22ad781906\": container with ID starting with d6f5d847f74784183e439999223e867f4c86e5729a27c84b84985a22ad781906 not found: ID does not exist" containerID="d6f5d847f74784183e439999223e867f4c86e5729a27c84b84985a22ad781906" Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.165269 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6f5d847f74784183e439999223e867f4c86e5729a27c84b84985a22ad781906"} err="failed to get container status \"d6f5d847f74784183e439999223e867f4c86e5729a27c84b84985a22ad781906\": rpc error: code = NotFound desc = could not 
find container \"d6f5d847f74784183e439999223e867f4c86e5729a27c84b84985a22ad781906\": container with ID starting with d6f5d847f74784183e439999223e867f4c86e5729a27c84b84985a22ad781906 not found: ID does not exist" Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.165291 5136 scope.go:117] "RemoveContainer" containerID="efe5f42b95f06ead1876a723806758f60f07a5cfd58bb541248d1216fc5a608e" Mar 20 08:08:14 crc kubenswrapper[5136]: E0320 08:08:14.165548 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efe5f42b95f06ead1876a723806758f60f07a5cfd58bb541248d1216fc5a608e\": container with ID starting with efe5f42b95f06ead1876a723806758f60f07a5cfd58bb541248d1216fc5a608e not found: ID does not exist" containerID="efe5f42b95f06ead1876a723806758f60f07a5cfd58bb541248d1216fc5a608e" Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.165570 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efe5f42b95f06ead1876a723806758f60f07a5cfd58bb541248d1216fc5a608e"} err="failed to get container status \"efe5f42b95f06ead1876a723806758f60f07a5cfd58bb541248d1216fc5a608e\": rpc error: code = NotFound desc = could not find container \"efe5f42b95f06ead1876a723806758f60f07a5cfd58bb541248d1216fc5a608e\": container with ID starting with efe5f42b95f06ead1876a723806758f60f07a5cfd58bb541248d1216fc5a608e not found: ID does not exist" Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.165584 5136 scope.go:117] "RemoveContainer" containerID="6de912e3d14dade4eaa8e15869a4fc295a324df503206b3982fb65b3bacb4cc5" Mar 20 08:08:14 crc kubenswrapper[5136]: E0320 08:08:14.165921 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6de912e3d14dade4eaa8e15869a4fc295a324df503206b3982fb65b3bacb4cc5\": container with ID starting with 6de912e3d14dade4eaa8e15869a4fc295a324df503206b3982fb65b3bacb4cc5 not found: ID 
does not exist" containerID="6de912e3d14dade4eaa8e15869a4fc295a324df503206b3982fb65b3bacb4cc5" Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.165985 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6de912e3d14dade4eaa8e15869a4fc295a324df503206b3982fb65b3bacb4cc5"} err="failed to get container status \"6de912e3d14dade4eaa8e15869a4fc295a324df503206b3982fb65b3bacb4cc5\": rpc error: code = NotFound desc = could not find container \"6de912e3d14dade4eaa8e15869a4fc295a324df503206b3982fb65b3bacb4cc5\": container with ID starting with 6de912e3d14dade4eaa8e15869a4fc295a324df503206b3982fb65b3bacb4cc5 not found: ID does not exist" Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.435410 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4hwz2"] Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.445719 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4hwz2"] Mar 20 08:08:16 crc kubenswrapper[5136]: I0320 08:08:16.409535 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35c277d6-a1a4-484a-bf8a-cb58210afedd" path="/var/lib/kubelet/pods/35c277d6-a1a4-484a-bf8a-cb58210afedd/volumes" Mar 20 08:08:24 crc kubenswrapper[5136]: I0320 08:08:24.396208 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" Mar 20 08:08:24 crc kubenswrapper[5136]: E0320 08:08:24.396767 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:08:36 crc kubenswrapper[5136]: I0320 
08:08:36.004453 5136 scope.go:117] "RemoveContainer" containerID="58b402a854cc55b74a5a39d5f73121fda2fdd8d8cefb4c57e5aa94f8a9a79d4e" Mar 20 08:08:39 crc kubenswrapper[5136]: I0320 08:08:39.396625 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" Mar 20 08:08:39 crc kubenswrapper[5136]: E0320 08:08:39.397495 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:08:53 crc kubenswrapper[5136]: I0320 08:08:53.396243 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" Mar 20 08:08:53 crc kubenswrapper[5136]: E0320 08:08:53.396982 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:09:06 crc kubenswrapper[5136]: I0320 08:09:06.397634 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" Mar 20 08:09:06 crc kubenswrapper[5136]: E0320 08:09:06.398876 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:09:17 crc kubenswrapper[5136]: I0320 08:09:17.396313 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" Mar 20 08:09:17 crc kubenswrapper[5136]: E0320 08:09:17.397210 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:09:31 crc kubenswrapper[5136]: I0320 08:09:31.396623 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" Mar 20 08:09:31 crc kubenswrapper[5136]: E0320 08:09:31.397360 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:09:45 crc kubenswrapper[5136]: I0320 08:09:45.397618 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" Mar 20 08:09:45 crc kubenswrapper[5136]: E0320 08:09:45.398654 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:09:56 crc kubenswrapper[5136]: I0320 08:09:56.397310 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" Mar 20 08:09:56 crc kubenswrapper[5136]: E0320 08:09:56.398416 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.199647 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566570-hgkrr"] Mar 20 08:10:00 crc kubenswrapper[5136]: E0320 08:10:00.201216 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35c277d6-a1a4-484a-bf8a-cb58210afedd" containerName="extract-content" Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.201243 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c277d6-a1a4-484a-bf8a-cb58210afedd" containerName="extract-content" Mar 20 08:10:00 crc kubenswrapper[5136]: E0320 08:10:00.201260 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b174d612-6f70-49f1-a024-93c2a9bd0824" containerName="oc" Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.201273 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b174d612-6f70-49f1-a024-93c2a9bd0824" containerName="oc" Mar 20 08:10:00 crc kubenswrapper[5136]: E0320 08:10:00.201294 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="35c277d6-a1a4-484a-bf8a-cb58210afedd" containerName="registry-server" Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.201308 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c277d6-a1a4-484a-bf8a-cb58210afedd" containerName="registry-server" Mar 20 08:10:00 crc kubenswrapper[5136]: E0320 08:10:00.201347 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35c277d6-a1a4-484a-bf8a-cb58210afedd" containerName="extract-utilities" Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.201360 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c277d6-a1a4-484a-bf8a-cb58210afedd" containerName="extract-utilities" Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.204057 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="b174d612-6f70-49f1-a024-93c2a9bd0824" containerName="oc" Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.204107 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="35c277d6-a1a4-484a-bf8a-cb58210afedd" containerName="registry-server" Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.204754 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566570-hgkrr"
Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.207476 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.209374 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745"
Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.209763 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.230465 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566570-hgkrr"]
Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.255554 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp5kv\" (UniqueName: \"kubernetes.io/projected/04302f0d-411c-49b0-8682-e64bb02c697d-kube-api-access-pp5kv\") pod \"auto-csr-approver-29566570-hgkrr\" (UID: \"04302f0d-411c-49b0-8682-e64bb02c697d\") " pod="openshift-infra/auto-csr-approver-29566570-hgkrr"
Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.356860 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp5kv\" (UniqueName: \"kubernetes.io/projected/04302f0d-411c-49b0-8682-e64bb02c697d-kube-api-access-pp5kv\") pod \"auto-csr-approver-29566570-hgkrr\" (UID: \"04302f0d-411c-49b0-8682-e64bb02c697d\") " pod="openshift-infra/auto-csr-approver-29566570-hgkrr"
Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.389666 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp5kv\" (UniqueName: \"kubernetes.io/projected/04302f0d-411c-49b0-8682-e64bb02c697d-kube-api-access-pp5kv\") pod \"auto-csr-approver-29566570-hgkrr\" (UID: \"04302f0d-411c-49b0-8682-e64bb02c697d\") " pod="openshift-infra/auto-csr-approver-29566570-hgkrr"
Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.533156 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566570-hgkrr"
Mar 20 08:10:01 crc kubenswrapper[5136]: I0320 08:10:01.034779 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566570-hgkrr"]
Mar 20 08:10:01 crc kubenswrapper[5136]: I0320 08:10:01.976418 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566570-hgkrr" event={"ID":"04302f0d-411c-49b0-8682-e64bb02c697d","Type":"ContainerStarted","Data":"f96bfa114c776de7a58b86b2866b3939409e0f7882cc08dd6b4d8966fd6a33dd"}
Mar 20 08:10:02 crc kubenswrapper[5136]: I0320 08:10:02.985474 5136 generic.go:334] "Generic (PLEG): container finished" podID="04302f0d-411c-49b0-8682-e64bb02c697d" containerID="79d8ba8cc4cde24163fe3c378a9767a3b425536e543bf53f50b55aaf7f5ba019" exitCode=0
Mar 20 08:10:02 crc kubenswrapper[5136]: I0320 08:10:02.985539 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566570-hgkrr" event={"ID":"04302f0d-411c-49b0-8682-e64bb02c697d","Type":"ContainerDied","Data":"79d8ba8cc4cde24163fe3c378a9767a3b425536e543bf53f50b55aaf7f5ba019"}
Mar 20 08:10:04 crc kubenswrapper[5136]: I0320 08:10:04.328777 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566570-hgkrr"
Mar 20 08:10:04 crc kubenswrapper[5136]: I0320 08:10:04.411710 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp5kv\" (UniqueName: \"kubernetes.io/projected/04302f0d-411c-49b0-8682-e64bb02c697d-kube-api-access-pp5kv\") pod \"04302f0d-411c-49b0-8682-e64bb02c697d\" (UID: \"04302f0d-411c-49b0-8682-e64bb02c697d\") "
Mar 20 08:10:04 crc kubenswrapper[5136]: I0320 08:10:04.416403 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04302f0d-411c-49b0-8682-e64bb02c697d-kube-api-access-pp5kv" (OuterVolumeSpecName: "kube-api-access-pp5kv") pod "04302f0d-411c-49b0-8682-e64bb02c697d" (UID: "04302f0d-411c-49b0-8682-e64bb02c697d"). InnerVolumeSpecName "kube-api-access-pp5kv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:10:04 crc kubenswrapper[5136]: I0320 08:10:04.512926 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp5kv\" (UniqueName: \"kubernetes.io/projected/04302f0d-411c-49b0-8682-e64bb02c697d-kube-api-access-pp5kv\") on node \"crc\" DevicePath \"\""
Mar 20 08:10:05 crc kubenswrapper[5136]: I0320 08:10:05.008225 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566570-hgkrr" event={"ID":"04302f0d-411c-49b0-8682-e64bb02c697d","Type":"ContainerDied","Data":"f96bfa114c776de7a58b86b2866b3939409e0f7882cc08dd6b4d8966fd6a33dd"}
Mar 20 08:10:05 crc kubenswrapper[5136]: I0320 08:10:05.008274 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f96bfa114c776de7a58b86b2866b3939409e0f7882cc08dd6b4d8966fd6a33dd"
Mar 20 08:10:05 crc kubenswrapper[5136]: I0320 08:10:05.008294 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566570-hgkrr"
Mar 20 08:10:05 crc kubenswrapper[5136]: I0320 08:10:05.392353 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566564-wps8c"]
Mar 20 08:10:05 crc kubenswrapper[5136]: I0320 08:10:05.397320 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566564-wps8c"]
Mar 20 08:10:06 crc kubenswrapper[5136]: I0320 08:10:06.407467 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd0441d6-4822-4c0e-b72c-b33d59e4a81b" path="/var/lib/kubelet/pods/dd0441d6-4822-4c0e-b72c-b33d59e4a81b/volumes"
Mar 20 08:10:11 crc kubenswrapper[5136]: I0320 08:10:11.397302 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"
Mar 20 08:10:11 crc kubenswrapper[5136]: E0320 08:10:11.398144 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:10:25 crc kubenswrapper[5136]: I0320 08:10:25.397258 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"
Mar 20 08:10:25 crc kubenswrapper[5136]: E0320 08:10:25.398005 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:10:36 crc kubenswrapper[5136]: I0320 08:10:36.138670 5136 scope.go:117] "RemoveContainer" containerID="a8db32019a3eb6483d295a328515205a1810920d8bfa5e500df3dffc05d44642"
Mar 20 08:10:38 crc kubenswrapper[5136]: I0320 08:10:38.404353 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"
Mar 20 08:10:38 crc kubenswrapper[5136]: E0320 08:10:38.405154 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:10:52 crc kubenswrapper[5136]: I0320 08:10:52.397347 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"
Mar 20 08:10:52 crc kubenswrapper[5136]: E0320 08:10:52.398409 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.455507 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rmw4k"]
Mar 20 08:10:56 crc kubenswrapper[5136]: E0320 08:10:56.457192 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04302f0d-411c-49b0-8682-e64bb02c697d" containerName="oc"
Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.457226 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="04302f0d-411c-49b0-8682-e64bb02c697d" containerName="oc"
Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.457490 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="04302f0d-411c-49b0-8682-e64bb02c697d" containerName="oc"
Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.458806 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rmw4k"
Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.462139 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rmw4k"]
Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.514887 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-catalog-content\") pod \"certified-operators-rmw4k\" (UID: \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\") " pod="openshift-marketplace/certified-operators-rmw4k"
Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.515013 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-utilities\") pod \"certified-operators-rmw4k\" (UID: \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\") " pod="openshift-marketplace/certified-operators-rmw4k"
Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.515054 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grr85\" (UniqueName: \"kubernetes.io/projected/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-kube-api-access-grr85\") pod \"certified-operators-rmw4k\" (UID: \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\") " pod="openshift-marketplace/certified-operators-rmw4k"
Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.616522 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-catalog-content\") pod \"certified-operators-rmw4k\" (UID: \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\") " pod="openshift-marketplace/certified-operators-rmw4k"
Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.616582 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-utilities\") pod \"certified-operators-rmw4k\" (UID: \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\") " pod="openshift-marketplace/certified-operators-rmw4k"
Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.616602 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grr85\" (UniqueName: \"kubernetes.io/projected/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-kube-api-access-grr85\") pod \"certified-operators-rmw4k\" (UID: \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\") " pod="openshift-marketplace/certified-operators-rmw4k"
Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.617136 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-utilities\") pod \"certified-operators-rmw4k\" (UID: \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\") " pod="openshift-marketplace/certified-operators-rmw4k"
Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.617162 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-catalog-content\") pod \"certified-operators-rmw4k\" (UID: \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\") " pod="openshift-marketplace/certified-operators-rmw4k"
Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.644287 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grr85\" (UniqueName: \"kubernetes.io/projected/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-kube-api-access-grr85\") pod \"certified-operators-rmw4k\" (UID: \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\") " pod="openshift-marketplace/certified-operators-rmw4k"
Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.775736 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rmw4k"
Mar 20 08:10:57 crc kubenswrapper[5136]: I0320 08:10:57.237742 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rmw4k"]
Mar 20 08:10:57 crc kubenswrapper[5136]: I0320 08:10:57.425396 5136 generic.go:334] "Generic (PLEG): container finished" podID="ba6e1a2e-96ff-4c0b-b86a-9c948d147361" containerID="270d94688957ff316448089c0328c19eaf4b0d8943c97e22ecaab09c199df8cf" exitCode=0
Mar 20 08:10:57 crc kubenswrapper[5136]: I0320 08:10:57.425445 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmw4k" event={"ID":"ba6e1a2e-96ff-4c0b-b86a-9c948d147361","Type":"ContainerDied","Data":"270d94688957ff316448089c0328c19eaf4b0d8943c97e22ecaab09c199df8cf"}
Mar 20 08:10:57 crc kubenswrapper[5136]: I0320 08:10:57.425495 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmw4k" event={"ID":"ba6e1a2e-96ff-4c0b-b86a-9c948d147361","Type":"ContainerStarted","Data":"90bdd832502596574851d98ed6ffbdebe2040ef137ca2eebb9103a5a610cac71"}
Mar 20 08:10:58 crc kubenswrapper[5136]: I0320 08:10:58.438368 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmw4k" event={"ID":"ba6e1a2e-96ff-4c0b-b86a-9c948d147361","Type":"ContainerStarted","Data":"701754c475b6e9d6d6bddb33807bd69e1788d1e9f1de5dea48a6211ca14f0233"}
Mar 20 08:10:59 crc kubenswrapper[5136]: I0320 08:10:59.451601 5136 generic.go:334] "Generic (PLEG): container finished" podID="ba6e1a2e-96ff-4c0b-b86a-9c948d147361" containerID="701754c475b6e9d6d6bddb33807bd69e1788d1e9f1de5dea48a6211ca14f0233" exitCode=0
Mar 20 08:10:59 crc kubenswrapper[5136]: I0320 08:10:59.451668 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmw4k" event={"ID":"ba6e1a2e-96ff-4c0b-b86a-9c948d147361","Type":"ContainerDied","Data":"701754c475b6e9d6d6bddb33807bd69e1788d1e9f1de5dea48a6211ca14f0233"}
Mar 20 08:11:00 crc kubenswrapper[5136]: I0320 08:11:00.460360 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmw4k" event={"ID":"ba6e1a2e-96ff-4c0b-b86a-9c948d147361","Type":"ContainerStarted","Data":"47daacead4d75fbd7e009c9e6c683245084de651fa2a44f3f368c07fed93f3c0"}
Mar 20 08:11:00 crc kubenswrapper[5136]: I0320 08:11:00.480626 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rmw4k" podStartSLOduration=2.038028789 podStartE2EDuration="4.480606063s" podCreationTimestamp="2026-03-20 08:10:56 +0000 UTC" firstStartedPulling="2026-03-20 08:10:57.426681555 +0000 UTC m=+4889.685992706" lastFinishedPulling="2026-03-20 08:10:59.869258789 +0000 UTC m=+4892.128569980" observedRunningTime="2026-03-20 08:11:00.480245503 +0000 UTC m=+4892.739556664" watchObservedRunningTime="2026-03-20 08:11:00.480606063 +0000 UTC m=+4892.739917214"
Mar 20 08:11:05 crc kubenswrapper[5136]: I0320 08:11:05.396276 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"
Mar 20 08:11:05 crc kubenswrapper[5136]: E0320 08:11:05.396749 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:11:06 crc kubenswrapper[5136]: I0320 08:11:06.777407 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rmw4k"
Mar 20 08:11:06 crc kubenswrapper[5136]: I0320 08:11:06.778085 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rmw4k"
Mar 20 08:11:06 crc kubenswrapper[5136]: I0320 08:11:06.852610 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rmw4k"
Mar 20 08:11:07 crc kubenswrapper[5136]: I0320 08:11:07.564555 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rmw4k"
Mar 20 08:11:07 crc kubenswrapper[5136]: I0320 08:11:07.619664 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rmw4k"]
Mar 20 08:11:09 crc kubenswrapper[5136]: I0320 08:11:09.537276 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rmw4k" podUID="ba6e1a2e-96ff-4c0b-b86a-9c948d147361" containerName="registry-server" containerID="cri-o://47daacead4d75fbd7e009c9e6c683245084de651fa2a44f3f368c07fed93f3c0" gracePeriod=2
Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.371019 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rmw4k"
Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.525162 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-catalog-content\") pod \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\" (UID: \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\") "
Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.525536 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grr85\" (UniqueName: \"kubernetes.io/projected/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-kube-api-access-grr85\") pod \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\" (UID: \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\") "
Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.525904 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-utilities\") pod \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\" (UID: \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\") "
Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.527360 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-utilities" (OuterVolumeSpecName: "utilities") pod "ba6e1a2e-96ff-4c0b-b86a-9c948d147361" (UID: "ba6e1a2e-96ff-4c0b-b86a-9c948d147361"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.532936 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-kube-api-access-grr85" (OuterVolumeSpecName: "kube-api-access-grr85") pod "ba6e1a2e-96ff-4c0b-b86a-9c948d147361" (UID: "ba6e1a2e-96ff-4c0b-b86a-9c948d147361"). InnerVolumeSpecName "kube-api-access-grr85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.546023 5136 generic.go:334] "Generic (PLEG): container finished" podID="ba6e1a2e-96ff-4c0b-b86a-9c948d147361" containerID="47daacead4d75fbd7e009c9e6c683245084de651fa2a44f3f368c07fed93f3c0" exitCode=0
Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.546068 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmw4k" event={"ID":"ba6e1a2e-96ff-4c0b-b86a-9c948d147361","Type":"ContainerDied","Data":"47daacead4d75fbd7e009c9e6c683245084de651fa2a44f3f368c07fed93f3c0"}
Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.546097 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmw4k" event={"ID":"ba6e1a2e-96ff-4c0b-b86a-9c948d147361","Type":"ContainerDied","Data":"90bdd832502596574851d98ed6ffbdebe2040ef137ca2eebb9103a5a610cac71"}
Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.546118 5136 scope.go:117] "RemoveContainer" containerID="47daacead4d75fbd7e009c9e6c683245084de651fa2a44f3f368c07fed93f3c0"
Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.546253 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rmw4k"
Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.580472 5136 scope.go:117] "RemoveContainer" containerID="701754c475b6e9d6d6bddb33807bd69e1788d1e9f1de5dea48a6211ca14f0233"
Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.588061 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba6e1a2e-96ff-4c0b-b86a-9c948d147361" (UID: "ba6e1a2e-96ff-4c0b-b86a-9c948d147361"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.599399 5136 scope.go:117] "RemoveContainer" containerID="270d94688957ff316448089c0328c19eaf4b0d8943c97e22ecaab09c199df8cf"
Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.627345 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.627374 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grr85\" (UniqueName: \"kubernetes.io/projected/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-kube-api-access-grr85\") on node \"crc\" DevicePath \"\""
Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.627387 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.633690 5136 scope.go:117] "RemoveContainer" containerID="47daacead4d75fbd7e009c9e6c683245084de651fa2a44f3f368c07fed93f3c0"
Mar 20 08:11:10 crc kubenswrapper[5136]: E0320 08:11:10.634149 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47daacead4d75fbd7e009c9e6c683245084de651fa2a44f3f368c07fed93f3c0\": container with ID starting with 47daacead4d75fbd7e009c9e6c683245084de651fa2a44f3f368c07fed93f3c0 not found: ID does not exist" containerID="47daacead4d75fbd7e009c9e6c683245084de651fa2a44f3f368c07fed93f3c0"
Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.634187 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47daacead4d75fbd7e009c9e6c683245084de651fa2a44f3f368c07fed93f3c0"} err="failed to get container status \"47daacead4d75fbd7e009c9e6c683245084de651fa2a44f3f368c07fed93f3c0\": rpc error: code = NotFound desc = could not find container \"47daacead4d75fbd7e009c9e6c683245084de651fa2a44f3f368c07fed93f3c0\": container with ID starting with 47daacead4d75fbd7e009c9e6c683245084de651fa2a44f3f368c07fed93f3c0 not found: ID does not exist"
Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.634210 5136 scope.go:117] "RemoveContainer" containerID="701754c475b6e9d6d6bddb33807bd69e1788d1e9f1de5dea48a6211ca14f0233"
Mar 20 08:11:10 crc kubenswrapper[5136]: E0320 08:11:10.634538 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"701754c475b6e9d6d6bddb33807bd69e1788d1e9f1de5dea48a6211ca14f0233\": container with ID starting with 701754c475b6e9d6d6bddb33807bd69e1788d1e9f1de5dea48a6211ca14f0233 not found: ID does not exist" containerID="701754c475b6e9d6d6bddb33807bd69e1788d1e9f1de5dea48a6211ca14f0233"
Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.634585 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"701754c475b6e9d6d6bddb33807bd69e1788d1e9f1de5dea48a6211ca14f0233"} err="failed to get container status \"701754c475b6e9d6d6bddb33807bd69e1788d1e9f1de5dea48a6211ca14f0233\": rpc error: code = NotFound desc = could not find container \"701754c475b6e9d6d6bddb33807bd69e1788d1e9f1de5dea48a6211ca14f0233\": container with ID starting with 701754c475b6e9d6d6bddb33807bd69e1788d1e9f1de5dea48a6211ca14f0233 not found: ID does not exist"
Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.634617 5136 scope.go:117] "RemoveContainer" containerID="270d94688957ff316448089c0328c19eaf4b0d8943c97e22ecaab09c199df8cf"
Mar 20 08:11:10 crc kubenswrapper[5136]: E0320 08:11:10.634961 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"270d94688957ff316448089c0328c19eaf4b0d8943c97e22ecaab09c199df8cf\": container with ID starting with 270d94688957ff316448089c0328c19eaf4b0d8943c97e22ecaab09c199df8cf not found: ID does not exist" containerID="270d94688957ff316448089c0328c19eaf4b0d8943c97e22ecaab09c199df8cf"
Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.634990 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"270d94688957ff316448089c0328c19eaf4b0d8943c97e22ecaab09c199df8cf"} err="failed to get container status \"270d94688957ff316448089c0328c19eaf4b0d8943c97e22ecaab09c199df8cf\": rpc error: code = NotFound desc = could not find container \"270d94688957ff316448089c0328c19eaf4b0d8943c97e22ecaab09c199df8cf\": container with ID starting with 270d94688957ff316448089c0328c19eaf4b0d8943c97e22ecaab09c199df8cf not found: ID does not exist"
Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.880010 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rmw4k"]
Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.888336 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rmw4k"]
Mar 20 08:11:12 crc kubenswrapper[5136]: I0320 08:11:12.407406 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba6e1a2e-96ff-4c0b-b86a-9c948d147361" path="/var/lib/kubelet/pods/ba6e1a2e-96ff-4c0b-b86a-9c948d147361/volumes"
Mar 20 08:11:20 crc kubenswrapper[5136]: I0320 08:11:20.397103 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"
Mar 20 08:11:20 crc kubenswrapper[5136]: E0320 08:11:20.397673 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:11:35 crc kubenswrapper[5136]: I0320 08:11:35.396866 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"
Mar 20 08:11:35 crc kubenswrapper[5136]: E0320 08:11:35.398003 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:11:49 crc kubenswrapper[5136]: I0320 08:11:49.396786 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"
Mar 20 08:11:49 crc kubenswrapper[5136]: E0320 08:11:49.397493 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.172976 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566572-59rnc"]
Mar 20 08:12:00 crc kubenswrapper[5136]: E0320 08:12:00.174338 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba6e1a2e-96ff-4c0b-b86a-9c948d147361" containerName="extract-content"
Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.174373 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba6e1a2e-96ff-4c0b-b86a-9c948d147361" containerName="extract-content"
Mar 20 08:12:00 crc kubenswrapper[5136]: E0320 08:12:00.174441 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba6e1a2e-96ff-4c0b-b86a-9c948d147361" containerName="registry-server"
Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.174461 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba6e1a2e-96ff-4c0b-b86a-9c948d147361" containerName="registry-server"
Mar 20 08:12:00 crc kubenswrapper[5136]: E0320 08:12:00.174514 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba6e1a2e-96ff-4c0b-b86a-9c948d147361" containerName="extract-utilities"
Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.174534 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba6e1a2e-96ff-4c0b-b86a-9c948d147361" containerName="extract-utilities"
Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.174915 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba6e1a2e-96ff-4c0b-b86a-9c948d147361" containerName="registry-server"
Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.175985 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566572-59rnc"
Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.181224 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.181346 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.182168 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745"
Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.184235 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566572-59rnc"]
Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.309861 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmc28\" (UniqueName: \"kubernetes.io/projected/e251183d-ffbf-414f-9d88-5830637722be-kube-api-access-hmc28\") pod \"auto-csr-approver-29566572-59rnc\" (UID: \"e251183d-ffbf-414f-9d88-5830637722be\") " pod="openshift-infra/auto-csr-approver-29566572-59rnc"
Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.411618 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmc28\" (UniqueName: \"kubernetes.io/projected/e251183d-ffbf-414f-9d88-5830637722be-kube-api-access-hmc28\") pod \"auto-csr-approver-29566572-59rnc\" (UID: \"e251183d-ffbf-414f-9d88-5830637722be\") " pod="openshift-infra/auto-csr-approver-29566572-59rnc"
Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.449039 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmc28\" (UniqueName: \"kubernetes.io/projected/e251183d-ffbf-414f-9d88-5830637722be-kube-api-access-hmc28\") pod \"auto-csr-approver-29566572-59rnc\" (UID: \"e251183d-ffbf-414f-9d88-5830637722be\") " pod="openshift-infra/auto-csr-approver-29566572-59rnc"
Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.505875 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566572-59rnc"
Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.932668 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566572-59rnc"]
Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.972500 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566572-59rnc" event={"ID":"e251183d-ffbf-414f-9d88-5830637722be","Type":"ContainerStarted","Data":"ac444ccd6467e4f8466b4aea1cc2a5776eaaba3c507a635df062455df3f23bf0"}
Mar 20 08:12:02 crc kubenswrapper[5136]: I0320 08:12:02.993980 5136 generic.go:334] "Generic (PLEG): container finished" podID="e251183d-ffbf-414f-9d88-5830637722be" containerID="b9689c90ff59dd42fc4279d62977b62b1e34234f2f912a119c0aaec47a889e16" exitCode=0
Mar 20 08:12:02 crc kubenswrapper[5136]: I0320 08:12:02.994034 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566572-59rnc" event={"ID":"e251183d-ffbf-414f-9d88-5830637722be","Type":"ContainerDied","Data":"b9689c90ff59dd42fc4279d62977b62b1e34234f2f912a119c0aaec47a889e16"}
Mar 20 08:12:03 crc kubenswrapper[5136]: I0320 08:12:03.397491 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"
Mar 20 08:12:03 crc kubenswrapper[5136]: E0320 08:12:03.397998 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:12:04 crc kubenswrapper[5136]: I0320 08:12:04.308325 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566572-59rnc"
Mar 20 08:12:04 crc kubenswrapper[5136]: I0320 08:12:04.476007 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmc28\" (UniqueName: \"kubernetes.io/projected/e251183d-ffbf-414f-9d88-5830637722be-kube-api-access-hmc28\") pod \"e251183d-ffbf-414f-9d88-5830637722be\" (UID: \"e251183d-ffbf-414f-9d88-5830637722be\") "
Mar 20 08:12:04 crc kubenswrapper[5136]: I0320 08:12:04.489716 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e251183d-ffbf-414f-9d88-5830637722be-kube-api-access-hmc28" (OuterVolumeSpecName: "kube-api-access-hmc28") pod "e251183d-ffbf-414f-9d88-5830637722be" (UID: "e251183d-ffbf-414f-9d88-5830637722be"). InnerVolumeSpecName "kube-api-access-hmc28". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:12:04 crc kubenswrapper[5136]: I0320 08:12:04.577236 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmc28\" (UniqueName: \"kubernetes.io/projected/e251183d-ffbf-414f-9d88-5830637722be-kube-api-access-hmc28\") on node \"crc\" DevicePath \"\""
Mar 20 08:12:05 crc kubenswrapper[5136]: I0320 08:12:05.022351 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566572-59rnc" event={"ID":"e251183d-ffbf-414f-9d88-5830637722be","Type":"ContainerDied","Data":"ac444ccd6467e4f8466b4aea1cc2a5776eaaba3c507a635df062455df3f23bf0"}
Mar 20 08:12:05 crc kubenswrapper[5136]: I0320 08:12:05.022410 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac444ccd6467e4f8466b4aea1cc2a5776eaaba3c507a635df062455df3f23bf0"
Mar 20 08:12:05 crc kubenswrapper[5136]: I0320 08:12:05.022453 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566572-59rnc"
Mar 20 08:12:05 crc kubenswrapper[5136]: I0320 08:12:05.399505 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566566-tj6mv"]
Mar 20 08:12:05 crc kubenswrapper[5136]: I0320 08:12:05.411352 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566566-tj6mv"]
Mar 20 08:12:06 crc kubenswrapper[5136]: I0320 08:12:06.414279 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1171863e-bf58-4961-a881-403e291cc93a" path="/var/lib/kubelet/pods/1171863e-bf58-4961-a881-403e291cc93a/volumes"
Mar 20 08:12:15 crc kubenswrapper[5136]: I0320 08:12:15.404081 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"
Mar 20 08:12:15 crc kubenswrapper[5136]: E0320 08:12:15.405096 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:12:29 crc kubenswrapper[5136]: I0320 08:12:29.396651 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"
Mar 20 08:12:30 crc kubenswrapper[5136]: I0320 08:12:30.251690 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"dd2ade7d5e861c64b69837aa7e42e6683017e086bd68cbfb02b7f2324fc9da14"}
Mar 20 08:12:36 crc kubenswrapper[5136]: I0320 08:12:36.272090 5136 scope.go:117] "RemoveContainer"
containerID="b1deef80cd1c3d1582469b2cb38e1b1a394ed2e7b6171fb2539451da0bf3a162" Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.175727 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566574-kgksp"] Mar 20 08:14:00 crc kubenswrapper[5136]: E0320 08:14:00.176937 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e251183d-ffbf-414f-9d88-5830637722be" containerName="oc" Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.176959 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e251183d-ffbf-414f-9d88-5830637722be" containerName="oc" Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.177162 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e251183d-ffbf-414f-9d88-5830637722be" containerName="oc" Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.177803 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566574-kgksp" Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.184261 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.184321 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.184422 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.189836 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566574-kgksp"] Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.268266 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x2nm\" (UniqueName: \"kubernetes.io/projected/aa57e02b-5eb6-401e-997d-a451c285486e-kube-api-access-7x2nm\") 
pod \"auto-csr-approver-29566574-kgksp\" (UID: \"aa57e02b-5eb6-401e-997d-a451c285486e\") " pod="openshift-infra/auto-csr-approver-29566574-kgksp" Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.370042 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x2nm\" (UniqueName: \"kubernetes.io/projected/aa57e02b-5eb6-401e-997d-a451c285486e-kube-api-access-7x2nm\") pod \"auto-csr-approver-29566574-kgksp\" (UID: \"aa57e02b-5eb6-401e-997d-a451c285486e\") " pod="openshift-infra/auto-csr-approver-29566574-kgksp" Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.402918 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x2nm\" (UniqueName: \"kubernetes.io/projected/aa57e02b-5eb6-401e-997d-a451c285486e-kube-api-access-7x2nm\") pod \"auto-csr-approver-29566574-kgksp\" (UID: \"aa57e02b-5eb6-401e-997d-a451c285486e\") " pod="openshift-infra/auto-csr-approver-29566574-kgksp" Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.512222 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566574-kgksp" Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.754227 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566574-kgksp"] Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.766838 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.963166 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566574-kgksp" event={"ID":"aa57e02b-5eb6-401e-997d-a451c285486e","Type":"ContainerStarted","Data":"a16e247caaaa1e1b4ac37bc80fe9cb32bc37aaad6451716e9c3b0896807d8606"} Mar 20 08:14:01 crc kubenswrapper[5136]: I0320 08:14:01.971184 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566574-kgksp" event={"ID":"aa57e02b-5eb6-401e-997d-a451c285486e","Type":"ContainerStarted","Data":"f86c6f9b4ef14a2e3961efc76d0057737a2a18ff9c282b67f83d05bca07f26b4"} Mar 20 08:14:01 crc kubenswrapper[5136]: I0320 08:14:01.989834 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566574-kgksp" podStartSLOduration=1.115872327 podStartE2EDuration="1.98979554s" podCreationTimestamp="2026-03-20 08:14:00 +0000 UTC" firstStartedPulling="2026-03-20 08:14:00.766552882 +0000 UTC m=+5073.025864033" lastFinishedPulling="2026-03-20 08:14:01.640476095 +0000 UTC m=+5073.899787246" observedRunningTime="2026-03-20 08:14:01.984862338 +0000 UTC m=+5074.244173509" watchObservedRunningTime="2026-03-20 08:14:01.98979554 +0000 UTC m=+5074.249106711" Mar 20 08:14:02 crc kubenswrapper[5136]: I0320 08:14:02.980891 5136 generic.go:334] "Generic (PLEG): container finished" podID="aa57e02b-5eb6-401e-997d-a451c285486e" containerID="f86c6f9b4ef14a2e3961efc76d0057737a2a18ff9c282b67f83d05bca07f26b4" exitCode=0 Mar 20 08:14:02 crc kubenswrapper[5136]: 
I0320 08:14:02.980954 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566574-kgksp" event={"ID":"aa57e02b-5eb6-401e-997d-a451c285486e","Type":"ContainerDied","Data":"f86c6f9b4ef14a2e3961efc76d0057737a2a18ff9c282b67f83d05bca07f26b4"} Mar 20 08:14:04 crc kubenswrapper[5136]: I0320 08:14:04.362446 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566574-kgksp" Mar 20 08:14:04 crc kubenswrapper[5136]: I0320 08:14:04.550799 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x2nm\" (UniqueName: \"kubernetes.io/projected/aa57e02b-5eb6-401e-997d-a451c285486e-kube-api-access-7x2nm\") pod \"aa57e02b-5eb6-401e-997d-a451c285486e\" (UID: \"aa57e02b-5eb6-401e-997d-a451c285486e\") " Mar 20 08:14:04 crc kubenswrapper[5136]: I0320 08:14:04.556805 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa57e02b-5eb6-401e-997d-a451c285486e-kube-api-access-7x2nm" (OuterVolumeSpecName: "kube-api-access-7x2nm") pod "aa57e02b-5eb6-401e-997d-a451c285486e" (UID: "aa57e02b-5eb6-401e-997d-a451c285486e"). InnerVolumeSpecName "kube-api-access-7x2nm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:14:04 crc kubenswrapper[5136]: I0320 08:14:04.652558 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x2nm\" (UniqueName: \"kubernetes.io/projected/aa57e02b-5eb6-401e-997d-a451c285486e-kube-api-access-7x2nm\") on node \"crc\" DevicePath \"\"" Mar 20 08:14:05 crc kubenswrapper[5136]: I0320 08:14:05.006084 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566574-kgksp" event={"ID":"aa57e02b-5eb6-401e-997d-a451c285486e","Type":"ContainerDied","Data":"a16e247caaaa1e1b4ac37bc80fe9cb32bc37aaad6451716e9c3b0896807d8606"} Mar 20 08:14:05 crc kubenswrapper[5136]: I0320 08:14:05.006136 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a16e247caaaa1e1b4ac37bc80fe9cb32bc37aaad6451716e9c3b0896807d8606" Mar 20 08:14:05 crc kubenswrapper[5136]: I0320 08:14:05.006202 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566574-kgksp" Mar 20 08:14:05 crc kubenswrapper[5136]: I0320 08:14:05.052985 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566568-t2xwb"] Mar 20 08:14:05 crc kubenswrapper[5136]: I0320 08:14:05.059409 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566568-t2xwb"] Mar 20 08:14:06 crc kubenswrapper[5136]: I0320 08:14:06.409452 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b174d612-6f70-49f1-a024-93c2a9bd0824" path="/var/lib/kubelet/pods/b174d612-6f70-49f1-a024-93c2a9bd0824/volumes" Mar 20 08:14:12 crc kubenswrapper[5136]: I0320 08:14:12.832564 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x2f65"] Mar 20 08:14:12 crc kubenswrapper[5136]: E0320 08:14:12.833246 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="aa57e02b-5eb6-401e-997d-a451c285486e" containerName="oc" Mar 20 08:14:12 crc kubenswrapper[5136]: I0320 08:14:12.833257 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa57e02b-5eb6-401e-997d-a451c285486e" containerName="oc" Mar 20 08:14:12 crc kubenswrapper[5136]: I0320 08:14:12.833409 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa57e02b-5eb6-401e-997d-a451c285486e" containerName="oc" Mar 20 08:14:12 crc kubenswrapper[5136]: I0320 08:14:12.835129 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:12 crc kubenswrapper[5136]: I0320 08:14:12.849313 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x2f65"] Mar 20 08:14:12 crc kubenswrapper[5136]: I0320 08:14:12.976840 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c4048be-e11d-4237-a81a-abf158f5769c-catalog-content\") pod \"community-operators-x2f65\" (UID: \"5c4048be-e11d-4237-a81a-abf158f5769c\") " pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:12 crc kubenswrapper[5136]: I0320 08:14:12.977184 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c4048be-e11d-4237-a81a-abf158f5769c-utilities\") pod \"community-operators-x2f65\" (UID: \"5c4048be-e11d-4237-a81a-abf158f5769c\") " pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:12 crc kubenswrapper[5136]: I0320 08:14:12.977339 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgrrl\" (UniqueName: \"kubernetes.io/projected/5c4048be-e11d-4237-a81a-abf158f5769c-kube-api-access-lgrrl\") pod \"community-operators-x2f65\" (UID: \"5c4048be-e11d-4237-a81a-abf158f5769c\") " 
pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:13 crc kubenswrapper[5136]: I0320 08:14:13.078525 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c4048be-e11d-4237-a81a-abf158f5769c-catalog-content\") pod \"community-operators-x2f65\" (UID: \"5c4048be-e11d-4237-a81a-abf158f5769c\") " pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:13 crc kubenswrapper[5136]: I0320 08:14:13.078590 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c4048be-e11d-4237-a81a-abf158f5769c-utilities\") pod \"community-operators-x2f65\" (UID: \"5c4048be-e11d-4237-a81a-abf158f5769c\") " pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:13 crc kubenswrapper[5136]: I0320 08:14:13.078626 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgrrl\" (UniqueName: \"kubernetes.io/projected/5c4048be-e11d-4237-a81a-abf158f5769c-kube-api-access-lgrrl\") pod \"community-operators-x2f65\" (UID: \"5c4048be-e11d-4237-a81a-abf158f5769c\") " pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:13 crc kubenswrapper[5136]: I0320 08:14:13.079216 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c4048be-e11d-4237-a81a-abf158f5769c-catalog-content\") pod \"community-operators-x2f65\" (UID: \"5c4048be-e11d-4237-a81a-abf158f5769c\") " pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:13 crc kubenswrapper[5136]: I0320 08:14:13.079373 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c4048be-e11d-4237-a81a-abf158f5769c-utilities\") pod \"community-operators-x2f65\" (UID: \"5c4048be-e11d-4237-a81a-abf158f5769c\") " 
pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:13 crc kubenswrapper[5136]: I0320 08:14:13.095550 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgrrl\" (UniqueName: \"kubernetes.io/projected/5c4048be-e11d-4237-a81a-abf158f5769c-kube-api-access-lgrrl\") pod \"community-operators-x2f65\" (UID: \"5c4048be-e11d-4237-a81a-abf158f5769c\") " pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:13 crc kubenswrapper[5136]: I0320 08:14:13.172685 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:13 crc kubenswrapper[5136]: I0320 08:14:13.692616 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x2f65"] Mar 20 08:14:14 crc kubenswrapper[5136]: I0320 08:14:14.083272 5136 generic.go:334] "Generic (PLEG): container finished" podID="5c4048be-e11d-4237-a81a-abf158f5769c" containerID="d6225c9cedc3abddc5b8e87b8a7898daf5b7af1e662ea5e3d463ad89f159f066" exitCode=0 Mar 20 08:14:14 crc kubenswrapper[5136]: I0320 08:14:14.083344 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2f65" event={"ID":"5c4048be-e11d-4237-a81a-abf158f5769c","Type":"ContainerDied","Data":"d6225c9cedc3abddc5b8e87b8a7898daf5b7af1e662ea5e3d463ad89f159f066"} Mar 20 08:14:14 crc kubenswrapper[5136]: I0320 08:14:14.083599 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2f65" event={"ID":"5c4048be-e11d-4237-a81a-abf158f5769c","Type":"ContainerStarted","Data":"9e02d4de71743bfbef48bca2789ad9c6ca17dee5f5ffd60b25fb33f56e0796df"} Mar 20 08:14:15 crc kubenswrapper[5136]: I0320 08:14:15.093897 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2f65" 
event={"ID":"5c4048be-e11d-4237-a81a-abf158f5769c","Type":"ContainerStarted","Data":"bbc1298cb6948daace9e5ed6677563dff19cc7e5289b900c77b58ab5b38150ee"} Mar 20 08:14:16 crc kubenswrapper[5136]: I0320 08:14:16.107605 5136 generic.go:334] "Generic (PLEG): container finished" podID="5c4048be-e11d-4237-a81a-abf158f5769c" containerID="bbc1298cb6948daace9e5ed6677563dff19cc7e5289b900c77b58ab5b38150ee" exitCode=0 Mar 20 08:14:16 crc kubenswrapper[5136]: I0320 08:14:16.107670 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2f65" event={"ID":"5c4048be-e11d-4237-a81a-abf158f5769c","Type":"ContainerDied","Data":"bbc1298cb6948daace9e5ed6677563dff19cc7e5289b900c77b58ab5b38150ee"} Mar 20 08:14:17 crc kubenswrapper[5136]: I0320 08:14:17.118148 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2f65" event={"ID":"5c4048be-e11d-4237-a81a-abf158f5769c","Type":"ContainerStarted","Data":"cbc6a5a5977bd4361f0a06a15b6f0c484fbe479f399b542d448a878ae46d7ed5"} Mar 20 08:14:17 crc kubenswrapper[5136]: I0320 08:14:17.164017 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x2f65" podStartSLOduration=2.507148705 podStartE2EDuration="5.164000348s" podCreationTimestamp="2026-03-20 08:14:12 +0000 UTC" firstStartedPulling="2026-03-20 08:14:14.08533617 +0000 UTC m=+5086.344647321" lastFinishedPulling="2026-03-20 08:14:16.742187783 +0000 UTC m=+5089.001498964" observedRunningTime="2026-03-20 08:14:17.162174621 +0000 UTC m=+5089.421485772" watchObservedRunningTime="2026-03-20 08:14:17.164000348 +0000 UTC m=+5089.423311499" Mar 20 08:14:23 crc kubenswrapper[5136]: I0320 08:14:23.174225 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:23 crc kubenswrapper[5136]: I0320 08:14:23.174280 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:23 crc kubenswrapper[5136]: I0320 08:14:23.236608 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:24 crc kubenswrapper[5136]: I0320 08:14:24.223592 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:24 crc kubenswrapper[5136]: I0320 08:14:24.285747 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x2f65"] Mar 20 08:14:26 crc kubenswrapper[5136]: I0320 08:14:26.190405 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x2f65" podUID="5c4048be-e11d-4237-a81a-abf158f5769c" containerName="registry-server" containerID="cri-o://cbc6a5a5977bd4361f0a06a15b6f0c484fbe479f399b542d448a878ae46d7ed5" gracePeriod=2 Mar 20 08:14:26 crc kubenswrapper[5136]: I0320 08:14:26.626957 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:26 crc kubenswrapper[5136]: I0320 08:14:26.689774 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c4048be-e11d-4237-a81a-abf158f5769c-utilities\") pod \"5c4048be-e11d-4237-a81a-abf158f5769c\" (UID: \"5c4048be-e11d-4237-a81a-abf158f5769c\") " Mar 20 08:14:26 crc kubenswrapper[5136]: I0320 08:14:26.690428 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgrrl\" (UniqueName: \"kubernetes.io/projected/5c4048be-e11d-4237-a81a-abf158f5769c-kube-api-access-lgrrl\") pod \"5c4048be-e11d-4237-a81a-abf158f5769c\" (UID: \"5c4048be-e11d-4237-a81a-abf158f5769c\") " Mar 20 08:14:26 crc kubenswrapper[5136]: I0320 08:14:26.690541 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c4048be-e11d-4237-a81a-abf158f5769c-catalog-content\") pod \"5c4048be-e11d-4237-a81a-abf158f5769c\" (UID: \"5c4048be-e11d-4237-a81a-abf158f5769c\") " Mar 20 08:14:26 crc kubenswrapper[5136]: I0320 08:14:26.693144 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c4048be-e11d-4237-a81a-abf158f5769c-utilities" (OuterVolumeSpecName: "utilities") pod "5c4048be-e11d-4237-a81a-abf158f5769c" (UID: "5c4048be-e11d-4237-a81a-abf158f5769c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:14:26 crc kubenswrapper[5136]: I0320 08:14:26.704133 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c4048be-e11d-4237-a81a-abf158f5769c-kube-api-access-lgrrl" (OuterVolumeSpecName: "kube-api-access-lgrrl") pod "5c4048be-e11d-4237-a81a-abf158f5769c" (UID: "5c4048be-e11d-4237-a81a-abf158f5769c"). InnerVolumeSpecName "kube-api-access-lgrrl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:14:26 crc kubenswrapper[5136]: I0320 08:14:26.774319 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c4048be-e11d-4237-a81a-abf158f5769c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c4048be-e11d-4237-a81a-abf158f5769c" (UID: "5c4048be-e11d-4237-a81a-abf158f5769c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:14:26 crc kubenswrapper[5136]: I0320 08:14:26.792195 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c4048be-e11d-4237-a81a-abf158f5769c-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:14:26 crc kubenswrapper[5136]: I0320 08:14:26.792231 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgrrl\" (UniqueName: \"kubernetes.io/projected/5c4048be-e11d-4237-a81a-abf158f5769c-kube-api-access-lgrrl\") on node \"crc\" DevicePath \"\"" Mar 20 08:14:26 crc kubenswrapper[5136]: I0320 08:14:26.792242 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c4048be-e11d-4237-a81a-abf158f5769c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.199880 5136 generic.go:334] "Generic (PLEG): container finished" podID="5c4048be-e11d-4237-a81a-abf158f5769c" containerID="cbc6a5a5977bd4361f0a06a15b6f0c484fbe479f399b542d448a878ae46d7ed5" exitCode=0 Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.199937 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2f65" event={"ID":"5c4048be-e11d-4237-a81a-abf158f5769c","Type":"ContainerDied","Data":"cbc6a5a5977bd4361f0a06a15b6f0c484fbe479f399b542d448a878ae46d7ed5"} Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.200001 5136 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-x2f65" event={"ID":"5c4048be-e11d-4237-a81a-abf158f5769c","Type":"ContainerDied","Data":"9e02d4de71743bfbef48bca2789ad9c6ca17dee5f5ffd60b25fb33f56e0796df"} Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.200023 5136 scope.go:117] "RemoveContainer" containerID="cbc6a5a5977bd4361f0a06a15b6f0c484fbe479f399b542d448a878ae46d7ed5" Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.199954 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.218040 5136 scope.go:117] "RemoveContainer" containerID="bbc1298cb6948daace9e5ed6677563dff19cc7e5289b900c77b58ab5b38150ee" Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.236491 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x2f65"] Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.241781 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x2f65"] Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.271488 5136 scope.go:117] "RemoveContainer" containerID="d6225c9cedc3abddc5b8e87b8a7898daf5b7af1e662ea5e3d463ad89f159f066" Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.288748 5136 scope.go:117] "RemoveContainer" containerID="cbc6a5a5977bd4361f0a06a15b6f0c484fbe479f399b542d448a878ae46d7ed5" Mar 20 08:14:27 crc kubenswrapper[5136]: E0320 08:14:27.289174 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbc6a5a5977bd4361f0a06a15b6f0c484fbe479f399b542d448a878ae46d7ed5\": container with ID starting with cbc6a5a5977bd4361f0a06a15b6f0c484fbe479f399b542d448a878ae46d7ed5 not found: ID does not exist" containerID="cbc6a5a5977bd4361f0a06a15b6f0c484fbe479f399b542d448a878ae46d7ed5" Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 
08:14:27.289212 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbc6a5a5977bd4361f0a06a15b6f0c484fbe479f399b542d448a878ae46d7ed5"} err="failed to get container status \"cbc6a5a5977bd4361f0a06a15b6f0c484fbe479f399b542d448a878ae46d7ed5\": rpc error: code = NotFound desc = could not find container \"cbc6a5a5977bd4361f0a06a15b6f0c484fbe479f399b542d448a878ae46d7ed5\": container with ID starting with cbc6a5a5977bd4361f0a06a15b6f0c484fbe479f399b542d448a878ae46d7ed5 not found: ID does not exist" Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.289231 5136 scope.go:117] "RemoveContainer" containerID="bbc1298cb6948daace9e5ed6677563dff19cc7e5289b900c77b58ab5b38150ee" Mar 20 08:14:27 crc kubenswrapper[5136]: E0320 08:14:27.289500 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbc1298cb6948daace9e5ed6677563dff19cc7e5289b900c77b58ab5b38150ee\": container with ID starting with bbc1298cb6948daace9e5ed6677563dff19cc7e5289b900c77b58ab5b38150ee not found: ID does not exist" containerID="bbc1298cb6948daace9e5ed6677563dff19cc7e5289b900c77b58ab5b38150ee" Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.289521 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbc1298cb6948daace9e5ed6677563dff19cc7e5289b900c77b58ab5b38150ee"} err="failed to get container status \"bbc1298cb6948daace9e5ed6677563dff19cc7e5289b900c77b58ab5b38150ee\": rpc error: code = NotFound desc = could not find container \"bbc1298cb6948daace9e5ed6677563dff19cc7e5289b900c77b58ab5b38150ee\": container with ID starting with bbc1298cb6948daace9e5ed6677563dff19cc7e5289b900c77b58ab5b38150ee not found: ID does not exist" Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.289534 5136 scope.go:117] "RemoveContainer" containerID="d6225c9cedc3abddc5b8e87b8a7898daf5b7af1e662ea5e3d463ad89f159f066" Mar 20 08:14:27 crc 
kubenswrapper[5136]: E0320 08:14:27.289736 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6225c9cedc3abddc5b8e87b8a7898daf5b7af1e662ea5e3d463ad89f159f066\": container with ID starting with d6225c9cedc3abddc5b8e87b8a7898daf5b7af1e662ea5e3d463ad89f159f066 not found: ID does not exist" containerID="d6225c9cedc3abddc5b8e87b8a7898daf5b7af1e662ea5e3d463ad89f159f066" Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.289751 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6225c9cedc3abddc5b8e87b8a7898daf5b7af1e662ea5e3d463ad89f159f066"} err="failed to get container status \"d6225c9cedc3abddc5b8e87b8a7898daf5b7af1e662ea5e3d463ad89f159f066\": rpc error: code = NotFound desc = could not find container \"d6225c9cedc3abddc5b8e87b8a7898daf5b7af1e662ea5e3d463ad89f159f066\": container with ID starting with d6225c9cedc3abddc5b8e87b8a7898daf5b7af1e662ea5e3d463ad89f159f066 not found: ID does not exist" Mar 20 08:14:28 crc kubenswrapper[5136]: I0320 08:14:28.411630 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c4048be-e11d-4237-a81a-abf158f5769c" path="/var/lib/kubelet/pods/5c4048be-e11d-4237-a81a-abf158f5769c/volumes" Mar 20 08:14:36 crc kubenswrapper[5136]: I0320 08:14:36.395151 5136 scope.go:117] "RemoveContainer" containerID="cfc59d82836f1e5aa8be6bb29641caa9e94e4841e523822550b31308b0957aae" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.748089 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nmr9m"] Mar 20 08:14:43 crc kubenswrapper[5136]: E0320 08:14:43.748994 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4048be-e11d-4237-a81a-abf158f5769c" containerName="extract-utilities" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.749009 5136 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5c4048be-e11d-4237-a81a-abf158f5769c" containerName="extract-utilities" Mar 20 08:14:43 crc kubenswrapper[5136]: E0320 08:14:43.749035 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4048be-e11d-4237-a81a-abf158f5769c" containerName="extract-content" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.749044 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c4048be-e11d-4237-a81a-abf158f5769c" containerName="extract-content" Mar 20 08:14:43 crc kubenswrapper[5136]: E0320 08:14:43.749056 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4048be-e11d-4237-a81a-abf158f5769c" containerName="registry-server" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.749066 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c4048be-e11d-4237-a81a-abf158f5769c" containerName="registry-server" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.749225 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c4048be-e11d-4237-a81a-abf158f5769c" containerName="registry-server" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.750360 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.766165 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmr9m"] Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.851448 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4l45\" (UniqueName: \"kubernetes.io/projected/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-kube-api-access-d4l45\") pod \"redhat-marketplace-nmr9m\" (UID: \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\") " pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.851732 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-utilities\") pod \"redhat-marketplace-nmr9m\" (UID: \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\") " pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.851833 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-catalog-content\") pod \"redhat-marketplace-nmr9m\" (UID: \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\") " pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.953040 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4l45\" (UniqueName: \"kubernetes.io/projected/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-kube-api-access-d4l45\") pod \"redhat-marketplace-nmr9m\" (UID: \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\") " pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.953155 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-utilities\") pod \"redhat-marketplace-nmr9m\" (UID: \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\") " pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.953193 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-catalog-content\") pod \"redhat-marketplace-nmr9m\" (UID: \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\") " pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.953629 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-utilities\") pod \"redhat-marketplace-nmr9m\" (UID: \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\") " pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.953667 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-catalog-content\") pod \"redhat-marketplace-nmr9m\" (UID: \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\") " pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.983061 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4l45\" (UniqueName: \"kubernetes.io/projected/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-kube-api-access-d4l45\") pod \"redhat-marketplace-nmr9m\" (UID: \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\") " pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:44 crc kubenswrapper[5136]: I0320 08:14:44.078191 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:44 crc kubenswrapper[5136]: I0320 08:14:44.485059 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmr9m"] Mar 20 08:14:44 crc kubenswrapper[5136]: W0320 08:14:44.491561 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c24947f_946c_46e2_b9f5_0ec67f66a8fa.slice/crio-824c727394022780fe78bc03117fbbf21f94848a2fad49540b8e049f0ac9e77e WatchSource:0}: Error finding container 824c727394022780fe78bc03117fbbf21f94848a2fad49540b8e049f0ac9e77e: Status 404 returned error can't find the container with id 824c727394022780fe78bc03117fbbf21f94848a2fad49540b8e049f0ac9e77e Mar 20 08:14:45 crc kubenswrapper[5136]: I0320 08:14:45.348732 5136 generic.go:334] "Generic (PLEG): container finished" podID="7c24947f-946c-46e2-b9f5-0ec67f66a8fa" containerID="84b89ab79cf289c6e3a1407fa37c81d9fd2d3657b7ef6f42c22ab525f9790093" exitCode=0 Mar 20 08:14:45 crc kubenswrapper[5136]: I0320 08:14:45.348877 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmr9m" event={"ID":"7c24947f-946c-46e2-b9f5-0ec67f66a8fa","Type":"ContainerDied","Data":"84b89ab79cf289c6e3a1407fa37c81d9fd2d3657b7ef6f42c22ab525f9790093"} Mar 20 08:14:45 crc kubenswrapper[5136]: I0320 08:14:45.349231 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmr9m" event={"ID":"7c24947f-946c-46e2-b9f5-0ec67f66a8fa","Type":"ContainerStarted","Data":"824c727394022780fe78bc03117fbbf21f94848a2fad49540b8e049f0ac9e77e"} Mar 20 08:14:45 crc kubenswrapper[5136]: I0320 08:14:45.822208 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Mar 20 08:14:45 crc kubenswrapper[5136]: I0320 08:14:45.822293 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:14:46 crc kubenswrapper[5136]: I0320 08:14:46.357993 5136 generic.go:334] "Generic (PLEG): container finished" podID="7c24947f-946c-46e2-b9f5-0ec67f66a8fa" containerID="af7ccb5884d8d6850a0338f1d69afbf8f5ee1cd293c6cfe06e1d05671daaf8ee" exitCode=0 Mar 20 08:14:46 crc kubenswrapper[5136]: I0320 08:14:46.358043 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmr9m" event={"ID":"7c24947f-946c-46e2-b9f5-0ec67f66a8fa","Type":"ContainerDied","Data":"af7ccb5884d8d6850a0338f1d69afbf8f5ee1cd293c6cfe06e1d05671daaf8ee"} Mar 20 08:14:47 crc kubenswrapper[5136]: I0320 08:14:47.368030 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmr9m" event={"ID":"7c24947f-946c-46e2-b9f5-0ec67f66a8fa","Type":"ContainerStarted","Data":"ab3f2684041e19f28de1dd890f250986fd7effeec3fc00b7589acd7258fbcfed"} Mar 20 08:14:47 crc kubenswrapper[5136]: I0320 08:14:47.390427 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nmr9m" podStartSLOduration=2.922353106 podStartE2EDuration="4.390405954s" podCreationTimestamp="2026-03-20 08:14:43 +0000 UTC" firstStartedPulling="2026-03-20 08:14:45.350582222 +0000 UTC m=+5117.609893413" lastFinishedPulling="2026-03-20 08:14:46.8186351 +0000 UTC m=+5119.077946261" observedRunningTime="2026-03-20 08:14:47.389795775 +0000 UTC m=+5119.649106976" watchObservedRunningTime="2026-03-20 08:14:47.390405954 +0000 UTC m=+5119.649717125" Mar 20 08:14:54 crc 
kubenswrapper[5136]: I0320 08:14:54.078646 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:54 crc kubenswrapper[5136]: I0320 08:14:54.079075 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:54 crc kubenswrapper[5136]: I0320 08:14:54.132174 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:54 crc kubenswrapper[5136]: I0320 08:14:54.518580 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:54 crc kubenswrapper[5136]: I0320 08:14:54.570708 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmr9m"] Mar 20 08:14:56 crc kubenswrapper[5136]: I0320 08:14:56.433581 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nmr9m" podUID="7c24947f-946c-46e2-b9f5-0ec67f66a8fa" containerName="registry-server" containerID="cri-o://ab3f2684041e19f28de1dd890f250986fd7effeec3fc00b7589acd7258fbcfed" gracePeriod=2 Mar 20 08:14:56 crc kubenswrapper[5136]: I0320 08:14:56.938230 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.067471 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-catalog-content\") pod \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\" (UID: \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\") " Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.067555 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-utilities\") pod \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\" (UID: \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\") " Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.067635 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4l45\" (UniqueName: \"kubernetes.io/projected/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-kube-api-access-d4l45\") pod \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\" (UID: \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\") " Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.071053 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-utilities" (OuterVolumeSpecName: "utilities") pod "7c24947f-946c-46e2-b9f5-0ec67f66a8fa" (UID: "7c24947f-946c-46e2-b9f5-0ec67f66a8fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.076219 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-kube-api-access-d4l45" (OuterVolumeSpecName: "kube-api-access-d4l45") pod "7c24947f-946c-46e2-b9f5-0ec67f66a8fa" (UID: "7c24947f-946c-46e2-b9f5-0ec67f66a8fa"). InnerVolumeSpecName "kube-api-access-d4l45". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.091444 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c24947f-946c-46e2-b9f5-0ec67f66a8fa" (UID: "7c24947f-946c-46e2-b9f5-0ec67f66a8fa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.170091 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4l45\" (UniqueName: \"kubernetes.io/projected/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-kube-api-access-d4l45\") on node \"crc\" DevicePath \"\"" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.170132 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.170146 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.448541 5136 generic.go:334] "Generic (PLEG): container finished" podID="7c24947f-946c-46e2-b9f5-0ec67f66a8fa" containerID="ab3f2684041e19f28de1dd890f250986fd7effeec3fc00b7589acd7258fbcfed" exitCode=0 Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.448664 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.448623 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmr9m" event={"ID":"7c24947f-946c-46e2-b9f5-0ec67f66a8fa","Type":"ContainerDied","Data":"ab3f2684041e19f28de1dd890f250986fd7effeec3fc00b7589acd7258fbcfed"} Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.448925 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmr9m" event={"ID":"7c24947f-946c-46e2-b9f5-0ec67f66a8fa","Type":"ContainerDied","Data":"824c727394022780fe78bc03117fbbf21f94848a2fad49540b8e049f0ac9e77e"} Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.448970 5136 scope.go:117] "RemoveContainer" containerID="ab3f2684041e19f28de1dd890f250986fd7effeec3fc00b7589acd7258fbcfed" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.474997 5136 scope.go:117] "RemoveContainer" containerID="af7ccb5884d8d6850a0338f1d69afbf8f5ee1cd293c6cfe06e1d05671daaf8ee" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.505878 5136 scope.go:117] "RemoveContainer" containerID="84b89ab79cf289c6e3a1407fa37c81d9fd2d3657b7ef6f42c22ab525f9790093" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.508746 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmr9m"] Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.516106 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmr9m"] Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.542656 5136 scope.go:117] "RemoveContainer" containerID="ab3f2684041e19f28de1dd890f250986fd7effeec3fc00b7589acd7258fbcfed" Mar 20 08:14:57 crc kubenswrapper[5136]: E0320 08:14:57.543375 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ab3f2684041e19f28de1dd890f250986fd7effeec3fc00b7589acd7258fbcfed\": container with ID starting with ab3f2684041e19f28de1dd890f250986fd7effeec3fc00b7589acd7258fbcfed not found: ID does not exist" containerID="ab3f2684041e19f28de1dd890f250986fd7effeec3fc00b7589acd7258fbcfed" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.543423 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab3f2684041e19f28de1dd890f250986fd7effeec3fc00b7589acd7258fbcfed"} err="failed to get container status \"ab3f2684041e19f28de1dd890f250986fd7effeec3fc00b7589acd7258fbcfed\": rpc error: code = NotFound desc = could not find container \"ab3f2684041e19f28de1dd890f250986fd7effeec3fc00b7589acd7258fbcfed\": container with ID starting with ab3f2684041e19f28de1dd890f250986fd7effeec3fc00b7589acd7258fbcfed not found: ID does not exist" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.543479 5136 scope.go:117] "RemoveContainer" containerID="af7ccb5884d8d6850a0338f1d69afbf8f5ee1cd293c6cfe06e1d05671daaf8ee" Mar 20 08:14:57 crc kubenswrapper[5136]: E0320 08:14:57.544045 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af7ccb5884d8d6850a0338f1d69afbf8f5ee1cd293c6cfe06e1d05671daaf8ee\": container with ID starting with af7ccb5884d8d6850a0338f1d69afbf8f5ee1cd293c6cfe06e1d05671daaf8ee not found: ID does not exist" containerID="af7ccb5884d8d6850a0338f1d69afbf8f5ee1cd293c6cfe06e1d05671daaf8ee" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.544078 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af7ccb5884d8d6850a0338f1d69afbf8f5ee1cd293c6cfe06e1d05671daaf8ee"} err="failed to get container status \"af7ccb5884d8d6850a0338f1d69afbf8f5ee1cd293c6cfe06e1d05671daaf8ee\": rpc error: code = NotFound desc = could not find container \"af7ccb5884d8d6850a0338f1d69afbf8f5ee1cd293c6cfe06e1d05671daaf8ee\": container with ID 
starting with af7ccb5884d8d6850a0338f1d69afbf8f5ee1cd293c6cfe06e1d05671daaf8ee not found: ID does not exist" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.544123 5136 scope.go:117] "RemoveContainer" containerID="84b89ab79cf289c6e3a1407fa37c81d9fd2d3657b7ef6f42c22ab525f9790093" Mar 20 08:14:57 crc kubenswrapper[5136]: E0320 08:14:57.544645 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84b89ab79cf289c6e3a1407fa37c81d9fd2d3657b7ef6f42c22ab525f9790093\": container with ID starting with 84b89ab79cf289c6e3a1407fa37c81d9fd2d3657b7ef6f42c22ab525f9790093 not found: ID does not exist" containerID="84b89ab79cf289c6e3a1407fa37c81d9fd2d3657b7ef6f42c22ab525f9790093" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.544726 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84b89ab79cf289c6e3a1407fa37c81d9fd2d3657b7ef6f42c22ab525f9790093"} err="failed to get container status \"84b89ab79cf289c6e3a1407fa37c81d9fd2d3657b7ef6f42c22ab525f9790093\": rpc error: code = NotFound desc = could not find container \"84b89ab79cf289c6e3a1407fa37c81d9fd2d3657b7ef6f42c22ab525f9790093\": container with ID starting with 84b89ab79cf289c6e3a1407fa37c81d9fd2d3657b7ef6f42c22ab525f9790093 not found: ID does not exist" Mar 20 08:14:58 crc kubenswrapper[5136]: I0320 08:14:58.410924 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c24947f-946c-46e2-b9f5-0ec67f66a8fa" path="/var/lib/kubelet/pods/7c24947f-946c-46e2-b9f5-0ec67f66a8fa/volumes" Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.155519 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms"] Mar 20 08:15:00 crc kubenswrapper[5136]: E0320 08:15:00.155893 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c24947f-946c-46e2-b9f5-0ec67f66a8fa" containerName="extract-content" Mar 20 
08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.155913 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c24947f-946c-46e2-b9f5-0ec67f66a8fa" containerName="extract-content" Mar 20 08:15:00 crc kubenswrapper[5136]: E0320 08:15:00.155949 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c24947f-946c-46e2-b9f5-0ec67f66a8fa" containerName="registry-server" Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.155961 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c24947f-946c-46e2-b9f5-0ec67f66a8fa" containerName="registry-server" Mar 20 08:15:00 crc kubenswrapper[5136]: E0320 08:15:00.155983 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c24947f-946c-46e2-b9f5-0ec67f66a8fa" containerName="extract-utilities" Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.155993 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c24947f-946c-46e2-b9f5-0ec67f66a8fa" containerName="extract-utilities" Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.156227 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c24947f-946c-46e2-b9f5-0ec67f66a8fa" containerName="registry-server" Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.156909 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms" Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.159188 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.159864 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.172514 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms"] Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.224134 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02161682-1526-46e0-aaa6-d09c6758943c-config-volume\") pod \"collect-profiles-29566575-8lvms\" (UID: \"02161682-1526-46e0-aaa6-d09c6758943c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms" Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.224267 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02161682-1526-46e0-aaa6-d09c6758943c-secret-volume\") pod \"collect-profiles-29566575-8lvms\" (UID: \"02161682-1526-46e0-aaa6-d09c6758943c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms" Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.224489 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwnk4\" (UniqueName: \"kubernetes.io/projected/02161682-1526-46e0-aaa6-d09c6758943c-kube-api-access-hwnk4\") pod \"collect-profiles-29566575-8lvms\" (UID: \"02161682-1526-46e0-aaa6-d09c6758943c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms" Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.326295 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02161682-1526-46e0-aaa6-d09c6758943c-secret-volume\") pod \"collect-profiles-29566575-8lvms\" (UID: \"02161682-1526-46e0-aaa6-d09c6758943c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms" Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.326415 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwnk4\" (UniqueName: \"kubernetes.io/projected/02161682-1526-46e0-aaa6-d09c6758943c-kube-api-access-hwnk4\") pod \"collect-profiles-29566575-8lvms\" (UID: \"02161682-1526-46e0-aaa6-d09c6758943c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms" Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.326568 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02161682-1526-46e0-aaa6-d09c6758943c-config-volume\") pod \"collect-profiles-29566575-8lvms\" (UID: \"02161682-1526-46e0-aaa6-d09c6758943c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms" Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.327585 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02161682-1526-46e0-aaa6-d09c6758943c-config-volume\") pod \"collect-profiles-29566575-8lvms\" (UID: \"02161682-1526-46e0-aaa6-d09c6758943c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms" Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.339596 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/02161682-1526-46e0-aaa6-d09c6758943c-secret-volume\") pod \"collect-profiles-29566575-8lvms\" (UID: \"02161682-1526-46e0-aaa6-d09c6758943c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms" Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.348595 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwnk4\" (UniqueName: \"kubernetes.io/projected/02161682-1526-46e0-aaa6-d09c6758943c-kube-api-access-hwnk4\") pod \"collect-profiles-29566575-8lvms\" (UID: \"02161682-1526-46e0-aaa6-d09c6758943c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms" Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.487248 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms" Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.742047 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms"] Mar 20 08:15:01 crc kubenswrapper[5136]: I0320 08:15:01.478291 5136 generic.go:334] "Generic (PLEG): container finished" podID="02161682-1526-46e0-aaa6-d09c6758943c" containerID="d7c966f182c94b6eabaca701ac9e2f115b1d66510a14ffb108fa112317b9c2d8" exitCode=0 Mar 20 08:15:01 crc kubenswrapper[5136]: I0320 08:15:01.478343 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms" event={"ID":"02161682-1526-46e0-aaa6-d09c6758943c","Type":"ContainerDied","Data":"d7c966f182c94b6eabaca701ac9e2f115b1d66510a14ffb108fa112317b9c2d8"} Mar 20 08:15:01 crc kubenswrapper[5136]: I0320 08:15:01.478667 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms" 
event={"ID":"02161682-1526-46e0-aaa6-d09c6758943c","Type":"ContainerStarted","Data":"dfa3996b6fa9b12331f1b2253e9996c6494379a9aacdaf656de0c773aca3ee4c"} Mar 20 08:15:02 crc kubenswrapper[5136]: I0320 08:15:02.838587 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms" Mar 20 08:15:02 crc kubenswrapper[5136]: I0320 08:15:02.985151 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02161682-1526-46e0-aaa6-d09c6758943c-secret-volume\") pod \"02161682-1526-46e0-aaa6-d09c6758943c\" (UID: \"02161682-1526-46e0-aaa6-d09c6758943c\") " Mar 20 08:15:02 crc kubenswrapper[5136]: I0320 08:15:02.985296 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwnk4\" (UniqueName: \"kubernetes.io/projected/02161682-1526-46e0-aaa6-d09c6758943c-kube-api-access-hwnk4\") pod \"02161682-1526-46e0-aaa6-d09c6758943c\" (UID: \"02161682-1526-46e0-aaa6-d09c6758943c\") " Mar 20 08:15:02 crc kubenswrapper[5136]: I0320 08:15:02.985408 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02161682-1526-46e0-aaa6-d09c6758943c-config-volume\") pod \"02161682-1526-46e0-aaa6-d09c6758943c\" (UID: \"02161682-1526-46e0-aaa6-d09c6758943c\") " Mar 20 08:15:02 crc kubenswrapper[5136]: I0320 08:15:02.987770 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02161682-1526-46e0-aaa6-d09c6758943c-config-volume" (OuterVolumeSpecName: "config-volume") pod "02161682-1526-46e0-aaa6-d09c6758943c" (UID: "02161682-1526-46e0-aaa6-d09c6758943c"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:15:02 crc kubenswrapper[5136]: I0320 08:15:02.993263 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02161682-1526-46e0-aaa6-d09c6758943c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "02161682-1526-46e0-aaa6-d09c6758943c" (UID: "02161682-1526-46e0-aaa6-d09c6758943c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:15:02 crc kubenswrapper[5136]: I0320 08:15:02.993892 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02161682-1526-46e0-aaa6-d09c6758943c-kube-api-access-hwnk4" (OuterVolumeSpecName: "kube-api-access-hwnk4") pod "02161682-1526-46e0-aaa6-d09c6758943c" (UID: "02161682-1526-46e0-aaa6-d09c6758943c"). InnerVolumeSpecName "kube-api-access-hwnk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:15:03 crc kubenswrapper[5136]: I0320 08:15:03.087432 5136 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02161682-1526-46e0-aaa6-d09c6758943c-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 08:15:03 crc kubenswrapper[5136]: I0320 08:15:03.087480 5136 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02161682-1526-46e0-aaa6-d09c6758943c-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 08:15:03 crc kubenswrapper[5136]: I0320 08:15:03.087533 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwnk4\" (UniqueName: \"kubernetes.io/projected/02161682-1526-46e0-aaa6-d09c6758943c-kube-api-access-hwnk4\") on node \"crc\" DevicePath \"\"" Mar 20 08:15:03 crc kubenswrapper[5136]: I0320 08:15:03.498055 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms" 
event={"ID":"02161682-1526-46e0-aaa6-d09c6758943c","Type":"ContainerDied","Data":"dfa3996b6fa9b12331f1b2253e9996c6494379a9aacdaf656de0c773aca3ee4c"} Mar 20 08:15:03 crc kubenswrapper[5136]: I0320 08:15:03.498579 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfa3996b6fa9b12331f1b2253e9996c6494379a9aacdaf656de0c773aca3ee4c" Mar 20 08:15:03 crc kubenswrapper[5136]: I0320 08:15:03.498162 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms" Mar 20 08:15:03 crc kubenswrapper[5136]: I0320 08:15:03.922630 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz"] Mar 20 08:15:03 crc kubenswrapper[5136]: I0320 08:15:03.927935 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz"] Mar 20 08:15:04 crc kubenswrapper[5136]: I0320 08:15:04.404565 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d251ba65-cac2-4d94-b882-672d97a85bc7" path="/var/lib/kubelet/pods/d251ba65-cac2-4d94-b882-672d97a85bc7/volumes" Mar 20 08:15:15 crc kubenswrapper[5136]: I0320 08:15:15.822144 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:15:15 crc kubenswrapper[5136]: I0320 08:15:15.822721 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:15:36 crc 
kubenswrapper[5136]: I0320 08:15:36.513756 5136 scope.go:117] "RemoveContainer" containerID="ca492a12ee4dfef81804d9a43645add86ef8ab0ce16812e4c74a09d17ae0ea3c" Mar 20 08:15:45 crc kubenswrapper[5136]: I0320 08:15:45.822340 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:15:45 crc kubenswrapper[5136]: I0320 08:15:45.823390 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:15:45 crc kubenswrapper[5136]: I0320 08:15:45.823460 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 08:15:45 crc kubenswrapper[5136]: I0320 08:15:45.824629 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dd2ade7d5e861c64b69837aa7e42e6683017e086bd68cbfb02b7f2324fc9da14"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 08:15:45 crc kubenswrapper[5136]: I0320 08:15:45.824773 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://dd2ade7d5e861c64b69837aa7e42e6683017e086bd68cbfb02b7f2324fc9da14" gracePeriod=600 Mar 20 08:15:46 crc kubenswrapper[5136]: I0320 
08:15:46.852537 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="dd2ade7d5e861c64b69837aa7e42e6683017e086bd68cbfb02b7f2324fc9da14" exitCode=0 Mar 20 08:15:46 crc kubenswrapper[5136]: I0320 08:15:46.852745 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"dd2ade7d5e861c64b69837aa7e42e6683017e086bd68cbfb02b7f2324fc9da14"} Mar 20 08:15:46 crc kubenswrapper[5136]: I0320 08:15:46.853045 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2"} Mar 20 08:15:46 crc kubenswrapper[5136]: I0320 08:15:46.853080 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.168278 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566576-lnp57"] Mar 20 08:16:00 crc kubenswrapper[5136]: E0320 08:16:00.170899 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02161682-1526-46e0-aaa6-d09c6758943c" containerName="collect-profiles" Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.171113 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="02161682-1526-46e0-aaa6-d09c6758943c" containerName="collect-profiles" Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.171541 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="02161682-1526-46e0-aaa6-d09c6758943c" containerName="collect-profiles" Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.172480 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566576-lnp57" Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.175753 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.176167 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.179159 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.180103 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566576-lnp57"] Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.357569 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vwl7\" (UniqueName: \"kubernetes.io/projected/cc65bb98-68d8-471f-82de-50eba3ccfd7d-kube-api-access-6vwl7\") pod \"auto-csr-approver-29566576-lnp57\" (UID: \"cc65bb98-68d8-471f-82de-50eba3ccfd7d\") " pod="openshift-infra/auto-csr-approver-29566576-lnp57" Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.459386 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vwl7\" (UniqueName: \"kubernetes.io/projected/cc65bb98-68d8-471f-82de-50eba3ccfd7d-kube-api-access-6vwl7\") pod \"auto-csr-approver-29566576-lnp57\" (UID: \"cc65bb98-68d8-471f-82de-50eba3ccfd7d\") " pod="openshift-infra/auto-csr-approver-29566576-lnp57" Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.486856 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vwl7\" (UniqueName: \"kubernetes.io/projected/cc65bb98-68d8-471f-82de-50eba3ccfd7d-kube-api-access-6vwl7\") pod \"auto-csr-approver-29566576-lnp57\" (UID: \"cc65bb98-68d8-471f-82de-50eba3ccfd7d\") " 
pod="openshift-infra/auto-csr-approver-29566576-lnp57" Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.497896 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566576-lnp57" Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.810280 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566576-lnp57"] Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.962773 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566576-lnp57" event={"ID":"cc65bb98-68d8-471f-82de-50eba3ccfd7d","Type":"ContainerStarted","Data":"3b60df99619254a8fb5e9271c49f5f3626f2ec90aa4ae01070cc7bdd6c03e997"} Mar 20 08:16:01 crc kubenswrapper[5136]: I0320 08:16:01.971125 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566576-lnp57" event={"ID":"cc65bb98-68d8-471f-82de-50eba3ccfd7d","Type":"ContainerStarted","Data":"62cda2bd643833622def1d9629824c17e3df9ab31f50b1b04bb053644f55653c"} Mar 20 08:16:01 crc kubenswrapper[5136]: I0320 08:16:01.986486 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566576-lnp57" podStartSLOduration=1.191945061 podStartE2EDuration="1.986455182s" podCreationTimestamp="2026-03-20 08:16:00 +0000 UTC" firstStartedPulling="2026-03-20 08:16:00.815047379 +0000 UTC m=+5193.074358530" lastFinishedPulling="2026-03-20 08:16:01.60955746 +0000 UTC m=+5193.868868651" observedRunningTime="2026-03-20 08:16:01.985327098 +0000 UTC m=+5194.244638259" watchObservedRunningTime="2026-03-20 08:16:01.986455182 +0000 UTC m=+5194.245766333" Mar 20 08:16:02 crc kubenswrapper[5136]: I0320 08:16:02.981957 5136 generic.go:334] "Generic (PLEG): container finished" podID="cc65bb98-68d8-471f-82de-50eba3ccfd7d" containerID="62cda2bd643833622def1d9629824c17e3df9ab31f50b1b04bb053644f55653c" exitCode=0 Mar 20 08:16:02 crc 
kubenswrapper[5136]: I0320 08:16:02.982090 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566576-lnp57" event={"ID":"cc65bb98-68d8-471f-82de-50eba3ccfd7d","Type":"ContainerDied","Data":"62cda2bd643833622def1d9629824c17e3df9ab31f50b1b04bb053644f55653c"} Mar 20 08:16:04 crc kubenswrapper[5136]: I0320 08:16:04.263340 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566576-lnp57" Mar 20 08:16:04 crc kubenswrapper[5136]: I0320 08:16:04.411977 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vwl7\" (UniqueName: \"kubernetes.io/projected/cc65bb98-68d8-471f-82de-50eba3ccfd7d-kube-api-access-6vwl7\") pod \"cc65bb98-68d8-471f-82de-50eba3ccfd7d\" (UID: \"cc65bb98-68d8-471f-82de-50eba3ccfd7d\") " Mar 20 08:16:04 crc kubenswrapper[5136]: I0320 08:16:04.420955 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc65bb98-68d8-471f-82de-50eba3ccfd7d-kube-api-access-6vwl7" (OuterVolumeSpecName: "kube-api-access-6vwl7") pod "cc65bb98-68d8-471f-82de-50eba3ccfd7d" (UID: "cc65bb98-68d8-471f-82de-50eba3ccfd7d"). InnerVolumeSpecName "kube-api-access-6vwl7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:16:04 crc kubenswrapper[5136]: I0320 08:16:04.513356 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vwl7\" (UniqueName: \"kubernetes.io/projected/cc65bb98-68d8-471f-82de-50eba3ccfd7d-kube-api-access-6vwl7\") on node \"crc\" DevicePath \"\"" Mar 20 08:16:05 crc kubenswrapper[5136]: I0320 08:16:05.011323 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566576-lnp57" event={"ID":"cc65bb98-68d8-471f-82de-50eba3ccfd7d","Type":"ContainerDied","Data":"3b60df99619254a8fb5e9271c49f5f3626f2ec90aa4ae01070cc7bdd6c03e997"} Mar 20 08:16:05 crc kubenswrapper[5136]: I0320 08:16:05.011368 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b60df99619254a8fb5e9271c49f5f3626f2ec90aa4ae01070cc7bdd6c03e997" Mar 20 08:16:05 crc kubenswrapper[5136]: I0320 08:16:05.011437 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566576-lnp57" Mar 20 08:16:05 crc kubenswrapper[5136]: I0320 08:16:05.084179 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566570-hgkrr"] Mar 20 08:16:05 crc kubenswrapper[5136]: I0320 08:16:05.091906 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566570-hgkrr"] Mar 20 08:16:06 crc kubenswrapper[5136]: I0320 08:16:06.405554 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04302f0d-411c-49b0-8682-e64bb02c697d" path="/var/lib/kubelet/pods/04302f0d-411c-49b0-8682-e64bb02c697d/volumes" Mar 20 08:16:36 crc kubenswrapper[5136]: I0320 08:16:36.579888 5136 scope.go:117] "RemoveContainer" containerID="79d8ba8cc4cde24163fe3c378a9767a3b425536e543bf53f50b55aaf7f5ba019" Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.148154 5136 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29566578-ck7bv"] Mar 20 08:18:00 crc kubenswrapper[5136]: E0320 08:18:00.149286 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc65bb98-68d8-471f-82de-50eba3ccfd7d" containerName="oc" Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.149309 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc65bb98-68d8-471f-82de-50eba3ccfd7d" containerName="oc" Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.149567 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc65bb98-68d8-471f-82de-50eba3ccfd7d" containerName="oc" Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.150334 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566578-ck7bv" Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.153460 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.153800 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.154624 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.170846 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566578-ck7bv"] Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.292244 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkvlx\" (UniqueName: \"kubernetes.io/projected/e55d749f-3c3e-4558-bf74-28a388d382bf-kube-api-access-xkvlx\") pod \"auto-csr-approver-29566578-ck7bv\" (UID: \"e55d749f-3c3e-4558-bf74-28a388d382bf\") " pod="openshift-infra/auto-csr-approver-29566578-ck7bv" Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 
08:18:00.393552 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkvlx\" (UniqueName: \"kubernetes.io/projected/e55d749f-3c3e-4558-bf74-28a388d382bf-kube-api-access-xkvlx\") pod \"auto-csr-approver-29566578-ck7bv\" (UID: \"e55d749f-3c3e-4558-bf74-28a388d382bf\") " pod="openshift-infra/auto-csr-approver-29566578-ck7bv" Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.416356 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkvlx\" (UniqueName: \"kubernetes.io/projected/e55d749f-3c3e-4558-bf74-28a388d382bf-kube-api-access-xkvlx\") pod \"auto-csr-approver-29566578-ck7bv\" (UID: \"e55d749f-3c3e-4558-bf74-28a388d382bf\") " pod="openshift-infra/auto-csr-approver-29566578-ck7bv" Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.474964 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566578-ck7bv" Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.865050 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566578-ck7bv"] Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.948333 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566578-ck7bv" event={"ID":"e55d749f-3c3e-4558-bf74-28a388d382bf","Type":"ContainerStarted","Data":"a49c32499696a833258bc75f86dccb4a476c756aeddc3a977aabab111a094291"} Mar 20 08:18:02 crc kubenswrapper[5136]: I0320 08:18:02.969372 5136 generic.go:334] "Generic (PLEG): container finished" podID="e55d749f-3c3e-4558-bf74-28a388d382bf" containerID="4e591fa4a3dca1b218c4cdb5e1a7771e6b255424f6feef9dba08785df4cee785" exitCode=0 Mar 20 08:18:02 crc kubenswrapper[5136]: I0320 08:18:02.969455 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566578-ck7bv" 
event={"ID":"e55d749f-3c3e-4558-bf74-28a388d382bf","Type":"ContainerDied","Data":"4e591fa4a3dca1b218c4cdb5e1a7771e6b255424f6feef9dba08785df4cee785"} Mar 20 08:18:04 crc kubenswrapper[5136]: I0320 08:18:04.264062 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566578-ck7bv" Mar 20 08:18:04 crc kubenswrapper[5136]: I0320 08:18:04.353002 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkvlx\" (UniqueName: \"kubernetes.io/projected/e55d749f-3c3e-4558-bf74-28a388d382bf-kube-api-access-xkvlx\") pod \"e55d749f-3c3e-4558-bf74-28a388d382bf\" (UID: \"e55d749f-3c3e-4558-bf74-28a388d382bf\") " Mar 20 08:18:04 crc kubenswrapper[5136]: I0320 08:18:04.357567 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e55d749f-3c3e-4558-bf74-28a388d382bf-kube-api-access-xkvlx" (OuterVolumeSpecName: "kube-api-access-xkvlx") pod "e55d749f-3c3e-4558-bf74-28a388d382bf" (UID: "e55d749f-3c3e-4558-bf74-28a388d382bf"). InnerVolumeSpecName "kube-api-access-xkvlx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:18:04 crc kubenswrapper[5136]: I0320 08:18:04.455406 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkvlx\" (UniqueName: \"kubernetes.io/projected/e55d749f-3c3e-4558-bf74-28a388d382bf-kube-api-access-xkvlx\") on node \"crc\" DevicePath \"\"" Mar 20 08:18:04 crc kubenswrapper[5136]: I0320 08:18:04.993553 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566578-ck7bv" event={"ID":"e55d749f-3c3e-4558-bf74-28a388d382bf","Type":"ContainerDied","Data":"a49c32499696a833258bc75f86dccb4a476c756aeddc3a977aabab111a094291"} Mar 20 08:18:04 crc kubenswrapper[5136]: I0320 08:18:04.993992 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a49c32499696a833258bc75f86dccb4a476c756aeddc3a977aabab111a094291" Mar 20 08:18:04 crc kubenswrapper[5136]: I0320 08:18:04.993664 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566578-ck7bv" Mar 20 08:18:05 crc kubenswrapper[5136]: I0320 08:18:05.346726 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566572-59rnc"] Mar 20 08:18:05 crc kubenswrapper[5136]: I0320 08:18:05.353474 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566572-59rnc"] Mar 20 08:18:06 crc kubenswrapper[5136]: I0320 08:18:06.408363 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e251183d-ffbf-414f-9d88-5830637722be" path="/var/lib/kubelet/pods/e251183d-ffbf-414f-9d88-5830637722be/volumes" Mar 20 08:18:15 crc kubenswrapper[5136]: I0320 08:18:15.821723 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 20 08:18:15 crc kubenswrapper[5136]: I0320 08:18:15.822035 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:18:24 crc kubenswrapper[5136]: I0320 08:18:24.977349 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2whxt"] Mar 20 08:18:24 crc kubenswrapper[5136]: E0320 08:18:24.978319 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e55d749f-3c3e-4558-bf74-28a388d382bf" containerName="oc" Mar 20 08:18:24 crc kubenswrapper[5136]: I0320 08:18:24.978336 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e55d749f-3c3e-4558-bf74-28a388d382bf" containerName="oc" Mar 20 08:18:24 crc kubenswrapper[5136]: I0320 08:18:24.978554 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e55d749f-3c3e-4558-bf74-28a388d382bf" containerName="oc" Mar 20 08:18:24 crc kubenswrapper[5136]: I0320 08:18:24.979712 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2whxt" Mar 20 08:18:24 crc kubenswrapper[5136]: I0320 08:18:24.993389 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2whxt"] Mar 20 08:18:25 crc kubenswrapper[5136]: I0320 08:18:25.055011 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45d705e0-1d52-414a-95c1-d625388034ae-catalog-content\") pod \"redhat-operators-2whxt\" (UID: \"45d705e0-1d52-414a-95c1-d625388034ae\") " pod="openshift-marketplace/redhat-operators-2whxt" Mar 20 08:18:25 crc kubenswrapper[5136]: I0320 08:18:25.055072 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rjt5\" (UniqueName: \"kubernetes.io/projected/45d705e0-1d52-414a-95c1-d625388034ae-kube-api-access-2rjt5\") pod \"redhat-operators-2whxt\" (UID: \"45d705e0-1d52-414a-95c1-d625388034ae\") " pod="openshift-marketplace/redhat-operators-2whxt" Mar 20 08:18:25 crc kubenswrapper[5136]: I0320 08:18:25.055178 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45d705e0-1d52-414a-95c1-d625388034ae-utilities\") pod \"redhat-operators-2whxt\" (UID: \"45d705e0-1d52-414a-95c1-d625388034ae\") " pod="openshift-marketplace/redhat-operators-2whxt" Mar 20 08:18:25 crc kubenswrapper[5136]: I0320 08:18:25.156755 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45d705e0-1d52-414a-95c1-d625388034ae-catalog-content\") pod \"redhat-operators-2whxt\" (UID: \"45d705e0-1d52-414a-95c1-d625388034ae\") " pod="openshift-marketplace/redhat-operators-2whxt" Mar 20 08:18:25 crc kubenswrapper[5136]: I0320 08:18:25.156833 5136 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-2rjt5\" (UniqueName: \"kubernetes.io/projected/45d705e0-1d52-414a-95c1-d625388034ae-kube-api-access-2rjt5\") pod \"redhat-operators-2whxt\" (UID: \"45d705e0-1d52-414a-95c1-d625388034ae\") " pod="openshift-marketplace/redhat-operators-2whxt" Mar 20 08:18:25 crc kubenswrapper[5136]: I0320 08:18:25.156895 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45d705e0-1d52-414a-95c1-d625388034ae-utilities\") pod \"redhat-operators-2whxt\" (UID: \"45d705e0-1d52-414a-95c1-d625388034ae\") " pod="openshift-marketplace/redhat-operators-2whxt" Mar 20 08:18:25 crc kubenswrapper[5136]: I0320 08:18:25.157358 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45d705e0-1d52-414a-95c1-d625388034ae-catalog-content\") pod \"redhat-operators-2whxt\" (UID: \"45d705e0-1d52-414a-95c1-d625388034ae\") " pod="openshift-marketplace/redhat-operators-2whxt" Mar 20 08:18:25 crc kubenswrapper[5136]: I0320 08:18:25.157393 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45d705e0-1d52-414a-95c1-d625388034ae-utilities\") pod \"redhat-operators-2whxt\" (UID: \"45d705e0-1d52-414a-95c1-d625388034ae\") " pod="openshift-marketplace/redhat-operators-2whxt" Mar 20 08:18:25 crc kubenswrapper[5136]: I0320 08:18:25.175702 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rjt5\" (UniqueName: \"kubernetes.io/projected/45d705e0-1d52-414a-95c1-d625388034ae-kube-api-access-2rjt5\") pod \"redhat-operators-2whxt\" (UID: \"45d705e0-1d52-414a-95c1-d625388034ae\") " pod="openshift-marketplace/redhat-operators-2whxt" Mar 20 08:18:25 crc kubenswrapper[5136]: I0320 08:18:25.307965 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2whxt" Mar 20 08:18:25 crc kubenswrapper[5136]: I0320 08:18:25.769263 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2whxt"] Mar 20 08:18:26 crc kubenswrapper[5136]: I0320 08:18:26.143853 5136 generic.go:334] "Generic (PLEG): container finished" podID="45d705e0-1d52-414a-95c1-d625388034ae" containerID="c0fe46c7f40beefa74bdd7879eb4d7de32376ed40fdd587487c7654069b3605e" exitCode=0 Mar 20 08:18:26 crc kubenswrapper[5136]: I0320 08:18:26.143938 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2whxt" event={"ID":"45d705e0-1d52-414a-95c1-d625388034ae","Type":"ContainerDied","Data":"c0fe46c7f40beefa74bdd7879eb4d7de32376ed40fdd587487c7654069b3605e"} Mar 20 08:18:26 crc kubenswrapper[5136]: I0320 08:18:26.144059 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2whxt" event={"ID":"45d705e0-1d52-414a-95c1-d625388034ae","Type":"ContainerStarted","Data":"868823b9edf432ae2a6b2585db0f683f4f0e4f95b09682fbd28eda38b3c71ee1"} Mar 20 08:18:27 crc kubenswrapper[5136]: I0320 08:18:27.151737 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2whxt" event={"ID":"45d705e0-1d52-414a-95c1-d625388034ae","Type":"ContainerStarted","Data":"936e4342d831aaf10446aba0c4a9e359a71632144ccd50a2b23b48530c6ba66e"} Mar 20 08:18:28 crc kubenswrapper[5136]: I0320 08:18:28.160149 5136 generic.go:334] "Generic (PLEG): container finished" podID="45d705e0-1d52-414a-95c1-d625388034ae" containerID="936e4342d831aaf10446aba0c4a9e359a71632144ccd50a2b23b48530c6ba66e" exitCode=0 Mar 20 08:18:28 crc kubenswrapper[5136]: I0320 08:18:28.160202 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2whxt" 
event={"ID":"45d705e0-1d52-414a-95c1-d625388034ae","Type":"ContainerDied","Data":"936e4342d831aaf10446aba0c4a9e359a71632144ccd50a2b23b48530c6ba66e"} Mar 20 08:18:29 crc kubenswrapper[5136]: I0320 08:18:29.168162 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2whxt" event={"ID":"45d705e0-1d52-414a-95c1-d625388034ae","Type":"ContainerStarted","Data":"b31adcbc2f2a9875bb6e8e8cadb976ad6fe39c73d6153ce09e6cdc804ea2ad6f"} Mar 20 08:18:29 crc kubenswrapper[5136]: I0320 08:18:29.190187 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2whxt" podStartSLOduration=2.657079852 podStartE2EDuration="5.190168411s" podCreationTimestamp="2026-03-20 08:18:24 +0000 UTC" firstStartedPulling="2026-03-20 08:18:26.145809053 +0000 UTC m=+5338.405120204" lastFinishedPulling="2026-03-20 08:18:28.678897612 +0000 UTC m=+5340.938208763" observedRunningTime="2026-03-20 08:18:29.184677861 +0000 UTC m=+5341.443989012" watchObservedRunningTime="2026-03-20 08:18:29.190168411 +0000 UTC m=+5341.449479562" Mar 20 08:18:35 crc kubenswrapper[5136]: I0320 08:18:35.308337 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2whxt" Mar 20 08:18:35 crc kubenswrapper[5136]: I0320 08:18:35.308992 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2whxt" Mar 20 08:18:35 crc kubenswrapper[5136]: I0320 08:18:35.381442 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2whxt" Mar 20 08:18:36 crc kubenswrapper[5136]: I0320 08:18:36.287118 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2whxt" Mar 20 08:18:36 crc kubenswrapper[5136]: I0320 08:18:36.341515 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-2whxt"] Mar 20 08:18:36 crc kubenswrapper[5136]: I0320 08:18:36.655397 5136 scope.go:117] "RemoveContainer" containerID="b9689c90ff59dd42fc4279d62977b62b1e34234f2f912a119c0aaec47a889e16" Mar 20 08:18:38 crc kubenswrapper[5136]: I0320 08:18:38.250667 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2whxt" podUID="45d705e0-1d52-414a-95c1-d625388034ae" containerName="registry-server" containerID="cri-o://b31adcbc2f2a9875bb6e8e8cadb976ad6fe39c73d6153ce09e6cdc804ea2ad6f" gracePeriod=2 Mar 20 08:18:39 crc kubenswrapper[5136]: I0320 08:18:39.263328 5136 generic.go:334] "Generic (PLEG): container finished" podID="45d705e0-1d52-414a-95c1-d625388034ae" containerID="b31adcbc2f2a9875bb6e8e8cadb976ad6fe39c73d6153ce09e6cdc804ea2ad6f" exitCode=0 Mar 20 08:18:39 crc kubenswrapper[5136]: I0320 08:18:39.263384 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2whxt" event={"ID":"45d705e0-1d52-414a-95c1-d625388034ae","Type":"ContainerDied","Data":"b31adcbc2f2a9875bb6e8e8cadb976ad6fe39c73d6153ce09e6cdc804ea2ad6f"} Mar 20 08:18:39 crc kubenswrapper[5136]: I0320 08:18:39.735051 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2whxt" Mar 20 08:18:39 crc kubenswrapper[5136]: I0320 08:18:39.865960 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rjt5\" (UniqueName: \"kubernetes.io/projected/45d705e0-1d52-414a-95c1-d625388034ae-kube-api-access-2rjt5\") pod \"45d705e0-1d52-414a-95c1-d625388034ae\" (UID: \"45d705e0-1d52-414a-95c1-d625388034ae\") " Mar 20 08:18:39 crc kubenswrapper[5136]: I0320 08:18:39.866102 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45d705e0-1d52-414a-95c1-d625388034ae-utilities\") pod \"45d705e0-1d52-414a-95c1-d625388034ae\" (UID: \"45d705e0-1d52-414a-95c1-d625388034ae\") " Mar 20 08:18:39 crc kubenswrapper[5136]: I0320 08:18:39.866139 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45d705e0-1d52-414a-95c1-d625388034ae-catalog-content\") pod \"45d705e0-1d52-414a-95c1-d625388034ae\" (UID: \"45d705e0-1d52-414a-95c1-d625388034ae\") " Mar 20 08:18:39 crc kubenswrapper[5136]: I0320 08:18:39.867810 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45d705e0-1d52-414a-95c1-d625388034ae-utilities" (OuterVolumeSpecName: "utilities") pod "45d705e0-1d52-414a-95c1-d625388034ae" (UID: "45d705e0-1d52-414a-95c1-d625388034ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:18:39 crc kubenswrapper[5136]: I0320 08:18:39.871501 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45d705e0-1d52-414a-95c1-d625388034ae-kube-api-access-2rjt5" (OuterVolumeSpecName: "kube-api-access-2rjt5") pod "45d705e0-1d52-414a-95c1-d625388034ae" (UID: "45d705e0-1d52-414a-95c1-d625388034ae"). InnerVolumeSpecName "kube-api-access-2rjt5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:18:39 crc kubenswrapper[5136]: I0320 08:18:39.968054 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rjt5\" (UniqueName: \"kubernetes.io/projected/45d705e0-1d52-414a-95c1-d625388034ae-kube-api-access-2rjt5\") on node \"crc\" DevicePath \"\"" Mar 20 08:18:39 crc kubenswrapper[5136]: I0320 08:18:39.968337 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45d705e0-1d52-414a-95c1-d625388034ae-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:18:39 crc kubenswrapper[5136]: I0320 08:18:39.998477 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45d705e0-1d52-414a-95c1-d625388034ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45d705e0-1d52-414a-95c1-d625388034ae" (UID: "45d705e0-1d52-414a-95c1-d625388034ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:18:40 crc kubenswrapper[5136]: I0320 08:18:40.069564 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45d705e0-1d52-414a-95c1-d625388034ae-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:18:40 crc kubenswrapper[5136]: I0320 08:18:40.274217 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2whxt" event={"ID":"45d705e0-1d52-414a-95c1-d625388034ae","Type":"ContainerDied","Data":"868823b9edf432ae2a6b2585db0f683f4f0e4f95b09682fbd28eda38b3c71ee1"} Mar 20 08:18:40 crc kubenswrapper[5136]: I0320 08:18:40.274298 5136 scope.go:117] "RemoveContainer" containerID="b31adcbc2f2a9875bb6e8e8cadb976ad6fe39c73d6153ce09e6cdc804ea2ad6f" Mar 20 08:18:40 crc kubenswrapper[5136]: I0320 08:18:40.274426 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2whxt" Mar 20 08:18:40 crc kubenswrapper[5136]: I0320 08:18:40.297899 5136 scope.go:117] "RemoveContainer" containerID="936e4342d831aaf10446aba0c4a9e359a71632144ccd50a2b23b48530c6ba66e" Mar 20 08:18:40 crc kubenswrapper[5136]: I0320 08:18:40.323382 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2whxt"] Mar 20 08:18:40 crc kubenswrapper[5136]: I0320 08:18:40.332912 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2whxt"] Mar 20 08:18:40 crc kubenswrapper[5136]: I0320 08:18:40.338683 5136 scope.go:117] "RemoveContainer" containerID="c0fe46c7f40beefa74bdd7879eb4d7de32376ed40fdd587487c7654069b3605e" Mar 20 08:18:40 crc kubenswrapper[5136]: I0320 08:18:40.412730 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45d705e0-1d52-414a-95c1-d625388034ae" path="/var/lib/kubelet/pods/45d705e0-1d52-414a-95c1-d625388034ae/volumes" Mar 20 08:18:45 crc kubenswrapper[5136]: I0320 08:18:45.822161 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:18:45 crc kubenswrapper[5136]: I0320 08:18:45.822751 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:19:15 crc kubenswrapper[5136]: I0320 08:19:15.821710 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:19:15 crc kubenswrapper[5136]: I0320 08:19:15.822447 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:19:15 crc kubenswrapper[5136]: I0320 08:19:15.822512 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 08:19:15 crc kubenswrapper[5136]: I0320 08:19:15.823390 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 08:19:15 crc kubenswrapper[5136]: I0320 08:19:15.823459 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" gracePeriod=600 Mar 20 08:19:15 crc kubenswrapper[5136]: E0320 08:19:15.956341 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:19:16 crc kubenswrapper[5136]: I0320 08:19:16.563602 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" exitCode=0 Mar 20 08:19:16 crc kubenswrapper[5136]: I0320 08:19:16.563682 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2"} Mar 20 08:19:16 crc kubenswrapper[5136]: I0320 08:19:16.563742 5136 scope.go:117] "RemoveContainer" containerID="dd2ade7d5e861c64b69837aa7e42e6683017e086bd68cbfb02b7f2324fc9da14" Mar 20 08:19:16 crc kubenswrapper[5136]: I0320 08:19:16.564532 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:19:16 crc kubenswrapper[5136]: E0320 08:19:16.565029 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:19:28 crc kubenswrapper[5136]: I0320 08:19:28.408275 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:19:28 crc kubenswrapper[5136]: E0320 08:19:28.409025 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:19:41 crc kubenswrapper[5136]: I0320 08:19:41.396601 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:19:41 crc kubenswrapper[5136]: E0320 08:19:41.397617 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:19:53 crc kubenswrapper[5136]: I0320 08:19:53.397059 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:19:53 crc kubenswrapper[5136]: E0320 08:19:53.398062 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.139121 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566580-gxgs4"] Mar 20 08:20:00 crc kubenswrapper[5136]: E0320 08:20:00.139688 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d705e0-1d52-414a-95c1-d625388034ae" containerName="extract-utilities" Mar 20 08:20:00 crc 
kubenswrapper[5136]: I0320 08:20:00.139701 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d705e0-1d52-414a-95c1-d625388034ae" containerName="extract-utilities" Mar 20 08:20:00 crc kubenswrapper[5136]: E0320 08:20:00.139724 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d705e0-1d52-414a-95c1-d625388034ae" containerName="extract-content" Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.139731 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d705e0-1d52-414a-95c1-d625388034ae" containerName="extract-content" Mar 20 08:20:00 crc kubenswrapper[5136]: E0320 08:20:00.139744 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d705e0-1d52-414a-95c1-d625388034ae" containerName="registry-server" Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.139751 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d705e0-1d52-414a-95c1-d625388034ae" containerName="registry-server" Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.139910 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="45d705e0-1d52-414a-95c1-d625388034ae" containerName="registry-server" Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.140330 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566580-gxgs4" Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.142556 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.142957 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.143189 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.147511 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn9z8\" (UniqueName: \"kubernetes.io/projected/c13adcfb-f420-46c9-bbde-3350b761780e-kube-api-access-tn9z8\") pod \"auto-csr-approver-29566580-gxgs4\" (UID: \"c13adcfb-f420-46c9-bbde-3350b761780e\") " pod="openshift-infra/auto-csr-approver-29566580-gxgs4" Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.150297 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566580-gxgs4"] Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.249092 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn9z8\" (UniqueName: \"kubernetes.io/projected/c13adcfb-f420-46c9-bbde-3350b761780e-kube-api-access-tn9z8\") pod \"auto-csr-approver-29566580-gxgs4\" (UID: \"c13adcfb-f420-46c9-bbde-3350b761780e\") " pod="openshift-infra/auto-csr-approver-29566580-gxgs4" Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.265698 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn9z8\" (UniqueName: \"kubernetes.io/projected/c13adcfb-f420-46c9-bbde-3350b761780e-kube-api-access-tn9z8\") pod \"auto-csr-approver-29566580-gxgs4\" (UID: \"c13adcfb-f420-46c9-bbde-3350b761780e\") " 
pod="openshift-infra/auto-csr-approver-29566580-gxgs4" Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.462804 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566580-gxgs4" Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.891507 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566580-gxgs4"] Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.901636 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.916981 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566580-gxgs4" event={"ID":"c13adcfb-f420-46c9-bbde-3350b761780e","Type":"ContainerStarted","Data":"279516a51f639e814182eed36dc9ad81e44bcadd81cdd2484fd0d7eaccc139bf"} Mar 20 08:20:02 crc kubenswrapper[5136]: I0320 08:20:02.935514 5136 generic.go:334] "Generic (PLEG): container finished" podID="c13adcfb-f420-46c9-bbde-3350b761780e" containerID="49a3dc4dd1c8ed19d1dc39a8fdd22be54b526fe4de3c63ff0b1f4d1ea3e0a979" exitCode=0 Mar 20 08:20:02 crc kubenswrapper[5136]: I0320 08:20:02.935563 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566580-gxgs4" event={"ID":"c13adcfb-f420-46c9-bbde-3350b761780e","Type":"ContainerDied","Data":"49a3dc4dd1c8ed19d1dc39a8fdd22be54b526fe4de3c63ff0b1f4d1ea3e0a979"} Mar 20 08:20:04 crc kubenswrapper[5136]: I0320 08:20:04.242788 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566580-gxgs4" Mar 20 08:20:04 crc kubenswrapper[5136]: I0320 08:20:04.420435 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn9z8\" (UniqueName: \"kubernetes.io/projected/c13adcfb-f420-46c9-bbde-3350b761780e-kube-api-access-tn9z8\") pod \"c13adcfb-f420-46c9-bbde-3350b761780e\" (UID: \"c13adcfb-f420-46c9-bbde-3350b761780e\") " Mar 20 08:20:04 crc kubenswrapper[5136]: I0320 08:20:04.456012 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c13adcfb-f420-46c9-bbde-3350b761780e-kube-api-access-tn9z8" (OuterVolumeSpecName: "kube-api-access-tn9z8") pod "c13adcfb-f420-46c9-bbde-3350b761780e" (UID: "c13adcfb-f420-46c9-bbde-3350b761780e"). InnerVolumeSpecName "kube-api-access-tn9z8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:20:04 crc kubenswrapper[5136]: I0320 08:20:04.522435 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tn9z8\" (UniqueName: \"kubernetes.io/projected/c13adcfb-f420-46c9-bbde-3350b761780e-kube-api-access-tn9z8\") on node \"crc\" DevicePath \"\"" Mar 20 08:20:04 crc kubenswrapper[5136]: I0320 08:20:04.958882 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566580-gxgs4" event={"ID":"c13adcfb-f420-46c9-bbde-3350b761780e","Type":"ContainerDied","Data":"279516a51f639e814182eed36dc9ad81e44bcadd81cdd2484fd0d7eaccc139bf"} Mar 20 08:20:04 crc kubenswrapper[5136]: I0320 08:20:04.958931 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="279516a51f639e814182eed36dc9ad81e44bcadd81cdd2484fd0d7eaccc139bf" Mar 20 08:20:04 crc kubenswrapper[5136]: I0320 08:20:04.958988 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566580-gxgs4" Mar 20 08:20:05 crc kubenswrapper[5136]: I0320 08:20:05.311480 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566574-kgksp"] Mar 20 08:20:05 crc kubenswrapper[5136]: I0320 08:20:05.317259 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566574-kgksp"] Mar 20 08:20:06 crc kubenswrapper[5136]: I0320 08:20:06.405502 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa57e02b-5eb6-401e-997d-a451c285486e" path="/var/lib/kubelet/pods/aa57e02b-5eb6-401e-997d-a451c285486e/volumes" Mar 20 08:20:07 crc kubenswrapper[5136]: I0320 08:20:07.396941 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:20:07 crc kubenswrapper[5136]: E0320 08:20:07.397625 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:20:20 crc kubenswrapper[5136]: I0320 08:20:20.396334 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:20:20 crc kubenswrapper[5136]: E0320 08:20:20.396975 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" 
podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:20:35 crc kubenswrapper[5136]: I0320 08:20:35.397978 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:20:35 crc kubenswrapper[5136]: E0320 08:20:35.399476 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:20:36 crc kubenswrapper[5136]: I0320 08:20:36.761588 5136 scope.go:117] "RemoveContainer" containerID="f86c6f9b4ef14a2e3961efc76d0057737a2a18ff9c282b67f83d05bca07f26b4" Mar 20 08:20:47 crc kubenswrapper[5136]: I0320 08:20:47.397188 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:20:47 crc kubenswrapper[5136]: E0320 08:20:47.398290 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:21:02 crc kubenswrapper[5136]: I0320 08:21:02.396659 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:21:02 crc kubenswrapper[5136]: E0320 08:21:02.397391 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:21:17 crc kubenswrapper[5136]: I0320 08:21:17.396092 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:21:17 crc kubenswrapper[5136]: E0320 08:21:17.396941 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:21:32 crc kubenswrapper[5136]: I0320 08:21:32.396887 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:21:32 crc kubenswrapper[5136]: E0320 08:21:32.397612 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:21:44 crc kubenswrapper[5136]: I0320 08:21:44.399611 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:21:44 crc kubenswrapper[5136]: E0320 08:21:44.400274 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:21:57 crc kubenswrapper[5136]: I0320 08:21:57.396556 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:21:57 crc kubenswrapper[5136]: E0320 08:21:57.397419 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:21:57 crc kubenswrapper[5136]: I0320 08:21:57.959085 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-68w4f"] Mar 20 08:21:57 crc kubenswrapper[5136]: E0320 08:21:57.959364 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c13adcfb-f420-46c9-bbde-3350b761780e" containerName="oc" Mar 20 08:21:57 crc kubenswrapper[5136]: I0320 08:21:57.959377 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c13adcfb-f420-46c9-bbde-3350b761780e" containerName="oc" Mar 20 08:21:57 crc kubenswrapper[5136]: I0320 08:21:57.959515 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c13adcfb-f420-46c9-bbde-3350b761780e" containerName="oc" Mar 20 08:21:57 crc kubenswrapper[5136]: I0320 08:21:57.960452 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-68w4f" Mar 20 08:21:57 crc kubenswrapper[5136]: I0320 08:21:57.971867 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-68w4f"] Mar 20 08:21:58 crc kubenswrapper[5136]: I0320 08:21:58.058697 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9jp4\" (UniqueName: \"kubernetes.io/projected/080d55d0-394e-46a8-a6b9-7e6b7c5759de-kube-api-access-x9jp4\") pod \"certified-operators-68w4f\" (UID: \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\") " pod="openshift-marketplace/certified-operators-68w4f" Mar 20 08:21:58 crc kubenswrapper[5136]: I0320 08:21:58.059124 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080d55d0-394e-46a8-a6b9-7e6b7c5759de-catalog-content\") pod \"certified-operators-68w4f\" (UID: \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\") " pod="openshift-marketplace/certified-operators-68w4f" Mar 20 08:21:58 crc kubenswrapper[5136]: I0320 08:21:58.059247 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080d55d0-394e-46a8-a6b9-7e6b7c5759de-utilities\") pod \"certified-operators-68w4f\" (UID: \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\") " pod="openshift-marketplace/certified-operators-68w4f" Mar 20 08:21:58 crc kubenswrapper[5136]: I0320 08:21:58.160692 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080d55d0-394e-46a8-a6b9-7e6b7c5759de-utilities\") pod \"certified-operators-68w4f\" (UID: \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\") " pod="openshift-marketplace/certified-operators-68w4f" Mar 20 08:21:58 crc kubenswrapper[5136]: I0320 08:21:58.160787 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-x9jp4\" (UniqueName: \"kubernetes.io/projected/080d55d0-394e-46a8-a6b9-7e6b7c5759de-kube-api-access-x9jp4\") pod \"certified-operators-68w4f\" (UID: \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\") " pod="openshift-marketplace/certified-operators-68w4f" Mar 20 08:21:58 crc kubenswrapper[5136]: I0320 08:21:58.160938 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080d55d0-394e-46a8-a6b9-7e6b7c5759de-catalog-content\") pod \"certified-operators-68w4f\" (UID: \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\") " pod="openshift-marketplace/certified-operators-68w4f" Mar 20 08:21:58 crc kubenswrapper[5136]: I0320 08:21:58.161130 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080d55d0-394e-46a8-a6b9-7e6b7c5759de-utilities\") pod \"certified-operators-68w4f\" (UID: \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\") " pod="openshift-marketplace/certified-operators-68w4f" Mar 20 08:21:58 crc kubenswrapper[5136]: I0320 08:21:58.161508 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080d55d0-394e-46a8-a6b9-7e6b7c5759de-catalog-content\") pod \"certified-operators-68w4f\" (UID: \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\") " pod="openshift-marketplace/certified-operators-68w4f" Mar 20 08:21:58 crc kubenswrapper[5136]: I0320 08:21:58.179809 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9jp4\" (UniqueName: \"kubernetes.io/projected/080d55d0-394e-46a8-a6b9-7e6b7c5759de-kube-api-access-x9jp4\") pod \"certified-operators-68w4f\" (UID: \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\") " pod="openshift-marketplace/certified-operators-68w4f" Mar 20 08:21:58 crc kubenswrapper[5136]: I0320 08:21:58.285189 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-68w4f" Mar 20 08:21:58 crc kubenswrapper[5136]: I0320 08:21:58.699771 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-68w4f"] Mar 20 08:21:58 crc kubenswrapper[5136]: I0320 08:21:58.819533 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68w4f" event={"ID":"080d55d0-394e-46a8-a6b9-7e6b7c5759de","Type":"ContainerStarted","Data":"1f24178cc58c43551a444560f38eea1e86a4ab00ea0a51318dbd7c3bf67fbd4d"} Mar 20 08:21:59 crc kubenswrapper[5136]: I0320 08:21:59.832417 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68w4f" event={"ID":"080d55d0-394e-46a8-a6b9-7e6b7c5759de","Type":"ContainerDied","Data":"54f55647f8c1f36f63f1a2690a773b5fbd29005e3d797369620f4b4ca80730a2"} Mar 20 08:21:59 crc kubenswrapper[5136]: I0320 08:21:59.832445 5136 generic.go:334] "Generic (PLEG): container finished" podID="080d55d0-394e-46a8-a6b9-7e6b7c5759de" containerID="54f55647f8c1f36f63f1a2690a773b5fbd29005e3d797369620f4b4ca80730a2" exitCode=0 Mar 20 08:22:00 crc kubenswrapper[5136]: I0320 08:22:00.155336 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566582-wv2dn"] Mar 20 08:22:00 crc kubenswrapper[5136]: I0320 08:22:00.157589 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566582-wv2dn" Mar 20 08:22:00 crc kubenswrapper[5136]: I0320 08:22:00.160094 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:22:00 crc kubenswrapper[5136]: I0320 08:22:00.160322 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:22:00 crc kubenswrapper[5136]: I0320 08:22:00.160519 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:22:00 crc kubenswrapper[5136]: I0320 08:22:00.163699 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566582-wv2dn"] Mar 20 08:22:00 crc kubenswrapper[5136]: I0320 08:22:00.194099 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-854lw\" (UniqueName: \"kubernetes.io/projected/36fd17ca-2655-4388-807a-3740ab031402-kube-api-access-854lw\") pod \"auto-csr-approver-29566582-wv2dn\" (UID: \"36fd17ca-2655-4388-807a-3740ab031402\") " pod="openshift-infra/auto-csr-approver-29566582-wv2dn" Mar 20 08:22:00 crc kubenswrapper[5136]: I0320 08:22:00.295414 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-854lw\" (UniqueName: \"kubernetes.io/projected/36fd17ca-2655-4388-807a-3740ab031402-kube-api-access-854lw\") pod \"auto-csr-approver-29566582-wv2dn\" (UID: \"36fd17ca-2655-4388-807a-3740ab031402\") " pod="openshift-infra/auto-csr-approver-29566582-wv2dn" Mar 20 08:22:00 crc kubenswrapper[5136]: I0320 08:22:00.312925 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-854lw\" (UniqueName: \"kubernetes.io/projected/36fd17ca-2655-4388-807a-3740ab031402-kube-api-access-854lw\") pod \"auto-csr-approver-29566582-wv2dn\" (UID: \"36fd17ca-2655-4388-807a-3740ab031402\") " 
pod="openshift-infra/auto-csr-approver-29566582-wv2dn" Mar 20 08:22:00 crc kubenswrapper[5136]: I0320 08:22:00.488496 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566582-wv2dn" Mar 20 08:22:00 crc kubenswrapper[5136]: I0320 08:22:00.845062 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68w4f" event={"ID":"080d55d0-394e-46a8-a6b9-7e6b7c5759de","Type":"ContainerStarted","Data":"23479181b35cbc8d9ce2825b56a41c428cab8e7f3cad92e99bccdb39b902baad"} Mar 20 08:22:00 crc kubenswrapper[5136]: I0320 08:22:00.940646 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566582-wv2dn"] Mar 20 08:22:00 crc kubenswrapper[5136]: W0320 08:22:00.992558 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36fd17ca_2655_4388_807a_3740ab031402.slice/crio-ef659d7493c6b730fe84e187621f4cafb7209c6e0e29d6e9a2683e60d4f167ce WatchSource:0}: Error finding container ef659d7493c6b730fe84e187621f4cafb7209c6e0e29d6e9a2683e60d4f167ce: Status 404 returned error can't find the container with id ef659d7493c6b730fe84e187621f4cafb7209c6e0e29d6e9a2683e60d4f167ce Mar 20 08:22:01 crc kubenswrapper[5136]: I0320 08:22:01.854623 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566582-wv2dn" event={"ID":"36fd17ca-2655-4388-807a-3740ab031402","Type":"ContainerStarted","Data":"ef659d7493c6b730fe84e187621f4cafb7209c6e0e29d6e9a2683e60d4f167ce"} Mar 20 08:22:01 crc kubenswrapper[5136]: I0320 08:22:01.857299 5136 generic.go:334] "Generic (PLEG): container finished" podID="080d55d0-394e-46a8-a6b9-7e6b7c5759de" containerID="23479181b35cbc8d9ce2825b56a41c428cab8e7f3cad92e99bccdb39b902baad" exitCode=0 Mar 20 08:22:01 crc kubenswrapper[5136]: I0320 08:22:01.857336 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-68w4f" event={"ID":"080d55d0-394e-46a8-a6b9-7e6b7c5759de","Type":"ContainerDied","Data":"23479181b35cbc8d9ce2825b56a41c428cab8e7f3cad92e99bccdb39b902baad"} Mar 20 08:22:02 crc kubenswrapper[5136]: I0320 08:22:02.869204 5136 generic.go:334] "Generic (PLEG): container finished" podID="36fd17ca-2655-4388-807a-3740ab031402" containerID="a529e171ce4e400e77421cdd13032062bdb0c3099972bc7c31cdc0391d1d0584" exitCode=0 Mar 20 08:22:02 crc kubenswrapper[5136]: I0320 08:22:02.869306 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566582-wv2dn" event={"ID":"36fd17ca-2655-4388-807a-3740ab031402","Type":"ContainerDied","Data":"a529e171ce4e400e77421cdd13032062bdb0c3099972bc7c31cdc0391d1d0584"} Mar 20 08:22:02 crc kubenswrapper[5136]: I0320 08:22:02.874389 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68w4f" event={"ID":"080d55d0-394e-46a8-a6b9-7e6b7c5759de","Type":"ContainerStarted","Data":"9c7d8b7da892de33606b657d3429b455fd7826f8caa67336575986a55ddb6dda"} Mar 20 08:22:02 crc kubenswrapper[5136]: I0320 08:22:02.918510 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-68w4f" podStartSLOduration=3.469183959 podStartE2EDuration="5.918485243s" podCreationTimestamp="2026-03-20 08:21:57 +0000 UTC" firstStartedPulling="2026-03-20 08:21:59.834007026 +0000 UTC m=+5552.093318177" lastFinishedPulling="2026-03-20 08:22:02.28330827 +0000 UTC m=+5554.542619461" observedRunningTime="2026-03-20 08:22:02.914643716 +0000 UTC m=+5555.173954887" watchObservedRunningTime="2026-03-20 08:22:02.918485243 +0000 UTC m=+5555.177796404" Mar 20 08:22:04 crc kubenswrapper[5136]: I0320 08:22:04.141381 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566582-wv2dn" Mar 20 08:22:04 crc kubenswrapper[5136]: I0320 08:22:04.248715 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-854lw\" (UniqueName: \"kubernetes.io/projected/36fd17ca-2655-4388-807a-3740ab031402-kube-api-access-854lw\") pod \"36fd17ca-2655-4388-807a-3740ab031402\" (UID: \"36fd17ca-2655-4388-807a-3740ab031402\") " Mar 20 08:22:04 crc kubenswrapper[5136]: I0320 08:22:04.257600 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36fd17ca-2655-4388-807a-3740ab031402-kube-api-access-854lw" (OuterVolumeSpecName: "kube-api-access-854lw") pod "36fd17ca-2655-4388-807a-3740ab031402" (UID: "36fd17ca-2655-4388-807a-3740ab031402"). InnerVolumeSpecName "kube-api-access-854lw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:22:04 crc kubenswrapper[5136]: I0320 08:22:04.350510 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-854lw\" (UniqueName: \"kubernetes.io/projected/36fd17ca-2655-4388-807a-3740ab031402-kube-api-access-854lw\") on node \"crc\" DevicePath \"\"" Mar 20 08:22:04 crc kubenswrapper[5136]: I0320 08:22:04.889194 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566582-wv2dn" event={"ID":"36fd17ca-2655-4388-807a-3740ab031402","Type":"ContainerDied","Data":"ef659d7493c6b730fe84e187621f4cafb7209c6e0e29d6e9a2683e60d4f167ce"} Mar 20 08:22:04 crc kubenswrapper[5136]: I0320 08:22:04.889239 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef659d7493c6b730fe84e187621f4cafb7209c6e0e29d6e9a2683e60d4f167ce" Mar 20 08:22:04 crc kubenswrapper[5136]: I0320 08:22:04.889239 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566582-wv2dn" Mar 20 08:22:05 crc kubenswrapper[5136]: I0320 08:22:05.203548 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566576-lnp57"] Mar 20 08:22:05 crc kubenswrapper[5136]: I0320 08:22:05.209980 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566576-lnp57"] Mar 20 08:22:06 crc kubenswrapper[5136]: I0320 08:22:06.404147 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc65bb98-68d8-471f-82de-50eba3ccfd7d" path="/var/lib/kubelet/pods/cc65bb98-68d8-471f-82de-50eba3ccfd7d/volumes" Mar 20 08:22:08 crc kubenswrapper[5136]: I0320 08:22:08.286389 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-68w4f" Mar 20 08:22:08 crc kubenswrapper[5136]: I0320 08:22:08.286507 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-68w4f" Mar 20 08:22:08 crc kubenswrapper[5136]: I0320 08:22:08.371080 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-68w4f" Mar 20 08:22:08 crc kubenswrapper[5136]: I0320 08:22:08.961767 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-68w4f" Mar 20 08:22:09 crc kubenswrapper[5136]: I0320 08:22:09.396599 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:22:09 crc kubenswrapper[5136]: E0320 08:22:09.396835 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:22:09 crc kubenswrapper[5136]: I0320 08:22:09.830371 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-68w4f"] Mar 20 08:22:10 crc kubenswrapper[5136]: I0320 08:22:10.929336 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-68w4f" podUID="080d55d0-394e-46a8-a6b9-7e6b7c5759de" containerName="registry-server" containerID="cri-o://9c7d8b7da892de33606b657d3429b455fd7826f8caa67336575986a55ddb6dda" gracePeriod=2 Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.277971 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-68w4f" Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.349912 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080d55d0-394e-46a8-a6b9-7e6b7c5759de-catalog-content\") pod \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\" (UID: \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\") " Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.350008 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9jp4\" (UniqueName: \"kubernetes.io/projected/080d55d0-394e-46a8-a6b9-7e6b7c5759de-kube-api-access-x9jp4\") pod \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\" (UID: \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\") " Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.350095 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080d55d0-394e-46a8-a6b9-7e6b7c5759de-utilities\") pod \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\" (UID: \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\") " Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.351302 5136 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/080d55d0-394e-46a8-a6b9-7e6b7c5759de-utilities" (OuterVolumeSpecName: "utilities") pod "080d55d0-394e-46a8-a6b9-7e6b7c5759de" (UID: "080d55d0-394e-46a8-a6b9-7e6b7c5759de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.355724 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/080d55d0-394e-46a8-a6b9-7e6b7c5759de-kube-api-access-x9jp4" (OuterVolumeSpecName: "kube-api-access-x9jp4") pod "080d55d0-394e-46a8-a6b9-7e6b7c5759de" (UID: "080d55d0-394e-46a8-a6b9-7e6b7c5759de"). InnerVolumeSpecName "kube-api-access-x9jp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.452027 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9jp4\" (UniqueName: \"kubernetes.io/projected/080d55d0-394e-46a8-a6b9-7e6b7c5759de-kube-api-access-x9jp4\") on node \"crc\" DevicePath \"\"" Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.452055 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080d55d0-394e-46a8-a6b9-7e6b7c5759de-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.639201 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/080d55d0-394e-46a8-a6b9-7e6b7c5759de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "080d55d0-394e-46a8-a6b9-7e6b7c5759de" (UID: "080d55d0-394e-46a8-a6b9-7e6b7c5759de"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.653620 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080d55d0-394e-46a8-a6b9-7e6b7c5759de-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.939075 5136 generic.go:334] "Generic (PLEG): container finished" podID="080d55d0-394e-46a8-a6b9-7e6b7c5759de" containerID="9c7d8b7da892de33606b657d3429b455fd7826f8caa67336575986a55ddb6dda" exitCode=0 Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.939136 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68w4f" event={"ID":"080d55d0-394e-46a8-a6b9-7e6b7c5759de","Type":"ContainerDied","Data":"9c7d8b7da892de33606b657d3429b455fd7826f8caa67336575986a55ddb6dda"} Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.939186 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68w4f" event={"ID":"080d55d0-394e-46a8-a6b9-7e6b7c5759de","Type":"ContainerDied","Data":"1f24178cc58c43551a444560f38eea1e86a4ab00ea0a51318dbd7c3bf67fbd4d"} Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.939214 5136 scope.go:117] "RemoveContainer" containerID="9c7d8b7da892de33606b657d3429b455fd7826f8caa67336575986a55ddb6dda" Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.939237 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-68w4f" Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.977183 5136 scope.go:117] "RemoveContainer" containerID="23479181b35cbc8d9ce2825b56a41c428cab8e7f3cad92e99bccdb39b902baad" Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.980224 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-68w4f"] Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.987636 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-68w4f"] Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.996784 5136 scope.go:117] "RemoveContainer" containerID="54f55647f8c1f36f63f1a2690a773b5fbd29005e3d797369620f4b4ca80730a2" Mar 20 08:22:12 crc kubenswrapper[5136]: I0320 08:22:12.017386 5136 scope.go:117] "RemoveContainer" containerID="9c7d8b7da892de33606b657d3429b455fd7826f8caa67336575986a55ddb6dda" Mar 20 08:22:12 crc kubenswrapper[5136]: E0320 08:22:12.017887 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c7d8b7da892de33606b657d3429b455fd7826f8caa67336575986a55ddb6dda\": container with ID starting with 9c7d8b7da892de33606b657d3429b455fd7826f8caa67336575986a55ddb6dda not found: ID does not exist" containerID="9c7d8b7da892de33606b657d3429b455fd7826f8caa67336575986a55ddb6dda" Mar 20 08:22:12 crc kubenswrapper[5136]: I0320 08:22:12.017960 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c7d8b7da892de33606b657d3429b455fd7826f8caa67336575986a55ddb6dda"} err="failed to get container status \"9c7d8b7da892de33606b657d3429b455fd7826f8caa67336575986a55ddb6dda\": rpc error: code = NotFound desc = could not find container \"9c7d8b7da892de33606b657d3429b455fd7826f8caa67336575986a55ddb6dda\": container with ID starting with 9c7d8b7da892de33606b657d3429b455fd7826f8caa67336575986a55ddb6dda not 
found: ID does not exist" Mar 20 08:22:12 crc kubenswrapper[5136]: I0320 08:22:12.018005 5136 scope.go:117] "RemoveContainer" containerID="23479181b35cbc8d9ce2825b56a41c428cab8e7f3cad92e99bccdb39b902baad" Mar 20 08:22:12 crc kubenswrapper[5136]: E0320 08:22:12.018313 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23479181b35cbc8d9ce2825b56a41c428cab8e7f3cad92e99bccdb39b902baad\": container with ID starting with 23479181b35cbc8d9ce2825b56a41c428cab8e7f3cad92e99bccdb39b902baad not found: ID does not exist" containerID="23479181b35cbc8d9ce2825b56a41c428cab8e7f3cad92e99bccdb39b902baad" Mar 20 08:22:12 crc kubenswrapper[5136]: I0320 08:22:12.018343 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23479181b35cbc8d9ce2825b56a41c428cab8e7f3cad92e99bccdb39b902baad"} err="failed to get container status \"23479181b35cbc8d9ce2825b56a41c428cab8e7f3cad92e99bccdb39b902baad\": rpc error: code = NotFound desc = could not find container \"23479181b35cbc8d9ce2825b56a41c428cab8e7f3cad92e99bccdb39b902baad\": container with ID starting with 23479181b35cbc8d9ce2825b56a41c428cab8e7f3cad92e99bccdb39b902baad not found: ID does not exist" Mar 20 08:22:12 crc kubenswrapper[5136]: I0320 08:22:12.018374 5136 scope.go:117] "RemoveContainer" containerID="54f55647f8c1f36f63f1a2690a773b5fbd29005e3d797369620f4b4ca80730a2" Mar 20 08:22:12 crc kubenswrapper[5136]: E0320 08:22:12.018903 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54f55647f8c1f36f63f1a2690a773b5fbd29005e3d797369620f4b4ca80730a2\": container with ID starting with 54f55647f8c1f36f63f1a2690a773b5fbd29005e3d797369620f4b4ca80730a2 not found: ID does not exist" containerID="54f55647f8c1f36f63f1a2690a773b5fbd29005e3d797369620f4b4ca80730a2" Mar 20 08:22:12 crc kubenswrapper[5136]: I0320 08:22:12.018947 5136 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54f55647f8c1f36f63f1a2690a773b5fbd29005e3d797369620f4b4ca80730a2"} err="failed to get container status \"54f55647f8c1f36f63f1a2690a773b5fbd29005e3d797369620f4b4ca80730a2\": rpc error: code = NotFound desc = could not find container \"54f55647f8c1f36f63f1a2690a773b5fbd29005e3d797369620f4b4ca80730a2\": container with ID starting with 54f55647f8c1f36f63f1a2690a773b5fbd29005e3d797369620f4b4ca80730a2 not found: ID does not exist" Mar 20 08:22:12 crc kubenswrapper[5136]: I0320 08:22:12.407440 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="080d55d0-394e-46a8-a6b9-7e6b7c5759de" path="/var/lib/kubelet/pods/080d55d0-394e-46a8-a6b9-7e6b7c5759de/volumes" Mar 20 08:22:24 crc kubenswrapper[5136]: I0320 08:22:24.397171 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:22:24 crc kubenswrapper[5136]: E0320 08:22:24.398128 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:22:36 crc kubenswrapper[5136]: I0320 08:22:36.833622 5136 scope.go:117] "RemoveContainer" containerID="62cda2bd643833622def1d9629824c17e3df9ab31f50b1b04bb053644f55653c" Mar 20 08:22:38 crc kubenswrapper[5136]: I0320 08:22:38.400750 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:22:38 crc kubenswrapper[5136]: E0320 08:22:38.401242 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:22:50 crc kubenswrapper[5136]: I0320 08:22:50.397298 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:22:50 crc kubenswrapper[5136]: E0320 08:22:50.398533 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:23:02 crc kubenswrapper[5136]: I0320 08:23:02.397043 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:23:02 crc kubenswrapper[5136]: E0320 08:23:02.398423 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:23:14 crc kubenswrapper[5136]: I0320 08:23:14.396501 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:23:14 crc kubenswrapper[5136]: E0320 08:23:14.397619 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:23:26 crc kubenswrapper[5136]: I0320 08:23:26.396570 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:23:26 crc kubenswrapper[5136]: E0320 08:23:26.397425 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:23:40 crc kubenswrapper[5136]: I0320 08:23:40.398739 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:23:40 crc kubenswrapper[5136]: E0320 08:23:40.399483 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:23:52 crc kubenswrapper[5136]: I0320 08:23:52.396519 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:23:52 crc kubenswrapper[5136]: E0320 08:23:52.397326 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.171286 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566584-ht4pj"] Mar 20 08:24:00 crc kubenswrapper[5136]: E0320 08:24:00.172750 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36fd17ca-2655-4388-807a-3740ab031402" containerName="oc" Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.172782 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="36fd17ca-2655-4388-807a-3740ab031402" containerName="oc" Mar 20 08:24:00 crc kubenswrapper[5136]: E0320 08:24:00.172860 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080d55d0-394e-46a8-a6b9-7e6b7c5759de" containerName="extract-content" Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.172882 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="080d55d0-394e-46a8-a6b9-7e6b7c5759de" containerName="extract-content" Mar 20 08:24:00 crc kubenswrapper[5136]: E0320 08:24:00.172913 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080d55d0-394e-46a8-a6b9-7e6b7c5759de" containerName="extract-utilities" Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.172933 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="080d55d0-394e-46a8-a6b9-7e6b7c5759de" containerName="extract-utilities" Mar 20 08:24:00 crc kubenswrapper[5136]: E0320 08:24:00.172958 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080d55d0-394e-46a8-a6b9-7e6b7c5759de" containerName="registry-server" Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.172974 5136 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="080d55d0-394e-46a8-a6b9-7e6b7c5759de" containerName="registry-server" Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.173301 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="080d55d0-394e-46a8-a6b9-7e6b7c5759de" containerName="registry-server" Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.173367 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="36fd17ca-2655-4388-807a-3740ab031402" containerName="oc" Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.174365 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566584-ht4pj" Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.178785 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.179260 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.179459 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.186341 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566584-ht4pj"] Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.294527 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsr2q\" (UniqueName: \"kubernetes.io/projected/49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2-kube-api-access-nsr2q\") pod \"auto-csr-approver-29566584-ht4pj\" (UID: \"49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2\") " pod="openshift-infra/auto-csr-approver-29566584-ht4pj" Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.396832 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsr2q\" 
(UniqueName: \"kubernetes.io/projected/49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2-kube-api-access-nsr2q\") pod \"auto-csr-approver-29566584-ht4pj\" (UID: \"49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2\") " pod="openshift-infra/auto-csr-approver-29566584-ht4pj" Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.416982 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsr2q\" (UniqueName: \"kubernetes.io/projected/49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2-kube-api-access-nsr2q\") pod \"auto-csr-approver-29566584-ht4pj\" (UID: \"49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2\") " pod="openshift-infra/auto-csr-approver-29566584-ht4pj" Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.505862 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566584-ht4pj" Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.979402 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566584-ht4pj"] Mar 20 08:24:00 crc kubenswrapper[5136]: W0320 08:24:00.983359 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49bc2e6c_77f7_42f1_ba1e_86bbc6bdc2d2.slice/crio-0552be7161fcc1ae98bc979f3aec92a796b808ade9df30c38c59b5488c33dc4c WatchSource:0}: Error finding container 0552be7161fcc1ae98bc979f3aec92a796b808ade9df30c38c59b5488c33dc4c: Status 404 returned error can't find the container with id 0552be7161fcc1ae98bc979f3aec92a796b808ade9df30c38c59b5488c33dc4c Mar 20 08:24:01 crc kubenswrapper[5136]: I0320 08:24:01.779493 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566584-ht4pj" event={"ID":"49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2","Type":"ContainerStarted","Data":"0552be7161fcc1ae98bc979f3aec92a796b808ade9df30c38c59b5488c33dc4c"} Mar 20 08:24:02 crc kubenswrapper[5136]: I0320 08:24:02.786468 5136 generic.go:334] "Generic (PLEG): 
container finished" podID="49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2" containerID="b6e56033203d796df41b39eddfb04e55cd2822f9ba7f0e9edd26141d7d5d92b3" exitCode=0 Mar 20 08:24:02 crc kubenswrapper[5136]: I0320 08:24:02.786515 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566584-ht4pj" event={"ID":"49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2","Type":"ContainerDied","Data":"b6e56033203d796df41b39eddfb04e55cd2822f9ba7f0e9edd26141d7d5d92b3"} Mar 20 08:24:04 crc kubenswrapper[5136]: I0320 08:24:04.081930 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566584-ht4pj" Mar 20 08:24:04 crc kubenswrapper[5136]: I0320 08:24:04.251025 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsr2q\" (UniqueName: \"kubernetes.io/projected/49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2-kube-api-access-nsr2q\") pod \"49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2\" (UID: \"49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2\") " Mar 20 08:24:04 crc kubenswrapper[5136]: I0320 08:24:04.256617 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2-kube-api-access-nsr2q" (OuterVolumeSpecName: "kube-api-access-nsr2q") pod "49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2" (UID: "49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2"). InnerVolumeSpecName "kube-api-access-nsr2q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:24:04 crc kubenswrapper[5136]: I0320 08:24:04.352404 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsr2q\" (UniqueName: \"kubernetes.io/projected/49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2-kube-api-access-nsr2q\") on node \"crc\" DevicePath \"\"" Mar 20 08:24:04 crc kubenswrapper[5136]: I0320 08:24:04.801451 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566584-ht4pj" event={"ID":"49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2","Type":"ContainerDied","Data":"0552be7161fcc1ae98bc979f3aec92a796b808ade9df30c38c59b5488c33dc4c"} Mar 20 08:24:04 crc kubenswrapper[5136]: I0320 08:24:04.801757 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0552be7161fcc1ae98bc979f3aec92a796b808ade9df30c38c59b5488c33dc4c" Mar 20 08:24:04 crc kubenswrapper[5136]: I0320 08:24:04.801496 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566584-ht4pj" Mar 20 08:24:05 crc kubenswrapper[5136]: I0320 08:24:05.146130 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566578-ck7bv"] Mar 20 08:24:05 crc kubenswrapper[5136]: I0320 08:24:05.151135 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566578-ck7bv"] Mar 20 08:24:06 crc kubenswrapper[5136]: I0320 08:24:06.396269 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:24:06 crc kubenswrapper[5136]: E0320 08:24:06.396481 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:24:06 crc kubenswrapper[5136]: I0320 08:24:06.416612 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e55d749f-3c3e-4558-bf74-28a388d382bf" path="/var/lib/kubelet/pods/e55d749f-3c3e-4558-bf74-28a388d382bf/volumes" Mar 20 08:24:19 crc kubenswrapper[5136]: I0320 08:24:19.397009 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:24:19 crc kubenswrapper[5136]: I0320 08:24:19.867386 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m8dtk"] Mar 20 08:24:19 crc kubenswrapper[5136]: E0320 08:24:19.868576 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2" containerName="oc" Mar 20 08:24:19 crc kubenswrapper[5136]: I0320 08:24:19.868672 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2" containerName="oc" Mar 20 08:24:19 crc kubenswrapper[5136]: I0320 08:24:19.868936 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2" containerName="oc" Mar 20 08:24:19 crc kubenswrapper[5136]: I0320 08:24:19.870238 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:19 crc kubenswrapper[5136]: I0320 08:24:19.883025 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m8dtk"] Mar 20 08:24:19 crc kubenswrapper[5136]: I0320 08:24:19.927251 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"1978392c6dea0f795648b9101e593b34f84554a1dfb9a32198f55771c5e697bb"} Mar 20 08:24:19 crc kubenswrapper[5136]: I0320 08:24:19.979888 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db0a0224-28cb-4c0b-9679-87af0cc13cee-catalog-content\") pod \"community-operators-m8dtk\" (UID: \"db0a0224-28cb-4c0b-9679-87af0cc13cee\") " pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:19 crc kubenswrapper[5136]: I0320 08:24:19.979977 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db0a0224-28cb-4c0b-9679-87af0cc13cee-utilities\") pod \"community-operators-m8dtk\" (UID: \"db0a0224-28cb-4c0b-9679-87af0cc13cee\") " pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:19 crc kubenswrapper[5136]: I0320 08:24:19.980043 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs4k2\" (UniqueName: \"kubernetes.io/projected/db0a0224-28cb-4c0b-9679-87af0cc13cee-kube-api-access-bs4k2\") pod \"community-operators-m8dtk\" (UID: \"db0a0224-28cb-4c0b-9679-87af0cc13cee\") " pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:20 crc kubenswrapper[5136]: I0320 08:24:20.082192 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bs4k2\" (UniqueName: \"kubernetes.io/projected/db0a0224-28cb-4c0b-9679-87af0cc13cee-kube-api-access-bs4k2\") pod \"community-operators-m8dtk\" (UID: \"db0a0224-28cb-4c0b-9679-87af0cc13cee\") " pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:20 crc kubenswrapper[5136]: I0320 08:24:20.082309 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db0a0224-28cb-4c0b-9679-87af0cc13cee-catalog-content\") pod \"community-operators-m8dtk\" (UID: \"db0a0224-28cb-4c0b-9679-87af0cc13cee\") " pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:20 crc kubenswrapper[5136]: I0320 08:24:20.082357 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db0a0224-28cb-4c0b-9679-87af0cc13cee-utilities\") pod \"community-operators-m8dtk\" (UID: \"db0a0224-28cb-4c0b-9679-87af0cc13cee\") " pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:20 crc kubenswrapper[5136]: I0320 08:24:20.082828 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db0a0224-28cb-4c0b-9679-87af0cc13cee-utilities\") pod \"community-operators-m8dtk\" (UID: \"db0a0224-28cb-4c0b-9679-87af0cc13cee\") " pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:20 crc kubenswrapper[5136]: I0320 08:24:20.084213 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db0a0224-28cb-4c0b-9679-87af0cc13cee-catalog-content\") pod \"community-operators-m8dtk\" (UID: \"db0a0224-28cb-4c0b-9679-87af0cc13cee\") " pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:20 crc kubenswrapper[5136]: I0320 08:24:20.107351 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs4k2\" (UniqueName: 
\"kubernetes.io/projected/db0a0224-28cb-4c0b-9679-87af0cc13cee-kube-api-access-bs4k2\") pod \"community-operators-m8dtk\" (UID: \"db0a0224-28cb-4c0b-9679-87af0cc13cee\") " pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:20 crc kubenswrapper[5136]: I0320 08:24:20.198148 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:20 crc kubenswrapper[5136]: I0320 08:24:20.537479 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m8dtk"] Mar 20 08:24:20 crc kubenswrapper[5136]: I0320 08:24:20.937428 5136 generic.go:334] "Generic (PLEG): container finished" podID="db0a0224-28cb-4c0b-9679-87af0cc13cee" containerID="26a330df72e29eda8ad29d170bd8dcc67c0a58c533c510ee07ebe50c6213489a" exitCode=0 Mar 20 08:24:20 crc kubenswrapper[5136]: I0320 08:24:20.937475 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8dtk" event={"ID":"db0a0224-28cb-4c0b-9679-87af0cc13cee","Type":"ContainerDied","Data":"26a330df72e29eda8ad29d170bd8dcc67c0a58c533c510ee07ebe50c6213489a"} Mar 20 08:24:20 crc kubenswrapper[5136]: I0320 08:24:20.937509 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8dtk" event={"ID":"db0a0224-28cb-4c0b-9679-87af0cc13cee","Type":"ContainerStarted","Data":"fd56c1fcd17605cb687c5c5f760a499825e07465feb19ffb8f74453066691848"} Mar 20 08:24:21 crc kubenswrapper[5136]: I0320 08:24:21.946855 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8dtk" event={"ID":"db0a0224-28cb-4c0b-9679-87af0cc13cee","Type":"ContainerStarted","Data":"f10ce8656bd251519d016b531c9028809c4b85f6c72d56c8978b275fe74f12d3"} Mar 20 08:24:22 crc kubenswrapper[5136]: I0320 08:24:22.955580 5136 generic.go:334] "Generic (PLEG): container finished" podID="db0a0224-28cb-4c0b-9679-87af0cc13cee" 
containerID="f10ce8656bd251519d016b531c9028809c4b85f6c72d56c8978b275fe74f12d3" exitCode=0 Mar 20 08:24:22 crc kubenswrapper[5136]: I0320 08:24:22.955620 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8dtk" event={"ID":"db0a0224-28cb-4c0b-9679-87af0cc13cee","Type":"ContainerDied","Data":"f10ce8656bd251519d016b531c9028809c4b85f6c72d56c8978b275fe74f12d3"} Mar 20 08:24:23 crc kubenswrapper[5136]: I0320 08:24:23.966300 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8dtk" event={"ID":"db0a0224-28cb-4c0b-9679-87af0cc13cee","Type":"ContainerStarted","Data":"125b2220d8aebdbd9b742cea6c46c10b0f8a7909b2486c638a6c4120288f8462"} Mar 20 08:24:23 crc kubenswrapper[5136]: I0320 08:24:23.989933 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m8dtk" podStartSLOduration=2.600853157 podStartE2EDuration="4.98989465s" podCreationTimestamp="2026-03-20 08:24:19 +0000 UTC" firstStartedPulling="2026-03-20 08:24:20.939087742 +0000 UTC m=+5693.198398913" lastFinishedPulling="2026-03-20 08:24:23.328129235 +0000 UTC m=+5695.587440406" observedRunningTime="2026-03-20 08:24:23.986110362 +0000 UTC m=+5696.245421513" watchObservedRunningTime="2026-03-20 08:24:23.98989465 +0000 UTC m=+5696.249205801" Mar 20 08:24:30 crc kubenswrapper[5136]: I0320 08:24:30.198594 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:30 crc kubenswrapper[5136]: I0320 08:24:30.199405 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:30 crc kubenswrapper[5136]: I0320 08:24:30.257387 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:31 crc kubenswrapper[5136]: I0320 
08:24:31.081611 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:31 crc kubenswrapper[5136]: I0320 08:24:31.135269 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m8dtk"] Mar 20 08:24:33 crc kubenswrapper[5136]: I0320 08:24:33.028248 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m8dtk" podUID="db0a0224-28cb-4c0b-9679-87af0cc13cee" containerName="registry-server" containerID="cri-o://125b2220d8aebdbd9b742cea6c46c10b0f8a7909b2486c638a6c4120288f8462" gracePeriod=2 Mar 20 08:24:33 crc kubenswrapper[5136]: I0320 08:24:33.429311 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:33 crc kubenswrapper[5136]: I0320 08:24:33.474464 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db0a0224-28cb-4c0b-9679-87af0cc13cee-utilities\") pod \"db0a0224-28cb-4c0b-9679-87af0cc13cee\" (UID: \"db0a0224-28cb-4c0b-9679-87af0cc13cee\") " Mar 20 08:24:33 crc kubenswrapper[5136]: I0320 08:24:33.474847 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db0a0224-28cb-4c0b-9679-87af0cc13cee-catalog-content\") pod \"db0a0224-28cb-4c0b-9679-87af0cc13cee\" (UID: \"db0a0224-28cb-4c0b-9679-87af0cc13cee\") " Mar 20 08:24:33 crc kubenswrapper[5136]: I0320 08:24:33.474933 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs4k2\" (UniqueName: \"kubernetes.io/projected/db0a0224-28cb-4c0b-9679-87af0cc13cee-kube-api-access-bs4k2\") pod \"db0a0224-28cb-4c0b-9679-87af0cc13cee\" (UID: \"db0a0224-28cb-4c0b-9679-87af0cc13cee\") " Mar 20 08:24:33 crc kubenswrapper[5136]: 
I0320 08:24:33.475450 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db0a0224-28cb-4c0b-9679-87af0cc13cee-utilities" (OuterVolumeSpecName: "utilities") pod "db0a0224-28cb-4c0b-9679-87af0cc13cee" (UID: "db0a0224-28cb-4c0b-9679-87af0cc13cee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:24:33 crc kubenswrapper[5136]: I0320 08:24:33.481108 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db0a0224-28cb-4c0b-9679-87af0cc13cee-kube-api-access-bs4k2" (OuterVolumeSpecName: "kube-api-access-bs4k2") pod "db0a0224-28cb-4c0b-9679-87af0cc13cee" (UID: "db0a0224-28cb-4c0b-9679-87af0cc13cee"). InnerVolumeSpecName "kube-api-access-bs4k2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:24:33 crc kubenswrapper[5136]: I0320 08:24:33.524289 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db0a0224-28cb-4c0b-9679-87af0cc13cee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db0a0224-28cb-4c0b-9679-87af0cc13cee" (UID: "db0a0224-28cb-4c0b-9679-87af0cc13cee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:24:33 crc kubenswrapper[5136]: I0320 08:24:33.575783 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs4k2\" (UniqueName: \"kubernetes.io/projected/db0a0224-28cb-4c0b-9679-87af0cc13cee-kube-api-access-bs4k2\") on node \"crc\" DevicePath \"\"" Mar 20 08:24:33 crc kubenswrapper[5136]: I0320 08:24:33.575861 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db0a0224-28cb-4c0b-9679-87af0cc13cee-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:24:33 crc kubenswrapper[5136]: I0320 08:24:33.575872 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db0a0224-28cb-4c0b-9679-87af0cc13cee-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.035522 5136 generic.go:334] "Generic (PLEG): container finished" podID="db0a0224-28cb-4c0b-9679-87af0cc13cee" containerID="125b2220d8aebdbd9b742cea6c46c10b0f8a7909b2486c638a6c4120288f8462" exitCode=0 Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.035562 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8dtk" event={"ID":"db0a0224-28cb-4c0b-9679-87af0cc13cee","Type":"ContainerDied","Data":"125b2220d8aebdbd9b742cea6c46c10b0f8a7909b2486c638a6c4120288f8462"} Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.035597 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8dtk" event={"ID":"db0a0224-28cb-4c0b-9679-87af0cc13cee","Type":"ContainerDied","Data":"fd56c1fcd17605cb687c5c5f760a499825e07465feb19ffb8f74453066691848"} Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.035609 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.035614 5136 scope.go:117] "RemoveContainer" containerID="125b2220d8aebdbd9b742cea6c46c10b0f8a7909b2486c638a6c4120288f8462" Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.057079 5136 scope.go:117] "RemoveContainer" containerID="f10ce8656bd251519d016b531c9028809c4b85f6c72d56c8978b275fe74f12d3" Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.063726 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m8dtk"] Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.070269 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m8dtk"] Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.084967 5136 scope.go:117] "RemoveContainer" containerID="26a330df72e29eda8ad29d170bd8dcc67c0a58c533c510ee07ebe50c6213489a" Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.113170 5136 scope.go:117] "RemoveContainer" containerID="125b2220d8aebdbd9b742cea6c46c10b0f8a7909b2486c638a6c4120288f8462" Mar 20 08:24:34 crc kubenswrapper[5136]: E0320 08:24:34.113657 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"125b2220d8aebdbd9b742cea6c46c10b0f8a7909b2486c638a6c4120288f8462\": container with ID starting with 125b2220d8aebdbd9b742cea6c46c10b0f8a7909b2486c638a6c4120288f8462 not found: ID does not exist" containerID="125b2220d8aebdbd9b742cea6c46c10b0f8a7909b2486c638a6c4120288f8462" Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.113709 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"125b2220d8aebdbd9b742cea6c46c10b0f8a7909b2486c638a6c4120288f8462"} err="failed to get container status \"125b2220d8aebdbd9b742cea6c46c10b0f8a7909b2486c638a6c4120288f8462\": rpc error: code = NotFound desc = could not find 
container \"125b2220d8aebdbd9b742cea6c46c10b0f8a7909b2486c638a6c4120288f8462\": container with ID starting with 125b2220d8aebdbd9b742cea6c46c10b0f8a7909b2486c638a6c4120288f8462 not found: ID does not exist" Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.113745 5136 scope.go:117] "RemoveContainer" containerID="f10ce8656bd251519d016b531c9028809c4b85f6c72d56c8978b275fe74f12d3" Mar 20 08:24:34 crc kubenswrapper[5136]: E0320 08:24:34.114247 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f10ce8656bd251519d016b531c9028809c4b85f6c72d56c8978b275fe74f12d3\": container with ID starting with f10ce8656bd251519d016b531c9028809c4b85f6c72d56c8978b275fe74f12d3 not found: ID does not exist" containerID="f10ce8656bd251519d016b531c9028809c4b85f6c72d56c8978b275fe74f12d3" Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.114308 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f10ce8656bd251519d016b531c9028809c4b85f6c72d56c8978b275fe74f12d3"} err="failed to get container status \"f10ce8656bd251519d016b531c9028809c4b85f6c72d56c8978b275fe74f12d3\": rpc error: code = NotFound desc = could not find container \"f10ce8656bd251519d016b531c9028809c4b85f6c72d56c8978b275fe74f12d3\": container with ID starting with f10ce8656bd251519d016b531c9028809c4b85f6c72d56c8978b275fe74f12d3 not found: ID does not exist" Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.114336 5136 scope.go:117] "RemoveContainer" containerID="26a330df72e29eda8ad29d170bd8dcc67c0a58c533c510ee07ebe50c6213489a" Mar 20 08:24:34 crc kubenswrapper[5136]: E0320 08:24:34.114765 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26a330df72e29eda8ad29d170bd8dcc67c0a58c533c510ee07ebe50c6213489a\": container with ID starting with 26a330df72e29eda8ad29d170bd8dcc67c0a58c533c510ee07ebe50c6213489a not found: ID does 
not exist" containerID="26a330df72e29eda8ad29d170bd8dcc67c0a58c533c510ee07ebe50c6213489a" Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.114806 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26a330df72e29eda8ad29d170bd8dcc67c0a58c533c510ee07ebe50c6213489a"} err="failed to get container status \"26a330df72e29eda8ad29d170bd8dcc67c0a58c533c510ee07ebe50c6213489a\": rpc error: code = NotFound desc = could not find container \"26a330df72e29eda8ad29d170bd8dcc67c0a58c533c510ee07ebe50c6213489a\": container with ID starting with 26a330df72e29eda8ad29d170bd8dcc67c0a58c533c510ee07ebe50c6213489a not found: ID does not exist" Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.404758 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db0a0224-28cb-4c0b-9679-87af0cc13cee" path="/var/lib/kubelet/pods/db0a0224-28cb-4c0b-9679-87af0cc13cee/volumes" Mar 20 08:24:36 crc kubenswrapper[5136]: I0320 08:24:36.923589 5136 scope.go:117] "RemoveContainer" containerID="4e591fa4a3dca1b218c4cdb5e1a7771e6b255424f6feef9dba08785df4cee785" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.266587 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-crbpt"] Mar 20 08:25:46 crc kubenswrapper[5136]: E0320 08:25:46.267959 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db0a0224-28cb-4c0b-9679-87af0cc13cee" containerName="registry-server" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.267988 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="db0a0224-28cb-4c0b-9679-87af0cc13cee" containerName="registry-server" Mar 20 08:25:46 crc kubenswrapper[5136]: E0320 08:25:46.268009 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db0a0224-28cb-4c0b-9679-87af0cc13cee" containerName="extract-utilities" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.268022 5136 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="db0a0224-28cb-4c0b-9679-87af0cc13cee" containerName="extract-utilities" Mar 20 08:25:46 crc kubenswrapper[5136]: E0320 08:25:46.268051 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db0a0224-28cb-4c0b-9679-87af0cc13cee" containerName="extract-content" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.268065 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="db0a0224-28cb-4c0b-9679-87af0cc13cee" containerName="extract-content" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.268311 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="db0a0224-28cb-4c0b-9679-87af0cc13cee" containerName="registry-server" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.270093 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.280128 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-crbpt"] Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.433413 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86a4a5b4-ac25-409f-8ba9-e393aef21d43-catalog-content\") pod \"redhat-marketplace-crbpt\" (UID: \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\") " pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.433581 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86a4a5b4-ac25-409f-8ba9-e393aef21d43-utilities\") pod \"redhat-marketplace-crbpt\" (UID: \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\") " pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.433849 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwfmj\" (UniqueName: \"kubernetes.io/projected/86a4a5b4-ac25-409f-8ba9-e393aef21d43-kube-api-access-lwfmj\") pod \"redhat-marketplace-crbpt\" (UID: \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\") " pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.535222 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwfmj\" (UniqueName: \"kubernetes.io/projected/86a4a5b4-ac25-409f-8ba9-e393aef21d43-kube-api-access-lwfmj\") pod \"redhat-marketplace-crbpt\" (UID: \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\") " pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.535304 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86a4a5b4-ac25-409f-8ba9-e393aef21d43-catalog-content\") pod \"redhat-marketplace-crbpt\" (UID: \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\") " pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.535331 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86a4a5b4-ac25-409f-8ba9-e393aef21d43-utilities\") pod \"redhat-marketplace-crbpt\" (UID: \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\") " pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.535785 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86a4a5b4-ac25-409f-8ba9-e393aef21d43-utilities\") pod \"redhat-marketplace-crbpt\" (UID: \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\") " pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.535987 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86a4a5b4-ac25-409f-8ba9-e393aef21d43-catalog-content\") pod \"redhat-marketplace-crbpt\" (UID: \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\") " pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.561740 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwfmj\" (UniqueName: \"kubernetes.io/projected/86a4a5b4-ac25-409f-8ba9-e393aef21d43-kube-api-access-lwfmj\") pod \"redhat-marketplace-crbpt\" (UID: \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\") " pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.588622 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.875742 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-crbpt"] Mar 20 08:25:47 crc kubenswrapper[5136]: I0320 08:25:47.650004 5136 generic.go:334] "Generic (PLEG): container finished" podID="86a4a5b4-ac25-409f-8ba9-e393aef21d43" containerID="67bbb7add8b9022daaf3503aeab8d7b776026890ae0b4ec7c7d3286c341b3cd5" exitCode=0 Mar 20 08:25:47 crc kubenswrapper[5136]: I0320 08:25:47.650060 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crbpt" event={"ID":"86a4a5b4-ac25-409f-8ba9-e393aef21d43","Type":"ContainerDied","Data":"67bbb7add8b9022daaf3503aeab8d7b776026890ae0b4ec7c7d3286c341b3cd5"} Mar 20 08:25:47 crc kubenswrapper[5136]: I0320 08:25:47.650098 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crbpt" event={"ID":"86a4a5b4-ac25-409f-8ba9-e393aef21d43","Type":"ContainerStarted","Data":"a31991e44a39a5ee2d3f1743e87bc7b37d9628d9071253ddeec5aaddbef70cec"} Mar 20 08:25:47 crc kubenswrapper[5136]: I0320 
08:25:47.652628 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 08:25:49 crc kubenswrapper[5136]: I0320 08:25:49.664847 5136 generic.go:334] "Generic (PLEG): container finished" podID="86a4a5b4-ac25-409f-8ba9-e393aef21d43" containerID="fed099b9188f196807029456bc7dc890a4330be0f7a578f5dbe9cd80bb8964b4" exitCode=0 Mar 20 08:25:49 crc kubenswrapper[5136]: I0320 08:25:49.664954 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crbpt" event={"ID":"86a4a5b4-ac25-409f-8ba9-e393aef21d43","Type":"ContainerDied","Data":"fed099b9188f196807029456bc7dc890a4330be0f7a578f5dbe9cd80bb8964b4"} Mar 20 08:25:50 crc kubenswrapper[5136]: I0320 08:25:50.673525 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crbpt" event={"ID":"86a4a5b4-ac25-409f-8ba9-e393aef21d43","Type":"ContainerStarted","Data":"00e0d8a0357bd3a77dd3d16818eed4043793708a4733f4c34b5b7f7d91bbd934"} Mar 20 08:25:50 crc kubenswrapper[5136]: I0320 08:25:50.701564 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-crbpt" podStartSLOduration=2.187998989 podStartE2EDuration="4.701544435s" podCreationTimestamp="2026-03-20 08:25:46 +0000 UTC" firstStartedPulling="2026-03-20 08:25:47.652390898 +0000 UTC m=+5779.911702049" lastFinishedPulling="2026-03-20 08:25:50.165936344 +0000 UTC m=+5782.425247495" observedRunningTime="2026-03-20 08:25:50.692905898 +0000 UTC m=+5782.952217059" watchObservedRunningTime="2026-03-20 08:25:50.701544435 +0000 UTC m=+5782.960855596" Mar 20 08:25:56 crc kubenswrapper[5136]: I0320 08:25:56.589224 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:56 crc kubenswrapper[5136]: I0320 08:25:56.589282 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:56 crc kubenswrapper[5136]: I0320 08:25:56.651447 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:56 crc kubenswrapper[5136]: I0320 08:25:56.773667 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:57 crc kubenswrapper[5136]: I0320 08:25:57.640050 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-crbpt"] Mar 20 08:25:58 crc kubenswrapper[5136]: I0320 08:25:58.752232 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-crbpt" podUID="86a4a5b4-ac25-409f-8ba9-e393aef21d43" containerName="registry-server" containerID="cri-o://00e0d8a0357bd3a77dd3d16818eed4043793708a4733f4c34b5b7f7d91bbd934" gracePeriod=2 Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.220800 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.408311 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86a4a5b4-ac25-409f-8ba9-e393aef21d43-catalog-content\") pod \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\" (UID: \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\") " Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.408363 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86a4a5b4-ac25-409f-8ba9-e393aef21d43-utilities\") pod \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\" (UID: \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\") " Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.408420 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwfmj\" (UniqueName: \"kubernetes.io/projected/86a4a5b4-ac25-409f-8ba9-e393aef21d43-kube-api-access-lwfmj\") pod \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\" (UID: \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\") " Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.410676 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86a4a5b4-ac25-409f-8ba9-e393aef21d43-utilities" (OuterVolumeSpecName: "utilities") pod "86a4a5b4-ac25-409f-8ba9-e393aef21d43" (UID: "86a4a5b4-ac25-409f-8ba9-e393aef21d43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.416241 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86a4a5b4-ac25-409f-8ba9-e393aef21d43-kube-api-access-lwfmj" (OuterVolumeSpecName: "kube-api-access-lwfmj") pod "86a4a5b4-ac25-409f-8ba9-e393aef21d43" (UID: "86a4a5b4-ac25-409f-8ba9-e393aef21d43"). InnerVolumeSpecName "kube-api-access-lwfmj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.435967 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86a4a5b4-ac25-409f-8ba9-e393aef21d43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86a4a5b4-ac25-409f-8ba9-e393aef21d43" (UID: "86a4a5b4-ac25-409f-8ba9-e393aef21d43"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.509519 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwfmj\" (UniqueName: \"kubernetes.io/projected/86a4a5b4-ac25-409f-8ba9-e393aef21d43-kube-api-access-lwfmj\") on node \"crc\" DevicePath \"\"" Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.509555 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86a4a5b4-ac25-409f-8ba9-e393aef21d43-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.509567 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86a4a5b4-ac25-409f-8ba9-e393aef21d43-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.763314 5136 generic.go:334] "Generic (PLEG): container finished" podID="86a4a5b4-ac25-409f-8ba9-e393aef21d43" containerID="00e0d8a0357bd3a77dd3d16818eed4043793708a4733f4c34b5b7f7d91bbd934" exitCode=0 Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.763967 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crbpt" event={"ID":"86a4a5b4-ac25-409f-8ba9-e393aef21d43","Type":"ContainerDied","Data":"00e0d8a0357bd3a77dd3d16818eed4043793708a4733f4c34b5b7f7d91bbd934"} Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.764563 5136 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-crbpt" event={"ID":"86a4a5b4-ac25-409f-8ba9-e393aef21d43","Type":"ContainerDied","Data":"a31991e44a39a5ee2d3f1743e87bc7b37d9628d9071253ddeec5aaddbef70cec"} Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.764663 5136 scope.go:117] "RemoveContainer" containerID="00e0d8a0357bd3a77dd3d16818eed4043793708a4733f4c34b5b7f7d91bbd934" Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.764099 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.796175 5136 scope.go:117] "RemoveContainer" containerID="fed099b9188f196807029456bc7dc890a4330be0f7a578f5dbe9cd80bb8964b4" Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.801222 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-crbpt"] Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.809044 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-crbpt"] Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.832193 5136 scope.go:117] "RemoveContainer" containerID="67bbb7add8b9022daaf3503aeab8d7b776026890ae0b4ec7c7d3286c341b3cd5" Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.851493 5136 scope.go:117] "RemoveContainer" containerID="00e0d8a0357bd3a77dd3d16818eed4043793708a4733f4c34b5b7f7d91bbd934" Mar 20 08:25:59 crc kubenswrapper[5136]: E0320 08:25:59.853427 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00e0d8a0357bd3a77dd3d16818eed4043793708a4733f4c34b5b7f7d91bbd934\": container with ID starting with 00e0d8a0357bd3a77dd3d16818eed4043793708a4733f4c34b5b7f7d91bbd934 not found: ID does not exist" containerID="00e0d8a0357bd3a77dd3d16818eed4043793708a4733f4c34b5b7f7d91bbd934" Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.853473 5136 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00e0d8a0357bd3a77dd3d16818eed4043793708a4733f4c34b5b7f7d91bbd934"} err="failed to get container status \"00e0d8a0357bd3a77dd3d16818eed4043793708a4733f4c34b5b7f7d91bbd934\": rpc error: code = NotFound desc = could not find container \"00e0d8a0357bd3a77dd3d16818eed4043793708a4733f4c34b5b7f7d91bbd934\": container with ID starting with 00e0d8a0357bd3a77dd3d16818eed4043793708a4733f4c34b5b7f7d91bbd934 not found: ID does not exist" Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.853501 5136 scope.go:117] "RemoveContainer" containerID="fed099b9188f196807029456bc7dc890a4330be0f7a578f5dbe9cd80bb8964b4" Mar 20 08:25:59 crc kubenswrapper[5136]: E0320 08:25:59.853853 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fed099b9188f196807029456bc7dc890a4330be0f7a578f5dbe9cd80bb8964b4\": container with ID starting with fed099b9188f196807029456bc7dc890a4330be0f7a578f5dbe9cd80bb8964b4 not found: ID does not exist" containerID="fed099b9188f196807029456bc7dc890a4330be0f7a578f5dbe9cd80bb8964b4" Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.853886 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fed099b9188f196807029456bc7dc890a4330be0f7a578f5dbe9cd80bb8964b4"} err="failed to get container status \"fed099b9188f196807029456bc7dc890a4330be0f7a578f5dbe9cd80bb8964b4\": rpc error: code = NotFound desc = could not find container \"fed099b9188f196807029456bc7dc890a4330be0f7a578f5dbe9cd80bb8964b4\": container with ID starting with fed099b9188f196807029456bc7dc890a4330be0f7a578f5dbe9cd80bb8964b4 not found: ID does not exist" Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.853906 5136 scope.go:117] "RemoveContainer" containerID="67bbb7add8b9022daaf3503aeab8d7b776026890ae0b4ec7c7d3286c341b3cd5" Mar 20 08:25:59 crc kubenswrapper[5136]: E0320 
08:25:59.854157 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67bbb7add8b9022daaf3503aeab8d7b776026890ae0b4ec7c7d3286c341b3cd5\": container with ID starting with 67bbb7add8b9022daaf3503aeab8d7b776026890ae0b4ec7c7d3286c341b3cd5 not found: ID does not exist" containerID="67bbb7add8b9022daaf3503aeab8d7b776026890ae0b4ec7c7d3286c341b3cd5" Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.854193 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67bbb7add8b9022daaf3503aeab8d7b776026890ae0b4ec7c7d3286c341b3cd5"} err="failed to get container status \"67bbb7add8b9022daaf3503aeab8d7b776026890ae0b4ec7c7d3286c341b3cd5\": rpc error: code = NotFound desc = could not find container \"67bbb7add8b9022daaf3503aeab8d7b776026890ae0b4ec7c7d3286c341b3cd5\": container with ID starting with 67bbb7add8b9022daaf3503aeab8d7b776026890ae0b4ec7c7d3286c341b3cd5 not found: ID does not exist" Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.151202 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566586-tq64l"] Mar 20 08:26:00 crc kubenswrapper[5136]: E0320 08:26:00.152299 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86a4a5b4-ac25-409f-8ba9-e393aef21d43" containerName="registry-server" Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.152341 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="86a4a5b4-ac25-409f-8ba9-e393aef21d43" containerName="registry-server" Mar 20 08:26:00 crc kubenswrapper[5136]: E0320 08:26:00.152392 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86a4a5b4-ac25-409f-8ba9-e393aef21d43" containerName="extract-utilities" Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.152414 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="86a4a5b4-ac25-409f-8ba9-e393aef21d43" containerName="extract-utilities" Mar 20 08:26:00 crc 
kubenswrapper[5136]: E0320 08:26:00.152453 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86a4a5b4-ac25-409f-8ba9-e393aef21d43" containerName="extract-content" Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.152474 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="86a4a5b4-ac25-409f-8ba9-e393aef21d43" containerName="extract-content" Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.152899 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="86a4a5b4-ac25-409f-8ba9-e393aef21d43" containerName="registry-server" Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.155747 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566586-tq64l" Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.157895 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.161282 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566586-tq64l"] Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.192059 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.192304 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.219145 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64nrz\" (UniqueName: \"kubernetes.io/projected/201194d4-8f03-49d4-bf30-d69ece3e6d30-kube-api-access-64nrz\") pod \"auto-csr-approver-29566586-tq64l\" (UID: \"201194d4-8f03-49d4-bf30-d69ece3e6d30\") " pod="openshift-infra/auto-csr-approver-29566586-tq64l" Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.320918 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64nrz\" (UniqueName: \"kubernetes.io/projected/201194d4-8f03-49d4-bf30-d69ece3e6d30-kube-api-access-64nrz\") pod \"auto-csr-approver-29566586-tq64l\" (UID: \"201194d4-8f03-49d4-bf30-d69ece3e6d30\") " pod="openshift-infra/auto-csr-approver-29566586-tq64l" Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.337356 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64nrz\" (UniqueName: \"kubernetes.io/projected/201194d4-8f03-49d4-bf30-d69ece3e6d30-kube-api-access-64nrz\") pod \"auto-csr-approver-29566586-tq64l\" (UID: \"201194d4-8f03-49d4-bf30-d69ece3e6d30\") " pod="openshift-infra/auto-csr-approver-29566586-tq64l" Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.408845 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86a4a5b4-ac25-409f-8ba9-e393aef21d43" path="/var/lib/kubelet/pods/86a4a5b4-ac25-409f-8ba9-e393aef21d43/volumes" Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.527466 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566586-tq64l" Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.993488 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566586-tq64l"] Mar 20 08:26:01 crc kubenswrapper[5136]: I0320 08:26:01.791138 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566586-tq64l" event={"ID":"201194d4-8f03-49d4-bf30-d69ece3e6d30","Type":"ContainerStarted","Data":"52cf5df512fc383d0f94258aea02d8aa8ccd6adae53161d3e085276f511cdb34"} Mar 20 08:26:02 crc kubenswrapper[5136]: I0320 08:26:02.799329 5136 generic.go:334] "Generic (PLEG): container finished" podID="201194d4-8f03-49d4-bf30-d69ece3e6d30" containerID="53356b00d0884cc08ef3105861c0ae9d4bfaf917f6a2b9dfbe1bccff6dec5b55" exitCode=0 Mar 20 08:26:02 crc kubenswrapper[5136]: I0320 08:26:02.799373 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566586-tq64l" event={"ID":"201194d4-8f03-49d4-bf30-d69ece3e6d30","Type":"ContainerDied","Data":"53356b00d0884cc08ef3105861c0ae9d4bfaf917f6a2b9dfbe1bccff6dec5b55"} Mar 20 08:26:04 crc kubenswrapper[5136]: I0320 08:26:04.056329 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566586-tq64l" Mar 20 08:26:04 crc kubenswrapper[5136]: I0320 08:26:04.077158 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64nrz\" (UniqueName: \"kubernetes.io/projected/201194d4-8f03-49d4-bf30-d69ece3e6d30-kube-api-access-64nrz\") pod \"201194d4-8f03-49d4-bf30-d69ece3e6d30\" (UID: \"201194d4-8f03-49d4-bf30-d69ece3e6d30\") " Mar 20 08:26:04 crc kubenswrapper[5136]: I0320 08:26:04.085663 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/201194d4-8f03-49d4-bf30-d69ece3e6d30-kube-api-access-64nrz" (OuterVolumeSpecName: "kube-api-access-64nrz") pod "201194d4-8f03-49d4-bf30-d69ece3e6d30" (UID: "201194d4-8f03-49d4-bf30-d69ece3e6d30"). InnerVolumeSpecName "kube-api-access-64nrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:26:04 crc kubenswrapper[5136]: I0320 08:26:04.177928 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64nrz\" (UniqueName: \"kubernetes.io/projected/201194d4-8f03-49d4-bf30-d69ece3e6d30-kube-api-access-64nrz\") on node \"crc\" DevicePath \"\"" Mar 20 08:26:04 crc kubenswrapper[5136]: I0320 08:26:04.825780 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566586-tq64l" event={"ID":"201194d4-8f03-49d4-bf30-d69ece3e6d30","Type":"ContainerDied","Data":"52cf5df512fc383d0f94258aea02d8aa8ccd6adae53161d3e085276f511cdb34"} Mar 20 08:26:04 crc kubenswrapper[5136]: I0320 08:26:04.826266 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52cf5df512fc383d0f94258aea02d8aa8ccd6adae53161d3e085276f511cdb34" Mar 20 08:26:04 crc kubenswrapper[5136]: I0320 08:26:04.825925 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566586-tq64l" Mar 20 08:26:05 crc kubenswrapper[5136]: I0320 08:26:05.121154 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566580-gxgs4"] Mar 20 08:26:05 crc kubenswrapper[5136]: I0320 08:26:05.125781 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566580-gxgs4"] Mar 20 08:26:06 crc kubenswrapper[5136]: I0320 08:26:06.407855 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c13adcfb-f420-46c9-bbde-3350b761780e" path="/var/lib/kubelet/pods/c13adcfb-f420-46c9-bbde-3350b761780e/volumes" Mar 20 08:26:37 crc kubenswrapper[5136]: I0320 08:26:37.015158 5136 scope.go:117] "RemoveContainer" containerID="49a3dc4dd1c8ed19d1dc39a8fdd22be54b526fe4de3c63ff0b1f4d1ea3e0a979" Mar 20 08:26:45 crc kubenswrapper[5136]: I0320 08:26:45.822085 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:26:45 crc kubenswrapper[5136]: I0320 08:26:45.822883 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:27:15 crc kubenswrapper[5136]: I0320 08:27:15.822053 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:27:15 crc kubenswrapper[5136]: 
I0320 08:27:15.822956 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:27:45 crc kubenswrapper[5136]: I0320 08:27:45.822245 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:27:45 crc kubenswrapper[5136]: I0320 08:27:45.822974 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:27:45 crc kubenswrapper[5136]: I0320 08:27:45.823046 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 08:27:45 crc kubenswrapper[5136]: I0320 08:27:45.824056 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1978392c6dea0f795648b9101e593b34f84554a1dfb9a32198f55771c5e697bb"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 08:27:45 crc kubenswrapper[5136]: I0320 08:27:45.824147 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" 
containerName="machine-config-daemon" containerID="cri-o://1978392c6dea0f795648b9101e593b34f84554a1dfb9a32198f55771c5e697bb" gracePeriod=600 Mar 20 08:27:46 crc kubenswrapper[5136]: I0320 08:27:46.650344 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="1978392c6dea0f795648b9101e593b34f84554a1dfb9a32198f55771c5e697bb" exitCode=0 Mar 20 08:27:46 crc kubenswrapper[5136]: I0320 08:27:46.650423 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"1978392c6dea0f795648b9101e593b34f84554a1dfb9a32198f55771c5e697bb"} Mar 20 08:27:46 crc kubenswrapper[5136]: I0320 08:27:46.651024 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5"} Mar 20 08:27:46 crc kubenswrapper[5136]: I0320 08:27:46.651050 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:28:00 crc kubenswrapper[5136]: I0320 08:28:00.152988 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566588-jjbzp"] Mar 20 08:28:00 crc kubenswrapper[5136]: E0320 08:28:00.154298 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="201194d4-8f03-49d4-bf30-d69ece3e6d30" containerName="oc" Mar 20 08:28:00 crc kubenswrapper[5136]: I0320 08:28:00.154320 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="201194d4-8f03-49d4-bf30-d69ece3e6d30" containerName="oc" Mar 20 08:28:00 crc kubenswrapper[5136]: I0320 08:28:00.154546 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="201194d4-8f03-49d4-bf30-d69ece3e6d30" containerName="oc" Mar 20 08:28:00 
crc kubenswrapper[5136]: I0320 08:28:00.155204 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566588-jjbzp" Mar 20 08:28:00 crc kubenswrapper[5136]: I0320 08:28:00.162279 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:28:00 crc kubenswrapper[5136]: I0320 08:28:00.162550 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:28:00 crc kubenswrapper[5136]: I0320 08:28:00.164135 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:28:00 crc kubenswrapper[5136]: I0320 08:28:00.177682 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566588-jjbzp"] Mar 20 08:28:00 crc kubenswrapper[5136]: I0320 08:28:00.266542 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dstqz\" (UniqueName: \"kubernetes.io/projected/ca2b685a-cbe9-4989-87d2-09c8c1b3a846-kube-api-access-dstqz\") pod \"auto-csr-approver-29566588-jjbzp\" (UID: \"ca2b685a-cbe9-4989-87d2-09c8c1b3a846\") " pod="openshift-infra/auto-csr-approver-29566588-jjbzp" Mar 20 08:28:00 crc kubenswrapper[5136]: I0320 08:28:00.368957 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dstqz\" (UniqueName: \"kubernetes.io/projected/ca2b685a-cbe9-4989-87d2-09c8c1b3a846-kube-api-access-dstqz\") pod \"auto-csr-approver-29566588-jjbzp\" (UID: \"ca2b685a-cbe9-4989-87d2-09c8c1b3a846\") " pod="openshift-infra/auto-csr-approver-29566588-jjbzp" Mar 20 08:28:00 crc kubenswrapper[5136]: I0320 08:28:00.394064 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dstqz\" (UniqueName: \"kubernetes.io/projected/ca2b685a-cbe9-4989-87d2-09c8c1b3a846-kube-api-access-dstqz\") 
pod \"auto-csr-approver-29566588-jjbzp\" (UID: \"ca2b685a-cbe9-4989-87d2-09c8c1b3a846\") " pod="openshift-infra/auto-csr-approver-29566588-jjbzp" Mar 20 08:28:00 crc kubenswrapper[5136]: I0320 08:28:00.497167 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566588-jjbzp" Mar 20 08:28:01 crc kubenswrapper[5136]: I0320 08:28:01.006882 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566588-jjbzp"] Mar 20 08:28:01 crc kubenswrapper[5136]: W0320 08:28:01.011181 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca2b685a_cbe9_4989_87d2_09c8c1b3a846.slice/crio-6e594069c9281d1350197c7f6cb507a3383788799464d74dfcd291283ac6e313 WatchSource:0}: Error finding container 6e594069c9281d1350197c7f6cb507a3383788799464d74dfcd291283ac6e313: Status 404 returned error can't find the container with id 6e594069c9281d1350197c7f6cb507a3383788799464d74dfcd291283ac6e313 Mar 20 08:28:01 crc kubenswrapper[5136]: I0320 08:28:01.772884 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566588-jjbzp" event={"ID":"ca2b685a-cbe9-4989-87d2-09c8c1b3a846","Type":"ContainerStarted","Data":"6e594069c9281d1350197c7f6cb507a3383788799464d74dfcd291283ac6e313"} Mar 20 08:28:02 crc kubenswrapper[5136]: I0320 08:28:02.779906 5136 generic.go:334] "Generic (PLEG): container finished" podID="ca2b685a-cbe9-4989-87d2-09c8c1b3a846" containerID="0bdf2244928c50e418739f666f637d0c122d85d20e0278df3b68b937bca89d79" exitCode=0 Mar 20 08:28:02 crc kubenswrapper[5136]: I0320 08:28:02.779958 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566588-jjbzp" event={"ID":"ca2b685a-cbe9-4989-87d2-09c8c1b3a846","Type":"ContainerDied","Data":"0bdf2244928c50e418739f666f637d0c122d85d20e0278df3b68b937bca89d79"} Mar 20 08:28:04 crc kubenswrapper[5136]: 
I0320 08:28:04.102699 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566588-jjbzp" Mar 20 08:28:04 crc kubenswrapper[5136]: I0320 08:28:04.120917 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dstqz\" (UniqueName: \"kubernetes.io/projected/ca2b685a-cbe9-4989-87d2-09c8c1b3a846-kube-api-access-dstqz\") pod \"ca2b685a-cbe9-4989-87d2-09c8c1b3a846\" (UID: \"ca2b685a-cbe9-4989-87d2-09c8c1b3a846\") " Mar 20 08:28:04 crc kubenswrapper[5136]: I0320 08:28:04.130033 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca2b685a-cbe9-4989-87d2-09c8c1b3a846-kube-api-access-dstqz" (OuterVolumeSpecName: "kube-api-access-dstqz") pod "ca2b685a-cbe9-4989-87d2-09c8c1b3a846" (UID: "ca2b685a-cbe9-4989-87d2-09c8c1b3a846"). InnerVolumeSpecName "kube-api-access-dstqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:28:04 crc kubenswrapper[5136]: I0320 08:28:04.223371 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dstqz\" (UniqueName: \"kubernetes.io/projected/ca2b685a-cbe9-4989-87d2-09c8c1b3a846-kube-api-access-dstqz\") on node \"crc\" DevicePath \"\"" Mar 20 08:28:04 crc kubenswrapper[5136]: I0320 08:28:04.802405 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566588-jjbzp" event={"ID":"ca2b685a-cbe9-4989-87d2-09c8c1b3a846","Type":"ContainerDied","Data":"6e594069c9281d1350197c7f6cb507a3383788799464d74dfcd291283ac6e313"} Mar 20 08:28:04 crc kubenswrapper[5136]: I0320 08:28:04.802444 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e594069c9281d1350197c7f6cb507a3383788799464d74dfcd291283ac6e313" Mar 20 08:28:04 crc kubenswrapper[5136]: I0320 08:28:04.802560 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566588-jjbzp" Mar 20 08:28:05 crc kubenswrapper[5136]: I0320 08:28:05.188010 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566582-wv2dn"] Mar 20 08:28:05 crc kubenswrapper[5136]: I0320 08:28:05.192337 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566582-wv2dn"] Mar 20 08:28:06 crc kubenswrapper[5136]: I0320 08:28:06.407261 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36fd17ca-2655-4388-807a-3740ab031402" path="/var/lib/kubelet/pods/36fd17ca-2655-4388-807a-3740ab031402/volumes" Mar 20 08:28:37 crc kubenswrapper[5136]: I0320 08:28:37.152889 5136 scope.go:117] "RemoveContainer" containerID="a529e171ce4e400e77421cdd13032062bdb0c3099972bc7c31cdc0391d1d0584" Mar 20 08:29:43 crc kubenswrapper[5136]: I0320 08:29:43.861167 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-jqplr"] Mar 20 08:29:43 crc kubenswrapper[5136]: I0320 08:29:43.869645 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-jqplr"] Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.009528 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-5cjkp"] Mar 20 08:29:44 crc kubenswrapper[5136]: E0320 08:29:44.009930 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2b685a-cbe9-4989-87d2-09c8c1b3a846" containerName="oc" Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.009953 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2b685a-cbe9-4989-87d2-09c8c1b3a846" containerName="oc" Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.010093 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca2b685a-cbe9-4989-87d2-09c8c1b3a846" containerName="oc" Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.010735 5136 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="crc-storage/crc-storage-crc-5cjkp" Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.012939 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.013141 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.013438 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.013681 5136 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-65jln" Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.014623 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-5cjkp"] Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.197171 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/13b3d6a1-236a-4eec-8755-d6673a652114-node-mnt\") pod \"crc-storage-crc-5cjkp\" (UID: \"13b3d6a1-236a-4eec-8755-d6673a652114\") " pod="crc-storage/crc-storage-crc-5cjkp" Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.197231 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8wpf\" (UniqueName: \"kubernetes.io/projected/13b3d6a1-236a-4eec-8755-d6673a652114-kube-api-access-m8wpf\") pod \"crc-storage-crc-5cjkp\" (UID: \"13b3d6a1-236a-4eec-8755-d6673a652114\") " pod="crc-storage/crc-storage-crc-5cjkp" Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.197271 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/13b3d6a1-236a-4eec-8755-d6673a652114-crc-storage\") pod \"crc-storage-crc-5cjkp\" 
(UID: \"13b3d6a1-236a-4eec-8755-d6673a652114\") " pod="crc-storage/crc-storage-crc-5cjkp" Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.298996 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8wpf\" (UniqueName: \"kubernetes.io/projected/13b3d6a1-236a-4eec-8755-d6673a652114-kube-api-access-m8wpf\") pod \"crc-storage-crc-5cjkp\" (UID: \"13b3d6a1-236a-4eec-8755-d6673a652114\") " pod="crc-storage/crc-storage-crc-5cjkp" Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.299126 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/13b3d6a1-236a-4eec-8755-d6673a652114-crc-storage\") pod \"crc-storage-crc-5cjkp\" (UID: \"13b3d6a1-236a-4eec-8755-d6673a652114\") " pod="crc-storage/crc-storage-crc-5cjkp" Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.299314 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/13b3d6a1-236a-4eec-8755-d6673a652114-node-mnt\") pod \"crc-storage-crc-5cjkp\" (UID: \"13b3d6a1-236a-4eec-8755-d6673a652114\") " pod="crc-storage/crc-storage-crc-5cjkp" Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.299786 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/13b3d6a1-236a-4eec-8755-d6673a652114-node-mnt\") pod \"crc-storage-crc-5cjkp\" (UID: \"13b3d6a1-236a-4eec-8755-d6673a652114\") " pod="crc-storage/crc-storage-crc-5cjkp" Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.299933 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/13b3d6a1-236a-4eec-8755-d6673a652114-crc-storage\") pod \"crc-storage-crc-5cjkp\" (UID: \"13b3d6a1-236a-4eec-8755-d6673a652114\") " pod="crc-storage/crc-storage-crc-5cjkp" Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.323841 
5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8wpf\" (UniqueName: \"kubernetes.io/projected/13b3d6a1-236a-4eec-8755-d6673a652114-kube-api-access-m8wpf\") pod \"crc-storage-crc-5cjkp\" (UID: \"13b3d6a1-236a-4eec-8755-d6673a652114\") " pod="crc-storage/crc-storage-crc-5cjkp" Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.327602 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-5cjkp" Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.409183 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="868b5502-6c3e-4e3b-bc43-c0875e71512f" path="/var/lib/kubelet/pods/868b5502-6c3e-4e3b-bc43-c0875e71512f/volumes" Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.798338 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-5cjkp"] Mar 20 08:29:45 crc kubenswrapper[5136]: I0320 08:29:45.604968 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-5cjkp" event={"ID":"13b3d6a1-236a-4eec-8755-d6673a652114","Type":"ContainerStarted","Data":"bcf9c40c06591cfa7b6dd8c1bc0c60d668bc51fbc72a7b0277f617cbeb8adae7"} Mar 20 08:29:46 crc kubenswrapper[5136]: I0320 08:29:46.611832 5136 generic.go:334] "Generic (PLEG): container finished" podID="13b3d6a1-236a-4eec-8755-d6673a652114" containerID="a84d841fa14dbb7d163049ae2a42d3d241fc2e9ace22731699a4238f410674cb" exitCode=0 Mar 20 08:29:46 crc kubenswrapper[5136]: I0320 08:29:46.611871 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-5cjkp" event={"ID":"13b3d6a1-236a-4eec-8755-d6673a652114","Type":"ContainerDied","Data":"a84d841fa14dbb7d163049ae2a42d3d241fc2e9ace22731699a4238f410674cb"} Mar 20 08:29:48 crc kubenswrapper[5136]: I0320 08:29:48.025168 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-5cjkp" Mar 20 08:29:48 crc kubenswrapper[5136]: I0320 08:29:48.152985 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/13b3d6a1-236a-4eec-8755-d6673a652114-crc-storage\") pod \"13b3d6a1-236a-4eec-8755-d6673a652114\" (UID: \"13b3d6a1-236a-4eec-8755-d6673a652114\") " Mar 20 08:29:48 crc kubenswrapper[5136]: I0320 08:29:48.153295 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/13b3d6a1-236a-4eec-8755-d6673a652114-node-mnt\") pod \"13b3d6a1-236a-4eec-8755-d6673a652114\" (UID: \"13b3d6a1-236a-4eec-8755-d6673a652114\") " Mar 20 08:29:48 crc kubenswrapper[5136]: I0320 08:29:48.153385 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8wpf\" (UniqueName: \"kubernetes.io/projected/13b3d6a1-236a-4eec-8755-d6673a652114-kube-api-access-m8wpf\") pod \"13b3d6a1-236a-4eec-8755-d6673a652114\" (UID: \"13b3d6a1-236a-4eec-8755-d6673a652114\") " Mar 20 08:29:48 crc kubenswrapper[5136]: I0320 08:29:48.153399 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13b3d6a1-236a-4eec-8755-d6673a652114-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "13b3d6a1-236a-4eec-8755-d6673a652114" (UID: "13b3d6a1-236a-4eec-8755-d6673a652114"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:29:48 crc kubenswrapper[5136]: I0320 08:29:48.153713 5136 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/13b3d6a1-236a-4eec-8755-d6673a652114-node-mnt\") on node \"crc\" DevicePath \"\"" Mar 20 08:29:48 crc kubenswrapper[5136]: I0320 08:29:48.159157 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13b3d6a1-236a-4eec-8755-d6673a652114-kube-api-access-m8wpf" (OuterVolumeSpecName: "kube-api-access-m8wpf") pod "13b3d6a1-236a-4eec-8755-d6673a652114" (UID: "13b3d6a1-236a-4eec-8755-d6673a652114"). InnerVolumeSpecName "kube-api-access-m8wpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:29:48 crc kubenswrapper[5136]: I0320 08:29:48.178653 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13b3d6a1-236a-4eec-8755-d6673a652114-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "13b3d6a1-236a-4eec-8755-d6673a652114" (UID: "13b3d6a1-236a-4eec-8755-d6673a652114"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:29:48 crc kubenswrapper[5136]: I0320 08:29:48.254728 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8wpf\" (UniqueName: \"kubernetes.io/projected/13b3d6a1-236a-4eec-8755-d6673a652114-kube-api-access-m8wpf\") on node \"crc\" DevicePath \"\"" Mar 20 08:29:48 crc kubenswrapper[5136]: I0320 08:29:48.254754 5136 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/13b3d6a1-236a-4eec-8755-d6673a652114-crc-storage\") on node \"crc\" DevicePath \"\"" Mar 20 08:29:48 crc kubenswrapper[5136]: I0320 08:29:48.628885 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-5cjkp" event={"ID":"13b3d6a1-236a-4eec-8755-d6673a652114","Type":"ContainerDied","Data":"bcf9c40c06591cfa7b6dd8c1bc0c60d668bc51fbc72a7b0277f617cbeb8adae7"} Mar 20 08:29:48 crc kubenswrapper[5136]: I0320 08:29:48.628919 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcf9c40c06591cfa7b6dd8c1bc0c60d668bc51fbc72a7b0277f617cbeb8adae7" Mar 20 08:29:48 crc kubenswrapper[5136]: I0320 08:29:48.628928 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-5cjkp" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.277017 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-5cjkp"] Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.282372 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-5cjkp"] Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.409543 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13b3d6a1-236a-4eec-8755-d6673a652114" path="/var/lib/kubelet/pods/13b3d6a1-236a-4eec-8755-d6673a652114/volumes" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.410379 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-m5j8k"] Mar 20 08:29:50 crc kubenswrapper[5136]: E0320 08:29:50.410785 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13b3d6a1-236a-4eec-8755-d6673a652114" containerName="storage" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.410852 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="13b3d6a1-236a-4eec-8755-d6673a652114" containerName="storage" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.411133 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="13b3d6a1-236a-4eec-8755-d6673a652114" containerName="storage" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.412943 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-m5j8k" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.415474 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-m5j8k"] Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.415784 5136 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-65jln" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.416021 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.416930 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.417335 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.586012 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94bmv\" (UniqueName: \"kubernetes.io/projected/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-kube-api-access-94bmv\") pod \"crc-storage-crc-m5j8k\" (UID: \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\") " pod="crc-storage/crc-storage-crc-m5j8k" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.586110 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-crc-storage\") pod \"crc-storage-crc-m5j8k\" (UID: \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\") " pod="crc-storage/crc-storage-crc-m5j8k" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.586236 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-node-mnt\") pod \"crc-storage-crc-m5j8k\" (UID: 
\"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\") " pod="crc-storage/crc-storage-crc-m5j8k" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.687410 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-node-mnt\") pod \"crc-storage-crc-m5j8k\" (UID: \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\") " pod="crc-storage/crc-storage-crc-m5j8k" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.687481 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94bmv\" (UniqueName: \"kubernetes.io/projected/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-kube-api-access-94bmv\") pod \"crc-storage-crc-m5j8k\" (UID: \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\") " pod="crc-storage/crc-storage-crc-m5j8k" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.687518 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-crc-storage\") pod \"crc-storage-crc-m5j8k\" (UID: \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\") " pod="crc-storage/crc-storage-crc-m5j8k" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.687656 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-node-mnt\") pod \"crc-storage-crc-m5j8k\" (UID: \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\") " pod="crc-storage/crc-storage-crc-m5j8k" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.688287 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-crc-storage\") pod \"crc-storage-crc-m5j8k\" (UID: \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\") " pod="crc-storage/crc-storage-crc-m5j8k" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.711912 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94bmv\" (UniqueName: \"kubernetes.io/projected/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-kube-api-access-94bmv\") pod \"crc-storage-crc-m5j8k\" (UID: \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\") " pod="crc-storage/crc-storage-crc-m5j8k" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.730327 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-m5j8k" Mar 20 08:29:51 crc kubenswrapper[5136]: I0320 08:29:51.163337 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-m5j8k"] Mar 20 08:29:51 crc kubenswrapper[5136]: I0320 08:29:51.648450 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-m5j8k" event={"ID":"b15f2d65-52cd-4e08-b35d-63b4e2f7559c","Type":"ContainerStarted","Data":"148a4ca422766dccfdc07a7b1f7ac69ec65bdc456d2ba035b2dae2a739a96f6e"} Mar 20 08:29:52 crc kubenswrapper[5136]: I0320 08:29:52.656786 5136 generic.go:334] "Generic (PLEG): container finished" podID="b15f2d65-52cd-4e08-b35d-63b4e2f7559c" containerID="57c12b11e582d6b79221d78f58da4f5e7fc7894223d64657ac01cb8df0d9ebf6" exitCode=0 Mar 20 08:29:52 crc kubenswrapper[5136]: I0320 08:29:52.656833 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-m5j8k" event={"ID":"b15f2d65-52cd-4e08-b35d-63b4e2f7559c","Type":"ContainerDied","Data":"57c12b11e582d6b79221d78f58da4f5e7fc7894223d64657ac01cb8df0d9ebf6"} Mar 20 08:29:53 crc kubenswrapper[5136]: I0320 08:29:53.978467 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-m5j8k" Mar 20 08:29:54 crc kubenswrapper[5136]: I0320 08:29:54.138725 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-node-mnt\") pod \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\" (UID: \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\") " Mar 20 08:29:54 crc kubenswrapper[5136]: I0320 08:29:54.138798 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94bmv\" (UniqueName: \"kubernetes.io/projected/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-kube-api-access-94bmv\") pod \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\" (UID: \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\") " Mar 20 08:29:54 crc kubenswrapper[5136]: I0320 08:29:54.138951 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "b15f2d65-52cd-4e08-b35d-63b4e2f7559c" (UID: "b15f2d65-52cd-4e08-b35d-63b4e2f7559c"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:29:54 crc kubenswrapper[5136]: I0320 08:29:54.138982 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-crc-storage\") pod \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\" (UID: \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\") " Mar 20 08:29:54 crc kubenswrapper[5136]: I0320 08:29:54.139388 5136 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-node-mnt\") on node \"crc\" DevicePath \"\"" Mar 20 08:29:54 crc kubenswrapper[5136]: I0320 08:29:54.144721 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-kube-api-access-94bmv" (OuterVolumeSpecName: "kube-api-access-94bmv") pod "b15f2d65-52cd-4e08-b35d-63b4e2f7559c" (UID: "b15f2d65-52cd-4e08-b35d-63b4e2f7559c"). InnerVolumeSpecName "kube-api-access-94bmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:29:54 crc kubenswrapper[5136]: I0320 08:29:54.156659 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "b15f2d65-52cd-4e08-b35d-63b4e2f7559c" (UID: "b15f2d65-52cd-4e08-b35d-63b4e2f7559c"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:29:54 crc kubenswrapper[5136]: I0320 08:29:54.240454 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94bmv\" (UniqueName: \"kubernetes.io/projected/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-kube-api-access-94bmv\") on node \"crc\" DevicePath \"\"" Mar 20 08:29:54 crc kubenswrapper[5136]: I0320 08:29:54.240495 5136 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-crc-storage\") on node \"crc\" DevicePath \"\"" Mar 20 08:29:54 crc kubenswrapper[5136]: I0320 08:29:54.674112 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-m5j8k" event={"ID":"b15f2d65-52cd-4e08-b35d-63b4e2f7559c","Type":"ContainerDied","Data":"148a4ca422766dccfdc07a7b1f7ac69ec65bdc456d2ba035b2dae2a739a96f6e"} Mar 20 08:29:54 crc kubenswrapper[5136]: I0320 08:29:54.674166 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="148a4ca422766dccfdc07a7b1f7ac69ec65bdc456d2ba035b2dae2a739a96f6e" Mar 20 08:29:54 crc kubenswrapper[5136]: I0320 08:29:54.674211 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-m5j8k" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.159532 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566590-9pt5f"] Mar 20 08:30:00 crc kubenswrapper[5136]: E0320 08:30:00.160577 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15f2d65-52cd-4e08-b35d-63b4e2f7559c" containerName="storage" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.160601 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15f2d65-52cd-4e08-b35d-63b4e2f7559c" containerName="storage" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.160975 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15f2d65-52cd-4e08-b35d-63b4e2f7559c" containerName="storage" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.161680 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566590-9pt5f" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.165761 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.166163 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.166335 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.173892 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh"] Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.175181 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.177284 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.177505 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.190088 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566590-9pt5f"] Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.209043 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh"] Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.329516 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rfpr\" (UniqueName: \"kubernetes.io/projected/a095f941-55a7-43b1-b794-f5f9d3c1cc97-kube-api-access-4rfpr\") pod \"collect-profiles-29566590-fplwh\" (UID: \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.329625 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjghn\" (UniqueName: \"kubernetes.io/projected/51c8efa5-d30c-4426-ad6e-4aa0880c0563-kube-api-access-pjghn\") pod \"auto-csr-approver-29566590-9pt5f\" (UID: \"51c8efa5-d30c-4426-ad6e-4aa0880c0563\") " pod="openshift-infra/auto-csr-approver-29566590-9pt5f" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.330073 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/a095f941-55a7-43b1-b794-f5f9d3c1cc97-secret-volume\") pod \"collect-profiles-29566590-fplwh\" (UID: \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.330122 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a095f941-55a7-43b1-b794-f5f9d3c1cc97-config-volume\") pod \"collect-profiles-29566590-fplwh\" (UID: \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.432227 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a095f941-55a7-43b1-b794-f5f9d3c1cc97-secret-volume\") pod \"collect-profiles-29566590-fplwh\" (UID: \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.432439 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a095f941-55a7-43b1-b794-f5f9d3c1cc97-config-volume\") pod \"collect-profiles-29566590-fplwh\" (UID: \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.432592 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rfpr\" (UniqueName: \"kubernetes.io/projected/a095f941-55a7-43b1-b794-f5f9d3c1cc97-kube-api-access-4rfpr\") pod \"collect-profiles-29566590-fplwh\" (UID: \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" Mar 20 08:30:00 crc kubenswrapper[5136]: 
I0320 08:30:00.432713 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjghn\" (UniqueName: \"kubernetes.io/projected/51c8efa5-d30c-4426-ad6e-4aa0880c0563-kube-api-access-pjghn\") pod \"auto-csr-approver-29566590-9pt5f\" (UID: \"51c8efa5-d30c-4426-ad6e-4aa0880c0563\") " pod="openshift-infra/auto-csr-approver-29566590-9pt5f" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.433797 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a095f941-55a7-43b1-b794-f5f9d3c1cc97-config-volume\") pod \"collect-profiles-29566590-fplwh\" (UID: \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.444974 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a095f941-55a7-43b1-b794-f5f9d3c1cc97-secret-volume\") pod \"collect-profiles-29566590-fplwh\" (UID: \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.452694 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rfpr\" (UniqueName: \"kubernetes.io/projected/a095f941-55a7-43b1-b794-f5f9d3c1cc97-kube-api-access-4rfpr\") pod \"collect-profiles-29566590-fplwh\" (UID: \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.454524 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjghn\" (UniqueName: \"kubernetes.io/projected/51c8efa5-d30c-4426-ad6e-4aa0880c0563-kube-api-access-pjghn\") pod \"auto-csr-approver-29566590-9pt5f\" (UID: \"51c8efa5-d30c-4426-ad6e-4aa0880c0563\") " 
pod="openshift-infra/auto-csr-approver-29566590-9pt5f" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.495490 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566590-9pt5f" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.505454 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.980805 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566590-9pt5f"] Mar 20 08:30:01 crc kubenswrapper[5136]: I0320 08:30:01.039956 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh"] Mar 20 08:30:01 crc kubenswrapper[5136]: W0320 08:30:01.043735 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda095f941_55a7_43b1_b794_f5f9d3c1cc97.slice/crio-6c4018d3cf9ebe3bcd6750da3434af99036efba58b7789357bacd0f1a0f25ab0 WatchSource:0}: Error finding container 6c4018d3cf9ebe3bcd6750da3434af99036efba58b7789357bacd0f1a0f25ab0: Status 404 returned error can't find the container with id 6c4018d3cf9ebe3bcd6750da3434af99036efba58b7789357bacd0f1a0f25ab0 Mar 20 08:30:01 crc kubenswrapper[5136]: I0320 08:30:01.733859 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566590-9pt5f" event={"ID":"51c8efa5-d30c-4426-ad6e-4aa0880c0563","Type":"ContainerStarted","Data":"85efc13d83e2f9a1344594bc7b808fcfb8fd76ffae6c969416730d2de9aeaeb7"} Mar 20 08:30:01 crc kubenswrapper[5136]: I0320 08:30:01.735932 5136 generic.go:334] "Generic (PLEG): container finished" podID="a095f941-55a7-43b1-b794-f5f9d3c1cc97" containerID="e8e14d68b90f3a95efeb8ef9a4f1e50d1c91dec543afa6a9b52e138a774c4cde" exitCode=0 Mar 20 08:30:01 crc kubenswrapper[5136]: I0320 
08:30:01.735958 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" event={"ID":"a095f941-55a7-43b1-b794-f5f9d3c1cc97","Type":"ContainerDied","Data":"e8e14d68b90f3a95efeb8ef9a4f1e50d1c91dec543afa6a9b52e138a774c4cde"} Mar 20 08:30:01 crc kubenswrapper[5136]: I0320 08:30:01.735977 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" event={"ID":"a095f941-55a7-43b1-b794-f5f9d3c1cc97","Type":"ContainerStarted","Data":"6c4018d3cf9ebe3bcd6750da3434af99036efba58b7789357bacd0f1a0f25ab0"} Mar 20 08:30:02 crc kubenswrapper[5136]: I0320 08:30:02.743067 5136 generic.go:334] "Generic (PLEG): container finished" podID="51c8efa5-d30c-4426-ad6e-4aa0880c0563" containerID="d67dfe1060ac0ac0db1818a3ab60ffceda0123c6ffe3b59b89e0430a3ae809a2" exitCode=0 Mar 20 08:30:02 crc kubenswrapper[5136]: I0320 08:30:02.743380 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566590-9pt5f" event={"ID":"51c8efa5-d30c-4426-ad6e-4aa0880c0563","Type":"ContainerDied","Data":"d67dfe1060ac0ac0db1818a3ab60ffceda0123c6ffe3b59b89e0430a3ae809a2"} Mar 20 08:30:03 crc kubenswrapper[5136]: I0320 08:30:03.001669 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" Mar 20 08:30:03 crc kubenswrapper[5136]: I0320 08:30:03.170404 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a095f941-55a7-43b1-b794-f5f9d3c1cc97-secret-volume\") pod \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\" (UID: \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\") " Mar 20 08:30:03 crc kubenswrapper[5136]: I0320 08:30:03.170555 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a095f941-55a7-43b1-b794-f5f9d3c1cc97-config-volume\") pod \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\" (UID: \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\") " Mar 20 08:30:03 crc kubenswrapper[5136]: I0320 08:30:03.170627 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rfpr\" (UniqueName: \"kubernetes.io/projected/a095f941-55a7-43b1-b794-f5f9d3c1cc97-kube-api-access-4rfpr\") pod \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\" (UID: \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\") " Mar 20 08:30:03 crc kubenswrapper[5136]: I0320 08:30:03.171284 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a095f941-55a7-43b1-b794-f5f9d3c1cc97-config-volume" (OuterVolumeSpecName: "config-volume") pod "a095f941-55a7-43b1-b794-f5f9d3c1cc97" (UID: "a095f941-55a7-43b1-b794-f5f9d3c1cc97"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:30:03 crc kubenswrapper[5136]: I0320 08:30:03.176399 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a095f941-55a7-43b1-b794-f5f9d3c1cc97-kube-api-access-4rfpr" (OuterVolumeSpecName: "kube-api-access-4rfpr") pod "a095f941-55a7-43b1-b794-f5f9d3c1cc97" (UID: "a095f941-55a7-43b1-b794-f5f9d3c1cc97"). 
InnerVolumeSpecName "kube-api-access-4rfpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:30:03 crc kubenswrapper[5136]: I0320 08:30:03.177656 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a095f941-55a7-43b1-b794-f5f9d3c1cc97-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a095f941-55a7-43b1-b794-f5f9d3c1cc97" (UID: "a095f941-55a7-43b1-b794-f5f9d3c1cc97"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:30:03 crc kubenswrapper[5136]: I0320 08:30:03.272169 5136 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a095f941-55a7-43b1-b794-f5f9d3c1cc97-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 08:30:03 crc kubenswrapper[5136]: I0320 08:30:03.272216 5136 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a095f941-55a7-43b1-b794-f5f9d3c1cc97-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 08:30:03 crc kubenswrapper[5136]: I0320 08:30:03.272229 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rfpr\" (UniqueName: \"kubernetes.io/projected/a095f941-55a7-43b1-b794-f5f9d3c1cc97-kube-api-access-4rfpr\") on node \"crc\" DevicePath \"\"" Mar 20 08:30:03 crc kubenswrapper[5136]: I0320 08:30:03.756284 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" Mar 20 08:30:03 crc kubenswrapper[5136]: I0320 08:30:03.756995 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" event={"ID":"a095f941-55a7-43b1-b794-f5f9d3c1cc97","Type":"ContainerDied","Data":"6c4018d3cf9ebe3bcd6750da3434af99036efba58b7789357bacd0f1a0f25ab0"} Mar 20 08:30:03 crc kubenswrapper[5136]: I0320 08:30:03.757046 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c4018d3cf9ebe3bcd6750da3434af99036efba58b7789357bacd0f1a0f25ab0" Mar 20 08:30:04 crc kubenswrapper[5136]: I0320 08:30:04.079516 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7"] Mar 20 08:30:04 crc kubenswrapper[5136]: I0320 08:30:04.087624 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7"] Mar 20 08:30:04 crc kubenswrapper[5136]: I0320 08:30:04.095366 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566590-9pt5f" Mar 20 08:30:04 crc kubenswrapper[5136]: I0320 08:30:04.284125 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjghn\" (UniqueName: \"kubernetes.io/projected/51c8efa5-d30c-4426-ad6e-4aa0880c0563-kube-api-access-pjghn\") pod \"51c8efa5-d30c-4426-ad6e-4aa0880c0563\" (UID: \"51c8efa5-d30c-4426-ad6e-4aa0880c0563\") " Mar 20 08:30:04 crc kubenswrapper[5136]: I0320 08:30:04.289036 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51c8efa5-d30c-4426-ad6e-4aa0880c0563-kube-api-access-pjghn" (OuterVolumeSpecName: "kube-api-access-pjghn") pod "51c8efa5-d30c-4426-ad6e-4aa0880c0563" (UID: "51c8efa5-d30c-4426-ad6e-4aa0880c0563"). 
InnerVolumeSpecName "kube-api-access-pjghn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:30:04 crc kubenswrapper[5136]: I0320 08:30:04.385373 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjghn\" (UniqueName: \"kubernetes.io/projected/51c8efa5-d30c-4426-ad6e-4aa0880c0563-kube-api-access-pjghn\") on node \"crc\" DevicePath \"\"" Mar 20 08:30:04 crc kubenswrapper[5136]: I0320 08:30:04.406536 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb32e01f-d49f-4ba1-a1d4-c693765737e7" path="/var/lib/kubelet/pods/eb32e01f-d49f-4ba1-a1d4-c693765737e7/volumes" Mar 20 08:30:04 crc kubenswrapper[5136]: I0320 08:30:04.763179 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566590-9pt5f" event={"ID":"51c8efa5-d30c-4426-ad6e-4aa0880c0563","Type":"ContainerDied","Data":"85efc13d83e2f9a1344594bc7b808fcfb8fd76ffae6c969416730d2de9aeaeb7"} Mar 20 08:30:04 crc kubenswrapper[5136]: I0320 08:30:04.763591 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85efc13d83e2f9a1344594bc7b808fcfb8fd76ffae6c969416730d2de9aeaeb7" Mar 20 08:30:04 crc kubenswrapper[5136]: I0320 08:30:04.763265 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566590-9pt5f" Mar 20 08:30:05 crc kubenswrapper[5136]: I0320 08:30:05.152849 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566584-ht4pj"] Mar 20 08:30:05 crc kubenswrapper[5136]: I0320 08:30:05.159496 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566584-ht4pj"] Mar 20 08:30:06 crc kubenswrapper[5136]: I0320 08:30:06.409013 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2" path="/var/lib/kubelet/pods/49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2/volumes" Mar 20 08:30:15 crc kubenswrapper[5136]: I0320 08:30:15.822429 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:30:15 crc kubenswrapper[5136]: I0320 08:30:15.823324 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:30:37 crc kubenswrapper[5136]: I0320 08:30:37.236567 5136 scope.go:117] "RemoveContainer" containerID="db23fd78398ebb125a153768bba0437d8fa09615fe8803585f26e9cdf330d2a9" Mar 20 08:30:37 crc kubenswrapper[5136]: I0320 08:30:37.284892 5136 scope.go:117] "RemoveContainer" containerID="b6e56033203d796df41b39eddfb04e55cd2822f9ba7f0e9edd26141d7d5d92b3" Mar 20 08:30:37 crc kubenswrapper[5136]: I0320 08:30:37.361361 5136 scope.go:117] "RemoveContainer" containerID="1634bfed9d3426f391a9ba220363e60d18b7a13e0b5dd7787df7f812b3c4e0ea" Mar 20 08:30:45 crc 
kubenswrapper[5136]: I0320 08:30:45.822397 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:30:45 crc kubenswrapper[5136]: I0320 08:30:45.823133 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:31:15 crc kubenswrapper[5136]: I0320 08:31:15.822493 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:31:15 crc kubenswrapper[5136]: I0320 08:31:15.823224 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:31:15 crc kubenswrapper[5136]: I0320 08:31:15.823329 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 08:31:15 crc kubenswrapper[5136]: I0320 08:31:15.824175 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5"} 
pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 08:31:15 crc kubenswrapper[5136]: I0320 08:31:15.824256 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" gracePeriod=600 Mar 20 08:31:15 crc kubenswrapper[5136]: E0320 08:31:15.956600 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:31:16 crc kubenswrapper[5136]: I0320 08:31:16.354638 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" exitCode=0 Mar 20 08:31:16 crc kubenswrapper[5136]: I0320 08:31:16.354687 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5"} Mar 20 08:31:16 crc kubenswrapper[5136]: I0320 08:31:16.354753 5136 scope.go:117] "RemoveContainer" containerID="1978392c6dea0f795648b9101e593b34f84554a1dfb9a32198f55771c5e697bb" Mar 20 08:31:16 crc kubenswrapper[5136]: I0320 08:31:16.355571 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 
20 08:31:16 crc kubenswrapper[5136]: E0320 08:31:16.356441 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:31:28 crc kubenswrapper[5136]: I0320 08:31:28.405262 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:31:28 crc kubenswrapper[5136]: E0320 08:31:28.406590 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:31:41 crc kubenswrapper[5136]: I0320 08:31:41.397519 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:31:41 crc kubenswrapper[5136]: E0320 08:31:41.398651 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:31:56 crc kubenswrapper[5136]: I0320 08:31:56.396450 5136 scope.go:117] "RemoveContainer" 
containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:31:56 crc kubenswrapper[5136]: E0320 08:31:56.397538 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.180164 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566592-gnh7d"] Mar 20 08:32:00 crc kubenswrapper[5136]: E0320 08:32:00.180454 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c8efa5-d30c-4426-ad6e-4aa0880c0563" containerName="oc" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.180466 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c8efa5-d30c-4426-ad6e-4aa0880c0563" containerName="oc" Mar 20 08:32:00 crc kubenswrapper[5136]: E0320 08:32:00.180489 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a095f941-55a7-43b1-b794-f5f9d3c1cc97" containerName="collect-profiles" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.180496 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a095f941-55a7-43b1-b794-f5f9d3c1cc97" containerName="collect-profiles" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.180610 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a095f941-55a7-43b1-b794-f5f9d3c1cc97" containerName="collect-profiles" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.180623 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="51c8efa5-d30c-4426-ad6e-4aa0880c0563" containerName="oc" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.181033 5136 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566592-gnh7d" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.183940 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.183943 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.184215 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.198452 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566592-gnh7d"] Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.278624 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz7sk\" (UniqueName: \"kubernetes.io/projected/2e2af690-159e-4938-b0b0-35e042cc8393-kube-api-access-cz7sk\") pod \"auto-csr-approver-29566592-gnh7d\" (UID: \"2e2af690-159e-4938-b0b0-35e042cc8393\") " pod="openshift-infra/auto-csr-approver-29566592-gnh7d" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.380582 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz7sk\" (UniqueName: \"kubernetes.io/projected/2e2af690-159e-4938-b0b0-35e042cc8393-kube-api-access-cz7sk\") pod \"auto-csr-approver-29566592-gnh7d\" (UID: \"2e2af690-159e-4938-b0b0-35e042cc8393\") " pod="openshift-infra/auto-csr-approver-29566592-gnh7d" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.403900 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz7sk\" (UniqueName: \"kubernetes.io/projected/2e2af690-159e-4938-b0b0-35e042cc8393-kube-api-access-cz7sk\") pod \"auto-csr-approver-29566592-gnh7d\" (UID: 
\"2e2af690-159e-4938-b0b0-35e042cc8393\") " pod="openshift-infra/auto-csr-approver-29566592-gnh7d" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.499497 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566592-gnh7d" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.781540 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566592-gnh7d"] Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.795238 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 08:32:01 crc kubenswrapper[5136]: I0320 08:32:01.727326 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566592-gnh7d" event={"ID":"2e2af690-159e-4938-b0b0-35e042cc8393","Type":"ContainerStarted","Data":"2e337ec26bbbf76ef0d66c40704a9d16485d50d4b23704445de55eadafddf622"} Mar 20 08:32:02 crc kubenswrapper[5136]: I0320 08:32:02.734943 5136 generic.go:334] "Generic (PLEG): container finished" podID="2e2af690-159e-4938-b0b0-35e042cc8393" containerID="6f1e73339774fdb849b7c14ca46c4e23637ecc11d975480f8593fb668065f9a0" exitCode=0 Mar 20 08:32:02 crc kubenswrapper[5136]: I0320 08:32:02.735006 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566592-gnh7d" event={"ID":"2e2af690-159e-4938-b0b0-35e042cc8393","Type":"ContainerDied","Data":"6f1e73339774fdb849b7c14ca46c4e23637ecc11d975480f8593fb668065f9a0"} Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.058427 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566592-gnh7d" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.141198 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz7sk\" (UniqueName: \"kubernetes.io/projected/2e2af690-159e-4938-b0b0-35e042cc8393-kube-api-access-cz7sk\") pod \"2e2af690-159e-4938-b0b0-35e042cc8393\" (UID: \"2e2af690-159e-4938-b0b0-35e042cc8393\") " Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.147952 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e2af690-159e-4938-b0b0-35e042cc8393-kube-api-access-cz7sk" (OuterVolumeSpecName: "kube-api-access-cz7sk") pod "2e2af690-159e-4938-b0b0-35e042cc8393" (UID: "2e2af690-159e-4938-b0b0-35e042cc8393"). InnerVolumeSpecName "kube-api-access-cz7sk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.243039 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz7sk\" (UniqueName: \"kubernetes.io/projected/2e2af690-159e-4938-b0b0-35e042cc8393-kube-api-access-cz7sk\") on node \"crc\" DevicePath \"\"" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.621943 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6648865bb9-n4kmb"] Mar 20 08:32:04 crc kubenswrapper[5136]: E0320 08:32:04.622510 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e2af690-159e-4938-b0b0-35e042cc8393" containerName="oc" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.622528 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e2af690-159e-4938-b0b0-35e042cc8393" containerName="oc" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.622657 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e2af690-159e-4938-b0b0-35e042cc8393" containerName="oc" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.623342 5136 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6648865bb9-n4kmb" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.624785 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.625067 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.625068 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.633737 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86ffc6867-88z5z"] Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.635220 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86ffc6867-88z5z" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.637689 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.637751 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-p8dqf" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.641922 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6648865bb9-n4kmb"] Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.654193 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4a91420-177b-479a-aeb6-0fdc31a375e7-config\") pod \"dnsmasq-dns-86ffc6867-88z5z\" (UID: \"e4a91420-177b-479a-aeb6-0fdc31a375e7\") " pod="openstack/dnsmasq-dns-86ffc6867-88z5z" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.654607 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/e4a91420-177b-479a-aeb6-0fdc31a375e7-dns-svc\") pod \"dnsmasq-dns-86ffc6867-88z5z\" (UID: \"e4a91420-177b-479a-aeb6-0fdc31a375e7\") " pod="openstack/dnsmasq-dns-86ffc6867-88z5z" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.654792 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm25t\" (UniqueName: \"kubernetes.io/projected/2c42ef9c-e931-4771-9d5f-3fa6b2a851c0-kube-api-access-zm25t\") pod \"dnsmasq-dns-6648865bb9-n4kmb\" (UID: \"2c42ef9c-e931-4771-9d5f-3fa6b2a851c0\") " pod="openstack/dnsmasq-dns-6648865bb9-n4kmb" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.655083 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9zw7\" (UniqueName: \"kubernetes.io/projected/e4a91420-177b-479a-aeb6-0fdc31a375e7-kube-api-access-l9zw7\") pod \"dnsmasq-dns-86ffc6867-88z5z\" (UID: \"e4a91420-177b-479a-aeb6-0fdc31a375e7\") " pod="openstack/dnsmasq-dns-86ffc6867-88z5z" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.655362 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c42ef9c-e931-4771-9d5f-3fa6b2a851c0-config\") pod \"dnsmasq-dns-6648865bb9-n4kmb\" (UID: \"2c42ef9c-e931-4771-9d5f-3fa6b2a851c0\") " pod="openstack/dnsmasq-dns-6648865bb9-n4kmb" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.673160 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86ffc6867-88z5z"] Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.748860 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566592-gnh7d" event={"ID":"2e2af690-159e-4938-b0b0-35e042cc8393","Type":"ContainerDied","Data":"2e337ec26bbbf76ef0d66c40704a9d16485d50d4b23704445de55eadafddf622"} Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.748895 5136 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e337ec26bbbf76ef0d66c40704a9d16485d50d4b23704445de55eadafddf622" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.748923 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566592-gnh7d" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.756382 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c42ef9c-e931-4771-9d5f-3fa6b2a851c0-config\") pod \"dnsmasq-dns-6648865bb9-n4kmb\" (UID: \"2c42ef9c-e931-4771-9d5f-3fa6b2a851c0\") " pod="openstack/dnsmasq-dns-6648865bb9-n4kmb" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.756447 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4a91420-177b-479a-aeb6-0fdc31a375e7-config\") pod \"dnsmasq-dns-86ffc6867-88z5z\" (UID: \"e4a91420-177b-479a-aeb6-0fdc31a375e7\") " pod="openstack/dnsmasq-dns-86ffc6867-88z5z" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.756464 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4a91420-177b-479a-aeb6-0fdc31a375e7-dns-svc\") pod \"dnsmasq-dns-86ffc6867-88z5z\" (UID: \"e4a91420-177b-479a-aeb6-0fdc31a375e7\") " pod="openstack/dnsmasq-dns-86ffc6867-88z5z" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.756484 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm25t\" (UniqueName: \"kubernetes.io/projected/2c42ef9c-e931-4771-9d5f-3fa6b2a851c0-kube-api-access-zm25t\") pod \"dnsmasq-dns-6648865bb9-n4kmb\" (UID: \"2c42ef9c-e931-4771-9d5f-3fa6b2a851c0\") " pod="openstack/dnsmasq-dns-6648865bb9-n4kmb" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.756530 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-l9zw7\" (UniqueName: \"kubernetes.io/projected/e4a91420-177b-479a-aeb6-0fdc31a375e7-kube-api-access-l9zw7\") pod \"dnsmasq-dns-86ffc6867-88z5z\" (UID: \"e4a91420-177b-479a-aeb6-0fdc31a375e7\") " pod="openstack/dnsmasq-dns-86ffc6867-88z5z" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.757587 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4a91420-177b-479a-aeb6-0fdc31a375e7-config\") pod \"dnsmasq-dns-86ffc6867-88z5z\" (UID: \"e4a91420-177b-479a-aeb6-0fdc31a375e7\") " pod="openstack/dnsmasq-dns-86ffc6867-88z5z" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.757618 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4a91420-177b-479a-aeb6-0fdc31a375e7-dns-svc\") pod \"dnsmasq-dns-86ffc6867-88z5z\" (UID: \"e4a91420-177b-479a-aeb6-0fdc31a375e7\") " pod="openstack/dnsmasq-dns-86ffc6867-88z5z" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.757765 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c42ef9c-e931-4771-9d5f-3fa6b2a851c0-config\") pod \"dnsmasq-dns-6648865bb9-n4kmb\" (UID: \"2c42ef9c-e931-4771-9d5f-3fa6b2a851c0\") " pod="openstack/dnsmasq-dns-6648865bb9-n4kmb" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.774505 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm25t\" (UniqueName: \"kubernetes.io/projected/2c42ef9c-e931-4771-9d5f-3fa6b2a851c0-kube-api-access-zm25t\") pod \"dnsmasq-dns-6648865bb9-n4kmb\" (UID: \"2c42ef9c-e931-4771-9d5f-3fa6b2a851c0\") " pod="openstack/dnsmasq-dns-6648865bb9-n4kmb" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.786884 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9zw7\" (UniqueName: 
\"kubernetes.io/projected/e4a91420-177b-479a-aeb6-0fdc31a375e7-kube-api-access-l9zw7\") pod \"dnsmasq-dns-86ffc6867-88z5z\" (UID: \"e4a91420-177b-479a-aeb6-0fdc31a375e7\") " pod="openstack/dnsmasq-dns-86ffc6867-88z5z" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.944392 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6648865bb9-n4kmb" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.953047 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86ffc6867-88z5z" Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.147637 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566586-tq64l"] Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.156232 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566586-tq64l"] Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.213880 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86ffc6867-88z5z"] Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.388840 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6648865bb9-n4kmb"] Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.437289 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q"] Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.439433 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q" Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.449405 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q"] Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.470134 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czkhl\" (UniqueName: \"kubernetes.io/projected/3045f340-8dd6-4a70-8407-ca021577d30c-kube-api-access-czkhl\") pod \"dnsmasq-dns-7f7c7bdb8c-sdq9q\" (UID: \"3045f340-8dd6-4a70-8407-ca021577d30c\") " pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q" Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.470173 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3045f340-8dd6-4a70-8407-ca021577d30c-config\") pod \"dnsmasq-dns-7f7c7bdb8c-sdq9q\" (UID: \"3045f340-8dd6-4a70-8407-ca021577d30c\") " pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q" Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.470212 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3045f340-8dd6-4a70-8407-ca021577d30c-dns-svc\") pod \"dnsmasq-dns-7f7c7bdb8c-sdq9q\" (UID: \"3045f340-8dd6-4a70-8407-ca021577d30c\") " pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q" Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.553762 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6648865bb9-n4kmb"] Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.571265 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3045f340-8dd6-4a70-8407-ca021577d30c-dns-svc\") pod \"dnsmasq-dns-7f7c7bdb8c-sdq9q\" (UID: \"3045f340-8dd6-4a70-8407-ca021577d30c\") " pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q" Mar 20 08:32:05 
crc kubenswrapper[5136]: I0320 08:32:05.571377 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czkhl\" (UniqueName: \"kubernetes.io/projected/3045f340-8dd6-4a70-8407-ca021577d30c-kube-api-access-czkhl\") pod \"dnsmasq-dns-7f7c7bdb8c-sdq9q\" (UID: \"3045f340-8dd6-4a70-8407-ca021577d30c\") " pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q" Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.571399 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3045f340-8dd6-4a70-8407-ca021577d30c-config\") pod \"dnsmasq-dns-7f7c7bdb8c-sdq9q\" (UID: \"3045f340-8dd6-4a70-8407-ca021577d30c\") " pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q" Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.572419 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3045f340-8dd6-4a70-8407-ca021577d30c-config\") pod \"dnsmasq-dns-7f7c7bdb8c-sdq9q\" (UID: \"3045f340-8dd6-4a70-8407-ca021577d30c\") " pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q" Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.572429 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3045f340-8dd6-4a70-8407-ca021577d30c-dns-svc\") pod \"dnsmasq-dns-7f7c7bdb8c-sdq9q\" (UID: \"3045f340-8dd6-4a70-8407-ca021577d30c\") " pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q" Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.613418 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czkhl\" (UniqueName: \"kubernetes.io/projected/3045f340-8dd6-4a70-8407-ca021577d30c-kube-api-access-czkhl\") pod \"dnsmasq-dns-7f7c7bdb8c-sdq9q\" (UID: \"3045f340-8dd6-4a70-8407-ca021577d30c\") " pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q" Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.725189 5136 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-86ffc6867-88z5z"] Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.751373 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-685785d49f-r6vtp"] Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.752897 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685785d49f-r6vtp" Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.760850 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685785d49f-r6vtp"] Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.761917 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q" Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.774237 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da7b3de9-906c-4470-9b45-498268d7161b-config\") pod \"dnsmasq-dns-685785d49f-r6vtp\" (UID: \"da7b3de9-906c-4470-9b45-498268d7161b\") " pod="openstack/dnsmasq-dns-685785d49f-r6vtp" Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.774321 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da7b3de9-906c-4470-9b45-498268d7161b-dns-svc\") pod \"dnsmasq-dns-685785d49f-r6vtp\" (UID: \"da7b3de9-906c-4470-9b45-498268d7161b\") " pod="openstack/dnsmasq-dns-685785d49f-r6vtp" Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.774419 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6w6c\" (UniqueName: \"kubernetes.io/projected/da7b3de9-906c-4470-9b45-498268d7161b-kube-api-access-r6w6c\") pod \"dnsmasq-dns-685785d49f-r6vtp\" (UID: \"da7b3de9-906c-4470-9b45-498268d7161b\") " pod="openstack/dnsmasq-dns-685785d49f-r6vtp" Mar 20 08:32:05 crc 
kubenswrapper[5136]: I0320 08:32:05.782975 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86ffc6867-88z5z" event={"ID":"e4a91420-177b-479a-aeb6-0fdc31a375e7","Type":"ContainerStarted","Data":"ed5c7fa75820fbac3dc8454615e81f7d5004c1cabef4298a4f0b3764020471ee"} Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.786190 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6648865bb9-n4kmb" event={"ID":"2c42ef9c-e931-4771-9d5f-3fa6b2a851c0","Type":"ContainerStarted","Data":"4912bc2f2e88dde7c6725f660921ca75fb40083a65fdbf07840e1559e1bb6656"} Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.879344 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6w6c\" (UniqueName: \"kubernetes.io/projected/da7b3de9-906c-4470-9b45-498268d7161b-kube-api-access-r6w6c\") pod \"dnsmasq-dns-685785d49f-r6vtp\" (UID: \"da7b3de9-906c-4470-9b45-498268d7161b\") " pod="openstack/dnsmasq-dns-685785d49f-r6vtp" Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.879435 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da7b3de9-906c-4470-9b45-498268d7161b-config\") pod \"dnsmasq-dns-685785d49f-r6vtp\" (UID: \"da7b3de9-906c-4470-9b45-498268d7161b\") " pod="openstack/dnsmasq-dns-685785d49f-r6vtp" Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.879453 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da7b3de9-906c-4470-9b45-498268d7161b-dns-svc\") pod \"dnsmasq-dns-685785d49f-r6vtp\" (UID: \"da7b3de9-906c-4470-9b45-498268d7161b\") " pod="openstack/dnsmasq-dns-685785d49f-r6vtp" Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.881455 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da7b3de9-906c-4470-9b45-498268d7161b-config\") pod 
\"dnsmasq-dns-685785d49f-r6vtp\" (UID: \"da7b3de9-906c-4470-9b45-498268d7161b\") " pod="openstack/dnsmasq-dns-685785d49f-r6vtp" Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.882929 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da7b3de9-906c-4470-9b45-498268d7161b-dns-svc\") pod \"dnsmasq-dns-685785d49f-r6vtp\" (UID: \"da7b3de9-906c-4470-9b45-498268d7161b\") " pod="openstack/dnsmasq-dns-685785d49f-r6vtp" Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.912880 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6w6c\" (UniqueName: \"kubernetes.io/projected/da7b3de9-906c-4470-9b45-498268d7161b-kube-api-access-r6w6c\") pod \"dnsmasq-dns-685785d49f-r6vtp\" (UID: \"da7b3de9-906c-4470-9b45-498268d7161b\") " pod="openstack/dnsmasq-dns-685785d49f-r6vtp" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.077896 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685785d49f-r6vtp" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.353210 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q"] Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.424519 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="201194d4-8f03-49d4-bf30-d69ece3e6d30" path="/var/lib/kubelet/pods/201194d4-8f03-49d4-bf30-d69ece3e6d30/volumes" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.611631 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.613024 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.614437 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-hxrcr" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.614898 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.615003 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.615152 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.615283 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.615366 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.615670 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.630372 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.696533 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685785d49f-r6vtp"] Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.700155 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: 
I0320 08:32:06.700188 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49b925de-d698-4589-9f71-cf485dd617d2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.700222 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.700237 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.700255 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.700497 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-config-data\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.700551 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5rbp\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-kube-api-access-q5rbp\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.700589 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49b925de-d698-4589-9f71-cf485dd617d2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.700630 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.700722 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.700756 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.797793 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-685785d49f-r6vtp" event={"ID":"da7b3de9-906c-4470-9b45-498268d7161b","Type":"ContainerStarted","Data":"b875bc351439176b0ec46aee9af86021a8df8774ef7b812887178dc8835e12d7"} Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.799793 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q" event={"ID":"3045f340-8dd6-4a70-8407-ca021577d30c","Type":"ContainerStarted","Data":"4aa6b047cacc9f7c7b7b22f5ea520282914082a02383d0fe88b6a663fa015092"} Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.801573 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.801605 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49b925de-d698-4589-9f71-cf485dd617d2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.801646 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.801670 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.801686 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.801755 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-config-data\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.801771 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5rbp\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-kube-api-access-q5rbp\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.801789 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49b925de-d698-4589-9f71-cf485dd617d2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.801808 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.801854 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.801871 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.802846 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.803122 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-config-data\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.804544 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.804658 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-plugins\") 
pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.805623 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.809422 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49b925de-d698-4589-9f71-cf485dd617d2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.809621 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
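The entries above trace the kubelet volume-mount lifecycle for `openstack/rabbitmq-server-0`: each volume goes through `operationExecutor.VerifyControllerAttachedVolume started`, then `operationExecutor.MountVolume started`, then `MountVolume.SetUp succeeded`. A minimal sketch of reconstructing that per-volume order from lines like these (the regex is inferred from this log's wording, not from any official klog schema, and the shortened sample lines are illustrative):

```python
import re
from collections import defaultdict

# Hypothetical parser, inferred from the log format above: extract the
# operation name, phase, and volume name from each kubelet line and group
# events per volume to recover the mount lifecycle order.
EVENT_RE = re.compile(
    r'(?:operationExecutor\.)?'
    r'(VerifyControllerAttachedVolume|MountVolume(?:\.SetUp|\.MountDevice)?) '
    r'(started|succeeded) for volume \\"([^\\]+)\\"'
)

# Abbreviated stand-ins for the full journal lines shown above.
log_lines = [
    r'I0320 08:32:06.700497 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" ..."',
    r'I0320 08:32:06.801755 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" ..."',
    r'I0320 08:32:06.803122 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" ..."',
]

lifecycle = defaultdict(list)
for line in log_lines:
    m = EVENT_RE.search(line)
    if m:
        op, phase, volume = m.groups()
        lifecycle[volume].append((op, phase))

print(lifecycle["config-data"])
```

Running this over the full journal (one entry per line) would show the same three-phase sequence for every volume of the pod, with the CSI-backed PVC additionally passing through `MountVolume.MountDevice succeeded`.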
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.809649 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ffa5bed1f53993894ec26cbc3fe1cf1f67f60a4766508e053a6d4d74251ebc8b/globalmount\"" pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.811401 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.812190 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49b925de-d698-4589-9f71-cf485dd617d2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.818316 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.822272 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5rbp\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-kube-api-access-q5rbp\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") 
" pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.842254 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.883294 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.884341 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.887111 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.887249 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.887294 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.887436 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.887502 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rxs59" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.887579 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.887695 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 20 
08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.898387 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.903001 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/144d1953-0072-4346-9aa6-83afc44fdb3b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.903074 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.903239 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.903293 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.903325 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.903418 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.903488 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22ffp\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-kube-api-access-22ffp\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.903559 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.903617 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/144d1953-0072-4346-9aa6-83afc44fdb3b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.903714 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.903794 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.977063 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.004824 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.004876 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.004894 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.004913 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.004931 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.004948 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22ffp\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-kube-api-access-22ffp\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.004973 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.004994 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/144d1953-0072-4346-9aa6-83afc44fdb3b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.005024 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.005050 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.005075 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/144d1953-0072-4346-9aa6-83afc44fdb3b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.006252 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.006522 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.008301 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.008909 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.009627 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.010095 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.010119 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d0b8d36279754dae866b74592d574f198fefe86644de71828fcab427244d57e1/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.011123 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/144d1953-0072-4346-9aa6-83afc44fdb3b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.011215 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.013904 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/144d1953-0072-4346-9aa6-83afc44fdb3b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.014985 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.023549 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22ffp\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-kube-api-access-22ffp\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.075102 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.207507 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.337422 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.341448 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.345546 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.350077 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.350081 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.350198 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-vj2gs" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.352139 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.370601 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.421554 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 08:32:07 crc kubenswrapper[5136]: W0320 08:32:07.450496 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49b925de_d698_4589_9f71_cf485dd617d2.slice/crio-e183efadd223541a56fac8eea2671471e01e5264c953e53547b41494176cef67 WatchSource:0}: Error finding container e183efadd223541a56fac8eea2671471e01e5264c953e53547b41494176cef67: Status 404 returned error can't find the container with id e183efadd223541a56fac8eea2671471e01e5264c953e53547b41494176cef67 Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.512470 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-kolla-config\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.512607 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-config-data-default\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.512679 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.512726 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf0c76a-c284-44b5-9aee-293de926cb90-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.512767 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trz2t\" (UniqueName: \"kubernetes.io/projected/9cf0c76a-c284-44b5-9aee-293de926cb90-kube-api-access-trz2t\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.512793 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/9cf0c76a-c284-44b5-9aee-293de926cb90-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.512842 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9cf0c76a-c284-44b5-9aee-293de926cb90-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.512933 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.616452 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.616512 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-kolla-config\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.616544 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-config-data-default\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.616591 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.616629 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf0c76a-c284-44b5-9aee-293de926cb90-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.616651 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trz2t\" (UniqueName: \"kubernetes.io/projected/9cf0c76a-c284-44b5-9aee-293de926cb90-kube-api-access-trz2t\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.616667 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf0c76a-c284-44b5-9aee-293de926cb90-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.616692 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/9cf0c76a-c284-44b5-9aee-293de926cb90-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.617093 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9cf0c76a-c284-44b5-9aee-293de926cb90-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.618634 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-config-data-default\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.619497 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-kolla-config\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.620186 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.621937 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.621969 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1dd8871675ece06a49a0475a0d4042d4a0827aefbbf3be5b2f22e999c485a8b1/globalmount\"" pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.623670 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf0c76a-c284-44b5-9aee-293de926cb90-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.624558 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf0c76a-c284-44b5-9aee-293de926cb90-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.632998 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trz2t\" (UniqueName: \"kubernetes.io/projected/9cf0c76a-c284-44b5-9aee-293de926cb90-kube-api-access-trz2t\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.680932 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e\") pod 
\"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.694243 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.876004 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"144d1953-0072-4346-9aa6-83afc44fdb3b","Type":"ContainerStarted","Data":"7392b8c85d117a71e5c4a2c47ce52f8f48e947a5928969391300b523dfc80f5d"} Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.878608 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49b925de-d698-4589-9f71-cf485dd617d2","Type":"ContainerStarted","Data":"e183efadd223541a56fac8eea2671471e01e5264c953e53547b41494176cef67"} Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.965131 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 20 08:32:08 crc kubenswrapper[5136]: I0320 08:32:08.408994 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:32:08 crc kubenswrapper[5136]: E0320 08:32:08.409569 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:32:08 crc kubenswrapper[5136]: I0320 08:32:08.425303 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 20 08:32:08 crc kubenswrapper[5136]: I0320 08:32:08.886738 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-galera-0" event={"ID":"9cf0c76a-c284-44b5-9aee-293de926cb90","Type":"ContainerStarted","Data":"a3ca3c82737ff9666b59eaa26c9fcedbcaf8829fd4670afceb4988d0c1b4a157"} Mar 20 08:32:08 crc kubenswrapper[5136]: I0320 08:32:08.951484 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 20 08:32:08 crc kubenswrapper[5136]: I0320 08:32:08.953004 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:08 crc kubenswrapper[5136]: I0320 08:32:08.956211 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 20 08:32:08 crc kubenswrapper[5136]: I0320 08:32:08.957428 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 20 08:32:08 crc kubenswrapper[5136]: I0320 08:32:08.958146 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-mtswd" Mar 20 08:32:08 crc kubenswrapper[5136]: I0320 08:32:08.959531 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 20 08:32:08 crc kubenswrapper[5136]: I0320 08:32:08.963557 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.146787 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.146877 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4bc380a-4852-40d3-b03d-67f762c778d3-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.146912 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.147155 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.147211 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.147249 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d4bc380a-4852-40d3-b03d-67f762c778d3-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.147278 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6mxw\" (UniqueName: \"kubernetes.io/projected/d4bc380a-4852-40d3-b03d-67f762c778d3-kube-api-access-c6mxw\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.147306 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4bc380a-4852-40d3-b03d-67f762c778d3-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.248409 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.251637 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.251665 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d4bc380a-4852-40d3-b03d-67f762c778d3-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.251691 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-c6mxw\" (UniqueName: \"kubernetes.io/projected/d4bc380a-4852-40d3-b03d-67f762c778d3-kube-api-access-c6mxw\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.251720 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4bc380a-4852-40d3-b03d-67f762c778d3-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.251784 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.252028 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4bc380a-4852-40d3-b03d-67f762c778d3-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.252073 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.252914 5136 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.250099 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.253432 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.256176 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.256211 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a5c355bf1a7209505d65ff14ef56fc7e9b635a23bbc586188890ca98ac6ccf4c/globalmount\"" pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.256825 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d4bc380a-4852-40d3-b03d-67f762c778d3-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.264384 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4bc380a-4852-40d3-b03d-67f762c778d3-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.265054 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4bc380a-4852-40d3-b03d-67f762c778d3-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.304252 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.304987 5136 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-c6mxw\" (UniqueName: \"kubernetes.io/projected/d4bc380a-4852-40d3-b03d-67f762c778d3-kube-api-access-c6mxw\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.306048 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.312292 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-6n4pc" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.312936 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.313391 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.316285 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.332760 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.346857 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.468871 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.469079 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-kolla-config\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.469249 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.469286 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xz48\" (UniqueName: \"kubernetes.io/projected/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-kube-api-access-9xz48\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.469394 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-config-data\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 
08:32:09.571293 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-config-data\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.572404 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-config-data\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.572880 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.574001 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-kolla-config\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.574301 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-kolla-config\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.574456 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-memcached-tls-certs\") pod \"memcached-0\" (UID: 
\"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.574491 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xz48\" (UniqueName: \"kubernetes.io/projected/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-kube-api-access-9xz48\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.576986 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.577495 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.596608 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xz48\" (UniqueName: \"kubernetes.io/projected/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-kube-api-access-9xz48\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.678772 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.876123 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 20 08:32:10 crc kubenswrapper[5136]: I0320 08:32:10.183599 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 20 08:32:10 crc kubenswrapper[5136]: I0320 08:32:10.908405 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6fcd7752-be4a-45af-b12d-f4ee6275b3b3","Type":"ContainerStarted","Data":"9cf9cadd89e2b28a829e6e81692bf2693c40f2c59fbdfc4c88536b7ae65a16d3"} Mar 20 08:32:10 crc kubenswrapper[5136]: I0320 08:32:10.912343 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d4bc380a-4852-40d3-b03d-67f762c778d3","Type":"ContainerStarted","Data":"8744afbb6fc5b78de44cc1ad3a2d2c06bbc7e574d3d24b9b63a1c4c9c4199a2b"} Mar 20 08:32:23 crc kubenswrapper[5136]: I0320 08:32:23.396544 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:32:23 crc kubenswrapper[5136]: E0320 08:32:23.397292 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:32:33 crc kubenswrapper[5136]: E0320 08:32:33.266403 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:39fc4cb70f516d8e9b48225bc0a253ef" Mar 20 08:32:33 crc kubenswrapper[5136]: E0320 
08:32:33.266726 5136 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:39fc4cb70f516d8e9b48225bc0a253ef" Mar 20 08:32:33 crc kubenswrapper[5136]: E0320 08:32:33.266850 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8chc6h5bh56fh546hb7hc8h67h5bchffh577h697h5b5h5bdh59bhf6hf4h558hb5h578h595h5cchfbh644h59ch7fh654h547h587h5cbh5d5h8fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-czkhl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[]
,Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7f7c7bdb8c-sdq9q_openstack(3045f340-8dd6-4a70-8407-ca021577d30c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 20 08:32:33 crc kubenswrapper[5136]: E0320 08:32:33.267982 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q" podUID="3045f340-8dd6-4a70-8407-ca021577d30c" Mar 20 08:32:34 crc kubenswrapper[5136]: E0320 08:32:34.104464 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:39fc4cb70f516d8e9b48225bc0a253ef\\\"\"" pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q" podUID="3045f340-8dd6-4a70-8407-ca021577d30c" Mar 20 08:32:34 crc kubenswrapper[5136]: E0320 08:32:34.241395 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:39fc4cb70f516d8e9b48225bc0a253ef" Mar 20 08:32:34 crc kubenswrapper[5136]: E0320 08:32:34.241449 5136 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:39fc4cb70f516d8e9b48225bc0a253ef" Mar 20 08:32:34 crc kubenswrapper[5136]: E0320 08:32:34.242056 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n647h57bh695h68dh54fhf5hc5h67h5d4hb6h696h685h54ch6h599h5c5h679h74h689h644h5c8h64ch555h5c6h5dh569h698h59fh66ch57bh5b9hb7q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zm25t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,
},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6648865bb9-n4kmb_openstack(2c42ef9c-e931-4771-9d5f-3fa6b2a851c0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 20 08:32:34 crc kubenswrapper[5136]: E0320 08:32:34.243914 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6648865bb9-n4kmb" podUID="2c42ef9c-e931-4771-9d5f-3fa6b2a851c0" Mar 20 08:32:34 crc kubenswrapper[5136]: I0320 08:32:34.396759 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:32:34 crc kubenswrapper[5136]: E0320 08:32:34.397032 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.110202 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6fcd7752-be4a-45af-b12d-f4ee6275b3b3","Type":"ContainerStarted","Data":"6d1d3496245e4a95aee0cd94c2cf161e20d1b4ed2af711b68df528966e0cfc53"} Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.110525 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.112706 5136 generic.go:334] "Generic (PLEG): container 
finished" podID="e4a91420-177b-479a-aeb6-0fdc31a375e7" containerID="5f4f3dcd0f729e1778bba488d0db6bc5470314dde01c438d740b106a4b7b2bc2" exitCode=0 Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.112772 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86ffc6867-88z5z" event={"ID":"e4a91420-177b-479a-aeb6-0fdc31a375e7","Type":"ContainerDied","Data":"5f4f3dcd0f729e1778bba488d0db6bc5470314dde01c438d740b106a4b7b2bc2"} Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.116147 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d4bc380a-4852-40d3-b03d-67f762c778d3","Type":"ContainerStarted","Data":"249161d201c86c824c827618ff63c208e5d4f7836f10cac1b975be51340fe2bc"} Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.118129 5136 generic.go:334] "Generic (PLEG): container finished" podID="da7b3de9-906c-4470-9b45-498268d7161b" containerID="7f6641fccf5c97ba3e9c777aa43831b0b2400a29f6d3abc456305f538c1396c0" exitCode=0 Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.118189 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685785d49f-r6vtp" event={"ID":"da7b3de9-906c-4470-9b45-498268d7161b","Type":"ContainerDied","Data":"7f6641fccf5c97ba3e9c777aa43831b0b2400a29f6d3abc456305f538c1396c0"} Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.120239 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9cf0c76a-c284-44b5-9aee-293de926cb90","Type":"ContainerStarted","Data":"3c2ce29bf443e1d81d3f6ce701a5e5efbf230b68ea1176fdf10bcba0865d6d77"} Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.128479 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.222406253 podStartE2EDuration="26.128465123s" podCreationTimestamp="2026-03-20 08:32:09 +0000 UTC" firstStartedPulling="2026-03-20 08:32:10.211203214 +0000 UTC 
m=+6162.470514365" lastFinishedPulling="2026-03-20 08:32:34.117262054 +0000 UTC m=+6186.376573235" observedRunningTime="2026-03-20 08:32:35.12838611 +0000 UTC m=+6187.387697291" watchObservedRunningTime="2026-03-20 08:32:35.128465123 +0000 UTC m=+6187.387776274" Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.463029 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86ffc6867-88z5z" Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.467647 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6648865bb9-n4kmb" Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.561880 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c42ef9c-e931-4771-9d5f-3fa6b2a851c0-config\") pod \"2c42ef9c-e931-4771-9d5f-3fa6b2a851c0\" (UID: \"2c42ef9c-e931-4771-9d5f-3fa6b2a851c0\") " Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.561946 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4a91420-177b-479a-aeb6-0fdc31a375e7-dns-svc\") pod \"e4a91420-177b-479a-aeb6-0fdc31a375e7\" (UID: \"e4a91420-177b-479a-aeb6-0fdc31a375e7\") " Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.561969 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm25t\" (UniqueName: \"kubernetes.io/projected/2c42ef9c-e931-4771-9d5f-3fa6b2a851c0-kube-api-access-zm25t\") pod \"2c42ef9c-e931-4771-9d5f-3fa6b2a851c0\" (UID: \"2c42ef9c-e931-4771-9d5f-3fa6b2a851c0\") " Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.561986 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4a91420-177b-479a-aeb6-0fdc31a375e7-config\") pod \"e4a91420-177b-479a-aeb6-0fdc31a375e7\" (UID: 
\"e4a91420-177b-479a-aeb6-0fdc31a375e7\") " Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.562065 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9zw7\" (UniqueName: \"kubernetes.io/projected/e4a91420-177b-479a-aeb6-0fdc31a375e7-kube-api-access-l9zw7\") pod \"e4a91420-177b-479a-aeb6-0fdc31a375e7\" (UID: \"e4a91420-177b-479a-aeb6-0fdc31a375e7\") " Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.562637 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c42ef9c-e931-4771-9d5f-3fa6b2a851c0-config" (OuterVolumeSpecName: "config") pod "2c42ef9c-e931-4771-9d5f-3fa6b2a851c0" (UID: "2c42ef9c-e931-4771-9d5f-3fa6b2a851c0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.563156 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c42ef9c-e931-4771-9d5f-3fa6b2a851c0-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.568143 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c42ef9c-e931-4771-9d5f-3fa6b2a851c0-kube-api-access-zm25t" (OuterVolumeSpecName: "kube-api-access-zm25t") pod "2c42ef9c-e931-4771-9d5f-3fa6b2a851c0" (UID: "2c42ef9c-e931-4771-9d5f-3fa6b2a851c0"). InnerVolumeSpecName "kube-api-access-zm25t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.569140 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4a91420-177b-479a-aeb6-0fdc31a375e7-kube-api-access-l9zw7" (OuterVolumeSpecName: "kube-api-access-l9zw7") pod "e4a91420-177b-479a-aeb6-0fdc31a375e7" (UID: "e4a91420-177b-479a-aeb6-0fdc31a375e7"). InnerVolumeSpecName "kube-api-access-l9zw7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.580046 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4a91420-177b-479a-aeb6-0fdc31a375e7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e4a91420-177b-479a-aeb6-0fdc31a375e7" (UID: "e4a91420-177b-479a-aeb6-0fdc31a375e7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.580538 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4a91420-177b-479a-aeb6-0fdc31a375e7-config" (OuterVolumeSpecName: "config") pod "e4a91420-177b-479a-aeb6-0fdc31a375e7" (UID: "e4a91420-177b-479a-aeb6-0fdc31a375e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.664675 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9zw7\" (UniqueName: \"kubernetes.io/projected/e4a91420-177b-479a-aeb6-0fdc31a375e7-kube-api-access-l9zw7\") on node \"crc\" DevicePath \"\"" Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.665003 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4a91420-177b-479a-aeb6-0fdc31a375e7-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.665017 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm25t\" (UniqueName: \"kubernetes.io/projected/2c42ef9c-e931-4771-9d5f-3fa6b2a851c0-kube-api-access-zm25t\") on node \"crc\" DevicePath \"\"" Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.665029 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4a91420-177b-479a-aeb6-0fdc31a375e7-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:32:36 crc 
kubenswrapper[5136]: I0320 08:32:36.128442 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"144d1953-0072-4346-9aa6-83afc44fdb3b","Type":"ContainerStarted","Data":"35c4b0791e9a204a171a31520811fac746150d8bfc15bd254e254723260ddd2f"} Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.129843 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86ffc6867-88z5z" event={"ID":"e4a91420-177b-479a-aeb6-0fdc31a375e7","Type":"ContainerDied","Data":"ed5c7fa75820fbac3dc8454615e81f7d5004c1cabef4298a4f0b3764020471ee"} Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.129881 5136 scope.go:117] "RemoveContainer" containerID="5f4f3dcd0f729e1778bba488d0db6bc5470314dde01c438d740b106a4b7b2bc2" Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.129885 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86ffc6867-88z5z" Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.131244 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49b925de-d698-4589-9f71-cf485dd617d2","Type":"ContainerStarted","Data":"ce615ae1b073a852fb2f8618428d9305c2f1f4a5211b5abec9f3978f7dd4129e"} Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.133669 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685785d49f-r6vtp" event={"ID":"da7b3de9-906c-4470-9b45-498268d7161b","Type":"ContainerStarted","Data":"4ae3666cafb7fa634348658939e12b17295ac451d707af44c7a236580bd13a3d"} Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.134152 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-685785d49f-r6vtp" Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.135470 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6648865bb9-n4kmb" 
event={"ID":"2c42ef9c-e931-4771-9d5f-3fa6b2a851c0","Type":"ContainerDied","Data":"4912bc2f2e88dde7c6725f660921ca75fb40083a65fdbf07840e1559e1bb6656"} Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.135593 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6648865bb9-n4kmb" Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.190086 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-685785d49f-r6vtp" podStartSLOduration=3.598541481 podStartE2EDuration="31.190059567s" podCreationTimestamp="2026-03-20 08:32:05 +0000 UTC" firstStartedPulling="2026-03-20 08:32:06.708989284 +0000 UTC m=+6158.968300435" lastFinishedPulling="2026-03-20 08:32:34.30050737 +0000 UTC m=+6186.559818521" observedRunningTime="2026-03-20 08:32:36.178901573 +0000 UTC m=+6188.438212734" watchObservedRunningTime="2026-03-20 08:32:36.190059567 +0000 UTC m=+6188.449370718" Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.251838 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6648865bb9-n4kmb"] Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.258709 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6648865bb9-n4kmb"] Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.292234 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86ffc6867-88z5z"] Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.301749 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86ffc6867-88z5z"] Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.406570 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c42ef9c-e931-4771-9d5f-3fa6b2a851c0" path="/var/lib/kubelet/pods/2c42ef9c-e931-4771-9d5f-3fa6b2a851c0/volumes" Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.406943 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e4a91420-177b-479a-aeb6-0fdc31a375e7" path="/var/lib/kubelet/pods/e4a91420-177b-479a-aeb6-0fdc31a375e7/volumes" Mar 20 08:32:37 crc kubenswrapper[5136]: I0320 08:32:37.461877 5136 scope.go:117] "RemoveContainer" containerID="53356b00d0884cc08ef3105861c0ae9d4bfaf917f6a2b9dfbe1bccff6dec5b55" Mar 20 08:32:39 crc kubenswrapper[5136]: I0320 08:32:39.168271 5136 generic.go:334] "Generic (PLEG): container finished" podID="9cf0c76a-c284-44b5-9aee-293de926cb90" containerID="3c2ce29bf443e1d81d3f6ce701a5e5efbf230b68ea1176fdf10bcba0865d6d77" exitCode=0 Mar 20 08:32:39 crc kubenswrapper[5136]: I0320 08:32:39.168372 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9cf0c76a-c284-44b5-9aee-293de926cb90","Type":"ContainerDied","Data":"3c2ce29bf443e1d81d3f6ce701a5e5efbf230b68ea1176fdf10bcba0865d6d77"} Mar 20 08:32:39 crc kubenswrapper[5136]: I0320 08:32:39.175744 5136 generic.go:334] "Generic (PLEG): container finished" podID="d4bc380a-4852-40d3-b03d-67f762c778d3" containerID="249161d201c86c824c827618ff63c208e5d4f7836f10cac1b975be51340fe2bc" exitCode=0 Mar 20 08:32:39 crc kubenswrapper[5136]: I0320 08:32:39.175782 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d4bc380a-4852-40d3-b03d-67f762c778d3","Type":"ContainerDied","Data":"249161d201c86c824c827618ff63c208e5d4f7836f10cac1b975be51340fe2bc"} Mar 20 08:32:39 crc kubenswrapper[5136]: I0320 08:32:39.683190 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 20 08:32:40 crc kubenswrapper[5136]: I0320 08:32:40.189871 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d4bc380a-4852-40d3-b03d-67f762c778d3","Type":"ContainerStarted","Data":"a2cd799ad38f20f3a20df188a90ca9d10f639dafb3f002a582a1fe8b8331c153"} Mar 20 08:32:40 crc kubenswrapper[5136]: I0320 08:32:40.192214 5136 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/openstack-galera-0" event={"ID":"9cf0c76a-c284-44b5-9aee-293de926cb90","Type":"ContainerStarted","Data":"c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe"} Mar 20 08:32:40 crc kubenswrapper[5136]: I0320 08:32:40.211571 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=8.838512441 podStartE2EDuration="33.211553273s" podCreationTimestamp="2026-03-20 08:32:07 +0000 UTC" firstStartedPulling="2026-03-20 08:32:09.90032936 +0000 UTC m=+6162.159640511" lastFinishedPulling="2026-03-20 08:32:34.273370192 +0000 UTC m=+6186.532681343" observedRunningTime="2026-03-20 08:32:40.206234409 +0000 UTC m=+6192.465545560" watchObservedRunningTime="2026-03-20 08:32:40.211553273 +0000 UTC m=+6192.470864424" Mar 20 08:32:41 crc kubenswrapper[5136]: I0320 08:32:41.080031 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-685785d49f-r6vtp" Mar 20 08:32:41 crc kubenswrapper[5136]: I0320 08:32:41.110158 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.308151123 podStartE2EDuration="35.110125557s" podCreationTimestamp="2026-03-20 08:32:06 +0000 UTC" firstStartedPulling="2026-03-20 08:32:08.444282651 +0000 UTC m=+6160.703593802" lastFinishedPulling="2026-03-20 08:32:34.246257085 +0000 UTC m=+6186.505568236" observedRunningTime="2026-03-20 08:32:40.236211955 +0000 UTC m=+6192.495523106" watchObservedRunningTime="2026-03-20 08:32:41.110125557 +0000 UTC m=+6193.369436718" Mar 20 08:32:41 crc kubenswrapper[5136]: I0320 08:32:41.139751 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q"] Mar 20 08:32:41 crc kubenswrapper[5136]: I0320 08:32:41.409918 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q" Mar 20 08:32:41 crc kubenswrapper[5136]: I0320 08:32:41.458286 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3045f340-8dd6-4a70-8407-ca021577d30c-config\") pod \"3045f340-8dd6-4a70-8407-ca021577d30c\" (UID: \"3045f340-8dd6-4a70-8407-ca021577d30c\") " Mar 20 08:32:41 crc kubenswrapper[5136]: I0320 08:32:41.458521 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3045f340-8dd6-4a70-8407-ca021577d30c-dns-svc\") pod \"3045f340-8dd6-4a70-8407-ca021577d30c\" (UID: \"3045f340-8dd6-4a70-8407-ca021577d30c\") " Mar 20 08:32:41 crc kubenswrapper[5136]: I0320 08:32:41.458668 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czkhl\" (UniqueName: \"kubernetes.io/projected/3045f340-8dd6-4a70-8407-ca021577d30c-kube-api-access-czkhl\") pod \"3045f340-8dd6-4a70-8407-ca021577d30c\" (UID: \"3045f340-8dd6-4a70-8407-ca021577d30c\") " Mar 20 08:32:41 crc kubenswrapper[5136]: I0320 08:32:41.458842 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3045f340-8dd6-4a70-8407-ca021577d30c-config" (OuterVolumeSpecName: "config") pod "3045f340-8dd6-4a70-8407-ca021577d30c" (UID: "3045f340-8dd6-4a70-8407-ca021577d30c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:32:41 crc kubenswrapper[5136]: I0320 08:32:41.458947 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3045f340-8dd6-4a70-8407-ca021577d30c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3045f340-8dd6-4a70-8407-ca021577d30c" (UID: "3045f340-8dd6-4a70-8407-ca021577d30c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:32:41 crc kubenswrapper[5136]: I0320 08:32:41.460350 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3045f340-8dd6-4a70-8407-ca021577d30c-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 08:32:41 crc kubenswrapper[5136]: I0320 08:32:41.460388 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3045f340-8dd6-4a70-8407-ca021577d30c-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:32:41 crc kubenswrapper[5136]: I0320 08:32:41.477139 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3045f340-8dd6-4a70-8407-ca021577d30c-kube-api-access-czkhl" (OuterVolumeSpecName: "kube-api-access-czkhl") pod "3045f340-8dd6-4a70-8407-ca021577d30c" (UID: "3045f340-8dd6-4a70-8407-ca021577d30c"). InnerVolumeSpecName "kube-api-access-czkhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:32:41 crc kubenswrapper[5136]: I0320 08:32:41.562907 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czkhl\" (UniqueName: \"kubernetes.io/projected/3045f340-8dd6-4a70-8407-ca021577d30c-kube-api-access-czkhl\") on node \"crc\" DevicePath \"\"" Mar 20 08:32:42 crc kubenswrapper[5136]: I0320 08:32:42.213211 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q" event={"ID":"3045f340-8dd6-4a70-8407-ca021577d30c","Type":"ContainerDied","Data":"4aa6b047cacc9f7c7b7b22f5ea520282914082a02383d0fe88b6a663fa015092"} Mar 20 08:32:42 crc kubenswrapper[5136]: I0320 08:32:42.213624 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q" Mar 20 08:32:42 crc kubenswrapper[5136]: I0320 08:32:42.293236 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q"] Mar 20 08:32:42 crc kubenswrapper[5136]: I0320 08:32:42.304208 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q"] Mar 20 08:32:42 crc kubenswrapper[5136]: I0320 08:32:42.404871 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3045f340-8dd6-4a70-8407-ca021577d30c" path="/var/lib/kubelet/pods/3045f340-8dd6-4a70-8407-ca021577d30c/volumes" Mar 20 08:32:47 crc kubenswrapper[5136]: I0320 08:32:47.397117 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:32:47 crc kubenswrapper[5136]: E0320 08:32:47.397901 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:32:47 crc kubenswrapper[5136]: I0320 08:32:47.966753 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 20 08:32:47 crc kubenswrapper[5136]: I0320 08:32:47.967178 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 20 08:32:48 crc kubenswrapper[5136]: I0320 08:32:48.203632 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 20 08:32:48 crc kubenswrapper[5136]: I0320 08:32:48.337453 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/openstack-galera-0" Mar 20 08:32:49 crc kubenswrapper[5136]: I0320 08:32:49.347660 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:49 crc kubenswrapper[5136]: I0320 08:32:49.347709 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:49 crc kubenswrapper[5136]: I0320 08:32:49.444740 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:50 crc kubenswrapper[5136]: I0320 08:32:50.371648 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.326325 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-94vhs"] Mar 20 08:32:56 crc kubenswrapper[5136]: E0320 08:32:56.327042 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a91420-177b-479a-aeb6-0fdc31a375e7" containerName="init" Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.327063 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a91420-177b-479a-aeb6-0fdc31a375e7" containerName="init" Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.327347 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4a91420-177b-479a-aeb6-0fdc31a375e7" containerName="init" Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.328081 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-94vhs" Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.333149 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.338463 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-94vhs"] Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.409696 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e70bfda2-008f-4f6f-87a1-a349df41af80-operator-scripts\") pod \"root-account-create-update-94vhs\" (UID: \"e70bfda2-008f-4f6f-87a1-a349df41af80\") " pod="openstack/root-account-create-update-94vhs" Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.409990 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7b2q\" (UniqueName: \"kubernetes.io/projected/e70bfda2-008f-4f6f-87a1-a349df41af80-kube-api-access-s7b2q\") pod \"root-account-create-update-94vhs\" (UID: \"e70bfda2-008f-4f6f-87a1-a349df41af80\") " pod="openstack/root-account-create-update-94vhs" Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.512140 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7b2q\" (UniqueName: \"kubernetes.io/projected/e70bfda2-008f-4f6f-87a1-a349df41af80-kube-api-access-s7b2q\") pod \"root-account-create-update-94vhs\" (UID: \"e70bfda2-008f-4f6f-87a1-a349df41af80\") " pod="openstack/root-account-create-update-94vhs" Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.512249 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e70bfda2-008f-4f6f-87a1-a349df41af80-operator-scripts\") pod \"root-account-create-update-94vhs\" (UID: 
\"e70bfda2-008f-4f6f-87a1-a349df41af80\") " pod="openstack/root-account-create-update-94vhs" Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.513426 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e70bfda2-008f-4f6f-87a1-a349df41af80-operator-scripts\") pod \"root-account-create-update-94vhs\" (UID: \"e70bfda2-008f-4f6f-87a1-a349df41af80\") " pod="openstack/root-account-create-update-94vhs" Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.541867 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7b2q\" (UniqueName: \"kubernetes.io/projected/e70bfda2-008f-4f6f-87a1-a349df41af80-kube-api-access-s7b2q\") pod \"root-account-create-update-94vhs\" (UID: \"e70bfda2-008f-4f6f-87a1-a349df41af80\") " pod="openstack/root-account-create-update-94vhs" Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.662043 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-94vhs" Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.939574 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-94vhs"] Mar 20 08:32:56 crc kubenswrapper[5136]: W0320 08:32:56.947005 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode70bfda2_008f_4f6f_87a1_a349df41af80.slice/crio-8f245849a454b4c3de125ed30349af198edff46174fbc27459e8243ef715bf23 WatchSource:0}: Error finding container 8f245849a454b4c3de125ed30349af198edff46174fbc27459e8243ef715bf23: Status 404 returned error can't find the container with id 8f245849a454b4c3de125ed30349af198edff46174fbc27459e8243ef715bf23 Mar 20 08:32:57 crc kubenswrapper[5136]: I0320 08:32:57.346562 5136 generic.go:334] "Generic (PLEG): container finished" podID="e70bfda2-008f-4f6f-87a1-a349df41af80" containerID="513bfb357219a2477e16b62515cf153229315c47b91f7217842266ad20b33891" exitCode=0 Mar 20 08:32:57 crc kubenswrapper[5136]: I0320 08:32:57.346856 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-94vhs" event={"ID":"e70bfda2-008f-4f6f-87a1-a349df41af80","Type":"ContainerDied","Data":"513bfb357219a2477e16b62515cf153229315c47b91f7217842266ad20b33891"} Mar 20 08:32:57 crc kubenswrapper[5136]: I0320 08:32:57.346889 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-94vhs" event={"ID":"e70bfda2-008f-4f6f-87a1-a349df41af80","Type":"ContainerStarted","Data":"8f245849a454b4c3de125ed30349af198edff46174fbc27459e8243ef715bf23"} Mar 20 08:32:58 crc kubenswrapper[5136]: I0320 08:32:58.658242 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-94vhs" Mar 20 08:32:58 crc kubenswrapper[5136]: I0320 08:32:58.745281 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7b2q\" (UniqueName: \"kubernetes.io/projected/e70bfda2-008f-4f6f-87a1-a349df41af80-kube-api-access-s7b2q\") pod \"e70bfda2-008f-4f6f-87a1-a349df41af80\" (UID: \"e70bfda2-008f-4f6f-87a1-a349df41af80\") " Mar 20 08:32:58 crc kubenswrapper[5136]: I0320 08:32:58.745377 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e70bfda2-008f-4f6f-87a1-a349df41af80-operator-scripts\") pod \"e70bfda2-008f-4f6f-87a1-a349df41af80\" (UID: \"e70bfda2-008f-4f6f-87a1-a349df41af80\") " Mar 20 08:32:58 crc kubenswrapper[5136]: I0320 08:32:58.746162 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e70bfda2-008f-4f6f-87a1-a349df41af80-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e70bfda2-008f-4f6f-87a1-a349df41af80" (UID: "e70bfda2-008f-4f6f-87a1-a349df41af80"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:32:58 crc kubenswrapper[5136]: I0320 08:32:58.750993 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e70bfda2-008f-4f6f-87a1-a349df41af80-kube-api-access-s7b2q" (OuterVolumeSpecName: "kube-api-access-s7b2q") pod "e70bfda2-008f-4f6f-87a1-a349df41af80" (UID: "e70bfda2-008f-4f6f-87a1-a349df41af80"). InnerVolumeSpecName "kube-api-access-s7b2q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:32:58 crc kubenswrapper[5136]: I0320 08:32:58.847606 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7b2q\" (UniqueName: \"kubernetes.io/projected/e70bfda2-008f-4f6f-87a1-a349df41af80-kube-api-access-s7b2q\") on node \"crc\" DevicePath \"\"" Mar 20 08:32:58 crc kubenswrapper[5136]: I0320 08:32:58.847656 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e70bfda2-008f-4f6f-87a1-a349df41af80-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:32:59 crc kubenswrapper[5136]: I0320 08:32:59.363683 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-94vhs" event={"ID":"e70bfda2-008f-4f6f-87a1-a349df41af80","Type":"ContainerDied","Data":"8f245849a454b4c3de125ed30349af198edff46174fbc27459e8243ef715bf23"} Mar 20 08:32:59 crc kubenswrapper[5136]: I0320 08:32:59.364137 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f245849a454b4c3de125ed30349af198edff46174fbc27459e8243ef715bf23" Mar 20 08:32:59 crc kubenswrapper[5136]: I0320 08:32:59.363905 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-94vhs" Mar 20 08:33:01 crc kubenswrapper[5136]: I0320 08:33:01.398317 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:33:01 crc kubenswrapper[5136]: E0320 08:33:01.398738 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:33:02 crc kubenswrapper[5136]: I0320 08:33:02.902082 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-94vhs"] Mar 20 08:33:02 crc kubenswrapper[5136]: I0320 08:33:02.908123 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-94vhs"] Mar 20 08:33:04 crc kubenswrapper[5136]: I0320 08:33:04.416983 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e70bfda2-008f-4f6f-87a1-a349df41af80" path="/var/lib/kubelet/pods/e70bfda2-008f-4f6f-87a1-a349df41af80/volumes" Mar 20 08:33:07 crc kubenswrapper[5136]: I0320 08:33:07.437883 5136 generic.go:334] "Generic (PLEG): container finished" podID="49b925de-d698-4589-9f71-cf485dd617d2" containerID="ce615ae1b073a852fb2f8618428d9305c2f1f4a5211b5abec9f3978f7dd4129e" exitCode=0 Mar 20 08:33:07 crc kubenswrapper[5136]: I0320 08:33:07.438033 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49b925de-d698-4589-9f71-cf485dd617d2","Type":"ContainerDied","Data":"ce615ae1b073a852fb2f8618428d9305c2f1f4a5211b5abec9f3978f7dd4129e"} Mar 20 08:33:07 crc kubenswrapper[5136]: E0320 08:33:07.634285 5136 cadvisor_stats_provider.go:516] "Partial 
failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod144d1953_0072_4346_9aa6_83afc44fdb3b.slice/crio-conmon-35c4b0791e9a204a171a31520811fac746150d8bfc15bd254e254723260ddd2f.scope\": RecentStats: unable to find data in memory cache]" Mar 20 08:33:07 crc kubenswrapper[5136]: I0320 08:33:07.909534 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-sbbcl"] Mar 20 08:33:07 crc kubenswrapper[5136]: E0320 08:33:07.909910 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e70bfda2-008f-4f6f-87a1-a349df41af80" containerName="mariadb-account-create-update" Mar 20 08:33:07 crc kubenswrapper[5136]: I0320 08:33:07.909931 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e70bfda2-008f-4f6f-87a1-a349df41af80" containerName="mariadb-account-create-update" Mar 20 08:33:07 crc kubenswrapper[5136]: I0320 08:33:07.910115 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e70bfda2-008f-4f6f-87a1-a349df41af80" containerName="mariadb-account-create-update" Mar 20 08:33:07 crc kubenswrapper[5136]: I0320 08:33:07.910669 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sbbcl" Mar 20 08:33:07 crc kubenswrapper[5136]: I0320 08:33:07.914058 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 20 08:33:07 crc kubenswrapper[5136]: I0320 08:33:07.919266 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sbbcl"] Mar 20 08:33:08 crc kubenswrapper[5136]: I0320 08:33:08.018743 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a25720ab-064e-40ce-ae93-03dd9c33cf66-operator-scripts\") pod \"root-account-create-update-sbbcl\" (UID: \"a25720ab-064e-40ce-ae93-03dd9c33cf66\") " pod="openstack/root-account-create-update-sbbcl" Mar 20 08:33:08 crc kubenswrapper[5136]: I0320 08:33:08.019201 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmnkn\" (UniqueName: \"kubernetes.io/projected/a25720ab-064e-40ce-ae93-03dd9c33cf66-kube-api-access-zmnkn\") pod \"root-account-create-update-sbbcl\" (UID: \"a25720ab-064e-40ce-ae93-03dd9c33cf66\") " pod="openstack/root-account-create-update-sbbcl" Mar 20 08:33:08 crc kubenswrapper[5136]: I0320 08:33:08.120661 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a25720ab-064e-40ce-ae93-03dd9c33cf66-operator-scripts\") pod \"root-account-create-update-sbbcl\" (UID: \"a25720ab-064e-40ce-ae93-03dd9c33cf66\") " pod="openstack/root-account-create-update-sbbcl" Mar 20 08:33:08 crc kubenswrapper[5136]: I0320 08:33:08.120733 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmnkn\" (UniqueName: \"kubernetes.io/projected/a25720ab-064e-40ce-ae93-03dd9c33cf66-kube-api-access-zmnkn\") pod \"root-account-create-update-sbbcl\" (UID: 
\"a25720ab-064e-40ce-ae93-03dd9c33cf66\") " pod="openstack/root-account-create-update-sbbcl" Mar 20 08:33:08 crc kubenswrapper[5136]: I0320 08:33:08.121924 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a25720ab-064e-40ce-ae93-03dd9c33cf66-operator-scripts\") pod \"root-account-create-update-sbbcl\" (UID: \"a25720ab-064e-40ce-ae93-03dd9c33cf66\") " pod="openstack/root-account-create-update-sbbcl" Mar 20 08:33:08 crc kubenswrapper[5136]: I0320 08:33:08.142794 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmnkn\" (UniqueName: \"kubernetes.io/projected/a25720ab-064e-40ce-ae93-03dd9c33cf66-kube-api-access-zmnkn\") pod \"root-account-create-update-sbbcl\" (UID: \"a25720ab-064e-40ce-ae93-03dd9c33cf66\") " pod="openstack/root-account-create-update-sbbcl" Mar 20 08:33:08 crc kubenswrapper[5136]: I0320 08:33:08.225642 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sbbcl" Mar 20 08:33:08 crc kubenswrapper[5136]: I0320 08:33:08.446193 5136 generic.go:334] "Generic (PLEG): container finished" podID="144d1953-0072-4346-9aa6-83afc44fdb3b" containerID="35c4b0791e9a204a171a31520811fac746150d8bfc15bd254e254723260ddd2f" exitCode=0 Mar 20 08:33:08 crc kubenswrapper[5136]: I0320 08:33:08.446292 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"144d1953-0072-4346-9aa6-83afc44fdb3b","Type":"ContainerDied","Data":"35c4b0791e9a204a171a31520811fac746150d8bfc15bd254e254723260ddd2f"} Mar 20 08:33:08 crc kubenswrapper[5136]: I0320 08:33:08.451201 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49b925de-d698-4589-9f71-cf485dd617d2","Type":"ContainerStarted","Data":"112b570f6ea3b0c9bd8b688662b38240a74ac9d49475a251881696032ace1682"} Mar 20 08:33:08 crc kubenswrapper[5136]: I0320 08:33:08.452325 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 20 08:33:08 crc kubenswrapper[5136]: I0320 08:33:08.507632 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.705989842 podStartE2EDuration="1m3.507597248s" podCreationTimestamp="2026-03-20 08:32:05 +0000 UTC" firstStartedPulling="2026-03-20 08:32:07.456512825 +0000 UTC m=+6159.715823966" lastFinishedPulling="2026-03-20 08:32:34.258120221 +0000 UTC m=+6186.517431372" observedRunningTime="2026-03-20 08:33:08.506865875 +0000 UTC m=+6220.766177046" watchObservedRunningTime="2026-03-20 08:33:08.507597248 +0000 UTC m=+6220.766908409" Mar 20 08:33:08 crc kubenswrapper[5136]: I0320 08:33:08.633965 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sbbcl"] Mar 20 08:33:08 crc kubenswrapper[5136]: W0320 08:33:08.636227 5136 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda25720ab_064e_40ce_ae93_03dd9c33cf66.slice/crio-ed0468d68dafe7c8658aef8d42fdf308c526844b023af76ef639c07d3a3f2ce4 WatchSource:0}: Error finding container ed0468d68dafe7c8658aef8d42fdf308c526844b023af76ef639c07d3a3f2ce4: Status 404 returned error can't find the container with id ed0468d68dafe7c8658aef8d42fdf308c526844b023af76ef639c07d3a3f2ce4 Mar 20 08:33:09 crc kubenswrapper[5136]: I0320 08:33:09.463128 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"144d1953-0072-4346-9aa6-83afc44fdb3b","Type":"ContainerStarted","Data":"63021d009c2e3f21f5efaffbf79f616d34212441ac2e48211dcf9feccecac0ef"} Mar 20 08:33:09 crc kubenswrapper[5136]: I0320 08:33:09.463764 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:09 crc kubenswrapper[5136]: I0320 08:33:09.466748 5136 generic.go:334] "Generic (PLEG): container finished" podID="a25720ab-064e-40ce-ae93-03dd9c33cf66" containerID="77614674db5f14222adee033ce4bf5c60259ff8d124c2f5a8301de91d769caa0" exitCode=0 Mar 20 08:33:09 crc kubenswrapper[5136]: I0320 08:33:09.467183 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sbbcl" event={"ID":"a25720ab-064e-40ce-ae93-03dd9c33cf66","Type":"ContainerDied","Data":"77614674db5f14222adee033ce4bf5c60259ff8d124c2f5a8301de91d769caa0"} Mar 20 08:33:09 crc kubenswrapper[5136]: I0320 08:33:09.467228 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sbbcl" event={"ID":"a25720ab-064e-40ce-ae93-03dd9c33cf66","Type":"ContainerStarted","Data":"ed0468d68dafe7c8658aef8d42fdf308c526844b023af76ef639c07d3a3f2ce4"} Mar 20 08:33:09 crc kubenswrapper[5136]: I0320 08:33:09.501529 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" 
podStartSLOduration=37.943516869 podStartE2EDuration="1m4.501509107s" podCreationTimestamp="2026-03-20 08:32:05 +0000 UTC" firstStartedPulling="2026-03-20 08:32:07.717042225 +0000 UTC m=+6159.976353376" lastFinishedPulling="2026-03-20 08:32:34.275034463 +0000 UTC m=+6186.534345614" observedRunningTime="2026-03-20 08:33:09.493330523 +0000 UTC m=+6221.752641714" watchObservedRunningTime="2026-03-20 08:33:09.501509107 +0000 UTC m=+6221.760820268" Mar 20 08:33:10 crc kubenswrapper[5136]: I0320 08:33:10.736096 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sbbcl" Mar 20 08:33:10 crc kubenswrapper[5136]: I0320 08:33:10.871945 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a25720ab-064e-40ce-ae93-03dd9c33cf66-operator-scripts\") pod \"a25720ab-064e-40ce-ae93-03dd9c33cf66\" (UID: \"a25720ab-064e-40ce-ae93-03dd9c33cf66\") " Mar 20 08:33:10 crc kubenswrapper[5136]: I0320 08:33:10.872024 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmnkn\" (UniqueName: \"kubernetes.io/projected/a25720ab-064e-40ce-ae93-03dd9c33cf66-kube-api-access-zmnkn\") pod \"a25720ab-064e-40ce-ae93-03dd9c33cf66\" (UID: \"a25720ab-064e-40ce-ae93-03dd9c33cf66\") " Mar 20 08:33:10 crc kubenswrapper[5136]: I0320 08:33:10.872596 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a25720ab-064e-40ce-ae93-03dd9c33cf66-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a25720ab-064e-40ce-ae93-03dd9c33cf66" (UID: "a25720ab-064e-40ce-ae93-03dd9c33cf66"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:33:10 crc kubenswrapper[5136]: I0320 08:33:10.879668 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25720ab-064e-40ce-ae93-03dd9c33cf66-kube-api-access-zmnkn" (OuterVolumeSpecName: "kube-api-access-zmnkn") pod "a25720ab-064e-40ce-ae93-03dd9c33cf66" (UID: "a25720ab-064e-40ce-ae93-03dd9c33cf66"). InnerVolumeSpecName "kube-api-access-zmnkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:33:10 crc kubenswrapper[5136]: I0320 08:33:10.974135 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a25720ab-064e-40ce-ae93-03dd9c33cf66-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:10 crc kubenswrapper[5136]: I0320 08:33:10.974181 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmnkn\" (UniqueName: \"kubernetes.io/projected/a25720ab-064e-40ce-ae93-03dd9c33cf66-kube-api-access-zmnkn\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:11 crc kubenswrapper[5136]: I0320 08:33:11.483463 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sbbcl" event={"ID":"a25720ab-064e-40ce-ae93-03dd9c33cf66","Type":"ContainerDied","Data":"ed0468d68dafe7c8658aef8d42fdf308c526844b023af76ef639c07d3a3f2ce4"} Mar 20 08:33:11 crc kubenswrapper[5136]: I0320 08:33:11.483508 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sbbcl" Mar 20 08:33:11 crc kubenswrapper[5136]: I0320 08:33:11.483508 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed0468d68dafe7c8658aef8d42fdf308c526844b023af76ef639c07d3a3f2ce4" Mar 20 08:33:13 crc kubenswrapper[5136]: I0320 08:33:13.396539 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:33:13 crc kubenswrapper[5136]: E0320 08:33:13.396797 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.345360 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-87czr"] Mar 20 08:33:26 crc kubenswrapper[5136]: E0320 08:33:26.347448 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25720ab-064e-40ce-ae93-03dd9c33cf66" containerName="mariadb-account-create-update" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.347562 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25720ab-064e-40ce-ae93-03dd9c33cf66" containerName="mariadb-account-create-update" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.347837 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25720ab-064e-40ce-ae93-03dd9c33cf66" containerName="mariadb-account-create-update" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.349419 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.353296 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-87czr"] Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.406251 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-catalog-content\") pod \"redhat-operators-87czr\" (UID: \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\") " pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.406339 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-utilities\") pod \"redhat-operators-87czr\" (UID: \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\") " pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.406383 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dddvp\" (UniqueName: \"kubernetes.io/projected/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-kube-api-access-dddvp\") pod \"redhat-operators-87czr\" (UID: \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\") " pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.507788 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-utilities\") pod \"redhat-operators-87czr\" (UID: \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\") " pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.507861 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-dddvp\" (UniqueName: \"kubernetes.io/projected/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-kube-api-access-dddvp\") pod \"redhat-operators-87czr\" (UID: \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\") " pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.507940 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-catalog-content\") pod \"redhat-operators-87czr\" (UID: \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\") " pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.508419 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-catalog-content\") pod \"redhat-operators-87czr\" (UID: \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\") " pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.508575 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-utilities\") pod \"redhat-operators-87czr\" (UID: \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\") " pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.530991 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dddvp\" (UniqueName: \"kubernetes.io/projected/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-kube-api-access-dddvp\") pod \"redhat-operators-87czr\" (UID: \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\") " pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.537117 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l9dsr"] Mar 20 08:33:26 crc 
kubenswrapper[5136]: I0320 08:33:26.538670 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.571650 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l9dsr"] Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.611204 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-catalog-content\") pod \"certified-operators-l9dsr\" (UID: \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\") " pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.611298 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-utilities\") pod \"certified-operators-l9dsr\" (UID: \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\") " pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.611379 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9ltt\" (UniqueName: \"kubernetes.io/projected/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-kube-api-access-x9ltt\") pod \"certified-operators-l9dsr\" (UID: \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\") " pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.670455 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.712400 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-utilities\") pod \"certified-operators-l9dsr\" (UID: \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\") " pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.712671 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9ltt\" (UniqueName: \"kubernetes.io/projected/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-kube-api-access-x9ltt\") pod \"certified-operators-l9dsr\" (UID: \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\") " pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.712736 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-catalog-content\") pod \"certified-operators-l9dsr\" (UID: \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\") " pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.713146 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-catalog-content\") pod \"certified-operators-l9dsr\" (UID: \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\") " pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.713351 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-utilities\") pod \"certified-operators-l9dsr\" (UID: \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\") " 
pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.732700 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9ltt\" (UniqueName: \"kubernetes.io/projected/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-kube-api-access-x9ltt\") pod \"certified-operators-l9dsr\" (UID: \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\") " pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.893505 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.979225 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 20 08:33:27 crc kubenswrapper[5136]: I0320 08:33:27.127067 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-87czr"] Mar 20 08:33:27 crc kubenswrapper[5136]: I0320 08:33:27.214181 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:27 crc kubenswrapper[5136]: I0320 08:33:27.396540 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:33:27 crc kubenswrapper[5136]: E0320 08:33:27.396752 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:33:27 crc kubenswrapper[5136]: I0320 08:33:27.439645 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-l9dsr"] Mar 20 08:33:27 crc kubenswrapper[5136]: W0320 08:33:27.470399 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d67c5ab_0096_4c4a_aaf1_f7b2e0ea2281.slice/crio-74206438e8b8abc064083d451300813bdaef59c88deed5a10e498dc1f1c9554c WatchSource:0}: Error finding container 74206438e8b8abc064083d451300813bdaef59c88deed5a10e498dc1f1c9554c: Status 404 returned error can't find the container with id 74206438e8b8abc064083d451300813bdaef59c88deed5a10e498dc1f1c9554c Mar 20 08:33:27 crc kubenswrapper[5136]: I0320 08:33:27.598804 5136 generic.go:334] "Generic (PLEG): container finished" podID="c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" containerID="d8fb00431917424824544a4397bdff29554efbd29f162120fbbe466b8949016a" exitCode=0 Mar 20 08:33:27 crc kubenswrapper[5136]: I0320 08:33:27.598902 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87czr" event={"ID":"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1","Type":"ContainerDied","Data":"d8fb00431917424824544a4397bdff29554efbd29f162120fbbe466b8949016a"} Mar 20 08:33:27 crc kubenswrapper[5136]: I0320 08:33:27.598947 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87czr" event={"ID":"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1","Type":"ContainerStarted","Data":"56b0c4c299b46691e68d61c887ac0f6c17c1ba615dd505f5078a7f25fba0c4ba"} Mar 20 08:33:27 crc kubenswrapper[5136]: I0320 08:33:27.600126 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9dsr" event={"ID":"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281","Type":"ContainerStarted","Data":"74206438e8b8abc064083d451300813bdaef59c88deed5a10e498dc1f1c9554c"} Mar 20 08:33:28 crc kubenswrapper[5136]: I0320 08:33:28.608388 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87czr" 
event={"ID":"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1","Type":"ContainerStarted","Data":"31bac2054e2ae1beb6d4ff79d77175c41e16c094e8908c107001af184ff07175"} Mar 20 08:33:28 crc kubenswrapper[5136]: I0320 08:33:28.609461 5136 generic.go:334] "Generic (PLEG): container finished" podID="5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" containerID="2ec94eecde00968de015f346bb8ec607450091902b850fcfe7165fa270145bd4" exitCode=0 Mar 20 08:33:28 crc kubenswrapper[5136]: I0320 08:33:28.609496 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9dsr" event={"ID":"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281","Type":"ContainerDied","Data":"2ec94eecde00968de015f346bb8ec607450091902b850fcfe7165fa270145bd4"} Mar 20 08:33:29 crc kubenswrapper[5136]: I0320 08:33:29.617428 5136 generic.go:334] "Generic (PLEG): container finished" podID="c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" containerID="31bac2054e2ae1beb6d4ff79d77175c41e16c094e8908c107001af184ff07175" exitCode=0 Mar 20 08:33:29 crc kubenswrapper[5136]: I0320 08:33:29.617507 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87czr" event={"ID":"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1","Type":"ContainerDied","Data":"31bac2054e2ae1beb6d4ff79d77175c41e16c094e8908c107001af184ff07175"} Mar 20 08:33:29 crc kubenswrapper[5136]: I0320 08:33:29.622023 5136 generic.go:334] "Generic (PLEG): container finished" podID="5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" containerID="2efc9186cba61829377c5c9c206d8e8991b7bb27d75aebe001c623815beef975" exitCode=0 Mar 20 08:33:29 crc kubenswrapper[5136]: I0320 08:33:29.622066 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9dsr" event={"ID":"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281","Type":"ContainerDied","Data":"2efc9186cba61829377c5c9c206d8e8991b7bb27d75aebe001c623815beef975"} Mar 20 08:33:30 crc kubenswrapper[5136]: I0320 08:33:30.631438 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-87czr" event={"ID":"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1","Type":"ContainerStarted","Data":"c7808028bff61733663b4a106c6026f91b687eff928a7de2f1aa7454f92490c5"} Mar 20 08:33:30 crc kubenswrapper[5136]: I0320 08:33:30.634212 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9dsr" event={"ID":"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281","Type":"ContainerStarted","Data":"f3e6832d2cbe196dd11784002f038348557fa97e9a13d320c194d2a3db9f6999"} Mar 20 08:33:30 crc kubenswrapper[5136]: I0320 08:33:30.658983 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-87czr" podStartSLOduration=2.201307266 podStartE2EDuration="4.658965871s" podCreationTimestamp="2026-03-20 08:33:26 +0000 UTC" firstStartedPulling="2026-03-20 08:33:27.600310318 +0000 UTC m=+6239.859621469" lastFinishedPulling="2026-03-20 08:33:30.057968923 +0000 UTC m=+6242.317280074" observedRunningTime="2026-03-20 08:33:30.653934135 +0000 UTC m=+6242.913245286" watchObservedRunningTime="2026-03-20 08:33:30.658965871 +0000 UTC m=+6242.918277022" Mar 20 08:33:30 crc kubenswrapper[5136]: I0320 08:33:30.678037 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l9dsr" podStartSLOduration=3.285593538 podStartE2EDuration="4.678014682s" podCreationTimestamp="2026-03-20 08:33:26 +0000 UTC" firstStartedPulling="2026-03-20 08:33:28.611316057 +0000 UTC m=+6240.870627208" lastFinishedPulling="2026-03-20 08:33:30.003737201 +0000 UTC m=+6242.263048352" observedRunningTime="2026-03-20 08:33:30.671503279 +0000 UTC m=+6242.930814430" watchObservedRunningTime="2026-03-20 08:33:30.678014682 +0000 UTC m=+6242.937325843" Mar 20 08:33:32 crc kubenswrapper[5136]: I0320 08:33:32.860845 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b5c84b9cc-cxrps"] Mar 20 08:33:32 crc kubenswrapper[5136]: 
I0320 08:33:32.861993 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:33:32 crc kubenswrapper[5136]: I0320 08:33:32.881906 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b5c84b9cc-cxrps"] Mar 20 08:33:32 crc kubenswrapper[5136]: I0320 08:33:32.897526 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e61df6ca-2419-400a-8790-9695f75c6d92-dns-svc\") pod \"dnsmasq-dns-5b5c84b9cc-cxrps\" (UID: \"e61df6ca-2419-400a-8790-9695f75c6d92\") " pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:33:32 crc kubenswrapper[5136]: I0320 08:33:32.897565 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e61df6ca-2419-400a-8790-9695f75c6d92-config\") pod \"dnsmasq-dns-5b5c84b9cc-cxrps\" (UID: \"e61df6ca-2419-400a-8790-9695f75c6d92\") " pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:33:32 crc kubenswrapper[5136]: I0320 08:33:32.897668 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxv65\" (UniqueName: \"kubernetes.io/projected/e61df6ca-2419-400a-8790-9695f75c6d92-kube-api-access-kxv65\") pod \"dnsmasq-dns-5b5c84b9cc-cxrps\" (UID: \"e61df6ca-2419-400a-8790-9695f75c6d92\") " pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:33:32 crc kubenswrapper[5136]: I0320 08:33:32.998685 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e61df6ca-2419-400a-8790-9695f75c6d92-dns-svc\") pod \"dnsmasq-dns-5b5c84b9cc-cxrps\" (UID: \"e61df6ca-2419-400a-8790-9695f75c6d92\") " pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:33:32 crc kubenswrapper[5136]: I0320 08:33:32.998745 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e61df6ca-2419-400a-8790-9695f75c6d92-config\") pod \"dnsmasq-dns-5b5c84b9cc-cxrps\" (UID: \"e61df6ca-2419-400a-8790-9695f75c6d92\") " pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:33:32 crc kubenswrapper[5136]: I0320 08:33:32.998922 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxv65\" (UniqueName: \"kubernetes.io/projected/e61df6ca-2419-400a-8790-9695f75c6d92-kube-api-access-kxv65\") pod \"dnsmasq-dns-5b5c84b9cc-cxrps\" (UID: \"e61df6ca-2419-400a-8790-9695f75c6d92\") " pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:33:33 crc kubenswrapper[5136]: I0320 08:33:33.000031 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e61df6ca-2419-400a-8790-9695f75c6d92-dns-svc\") pod \"dnsmasq-dns-5b5c84b9cc-cxrps\" (UID: \"e61df6ca-2419-400a-8790-9695f75c6d92\") " pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:33:33 crc kubenswrapper[5136]: I0320 08:33:33.000190 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e61df6ca-2419-400a-8790-9695f75c6d92-config\") pod \"dnsmasq-dns-5b5c84b9cc-cxrps\" (UID: \"e61df6ca-2419-400a-8790-9695f75c6d92\") " pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:33:33 crc kubenswrapper[5136]: I0320 08:33:33.022212 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxv65\" (UniqueName: \"kubernetes.io/projected/e61df6ca-2419-400a-8790-9695f75c6d92-kube-api-access-kxv65\") pod \"dnsmasq-dns-5b5c84b9cc-cxrps\" (UID: \"e61df6ca-2419-400a-8790-9695f75c6d92\") " pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:33:33 crc kubenswrapper[5136]: I0320 08:33:33.181500 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:33:33 crc kubenswrapper[5136]: I0320 08:33:33.588158 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 08:33:33 crc kubenswrapper[5136]: I0320 08:33:33.680978 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b5c84b9cc-cxrps"] Mar 20 08:33:34 crc kubenswrapper[5136]: I0320 08:33:34.469152 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 08:33:34 crc kubenswrapper[5136]: I0320 08:33:34.667221 5136 generic.go:334] "Generic (PLEG): container finished" podID="e61df6ca-2419-400a-8790-9695f75c6d92" containerID="02dd795cb150362efe906bc099f470a71a335d9458efc922a23eb6c04569901e" exitCode=0 Mar 20 08:33:34 crc kubenswrapper[5136]: I0320 08:33:34.667271 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" event={"ID":"e61df6ca-2419-400a-8790-9695f75c6d92","Type":"ContainerDied","Data":"02dd795cb150362efe906bc099f470a71a335d9458efc922a23eb6c04569901e"} Mar 20 08:33:34 crc kubenswrapper[5136]: I0320 08:33:34.667301 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" event={"ID":"e61df6ca-2419-400a-8790-9695f75c6d92","Type":"ContainerStarted","Data":"5f995642f784fe24cc982d1a64669bee0b35f9981d3b63db2aa6b8236cd2ea18"} Mar 20 08:33:35 crc kubenswrapper[5136]: I0320 08:33:35.676055 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" event={"ID":"e61df6ca-2419-400a-8790-9695f75c6d92","Type":"ContainerStarted","Data":"8c3b72a05088d18b83e8fd4c523a6250996da641765be00333d607a1dfe71673"} Mar 20 08:33:35 crc kubenswrapper[5136]: I0320 08:33:35.676481 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:33:35 crc kubenswrapper[5136]: I0320 08:33:35.700968 5136 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" podStartSLOduration=3.700950191 podStartE2EDuration="3.700950191s" podCreationTimestamp="2026-03-20 08:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:33:35.697201024 +0000 UTC m=+6247.956512175" watchObservedRunningTime="2026-03-20 08:33:35.700950191 +0000 UTC m=+6247.960261342" Mar 20 08:33:36 crc kubenswrapper[5136]: I0320 08:33:36.671089 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:36 crc kubenswrapper[5136]: I0320 08:33:36.671162 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:36 crc kubenswrapper[5136]: I0320 08:33:36.894236 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:36 crc kubenswrapper[5136]: I0320 08:33:36.894595 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:36 crc kubenswrapper[5136]: I0320 08:33:36.943005 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:37 crc kubenswrapper[5136]: I0320 08:33:37.729472 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:37 crc kubenswrapper[5136]: I0320 08:33:37.743547 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-87czr" podUID="c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" containerName="registry-server" probeResult="failure" output=< Mar 20 08:33:37 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" 
within 1s Mar 20 08:33:37 crc kubenswrapper[5136]: > Mar 20 08:33:37 crc kubenswrapper[5136]: I0320 08:33:37.772679 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l9dsr"] Mar 20 08:33:37 crc kubenswrapper[5136]: I0320 08:33:37.815212 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="49b925de-d698-4589-9f71-cf485dd617d2" containerName="rabbitmq" containerID="cri-o://112b570f6ea3b0c9bd8b688662b38240a74ac9d49475a251881696032ace1682" gracePeriod=604796 Mar 20 08:33:38 crc kubenswrapper[5136]: I0320 08:33:38.801851 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="144d1953-0072-4346-9aa6-83afc44fdb3b" containerName="rabbitmq" containerID="cri-o://63021d009c2e3f21f5efaffbf79f616d34212441ac2e48211dcf9feccecac0ef" gracePeriod=604796 Mar 20 08:33:39 crc kubenswrapper[5136]: I0320 08:33:39.396843 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:33:39 crc kubenswrapper[5136]: E0320 08:33:39.397229 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:33:39 crc kubenswrapper[5136]: I0320 08:33:39.704120 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l9dsr" podUID="5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" containerName="registry-server" containerID="cri-o://f3e6832d2cbe196dd11784002f038348557fa97e9a13d320c194d2a3db9f6999" gracePeriod=2 Mar 20 08:33:40 crc 
kubenswrapper[5136]: I0320 08:33:40.713166 5136 generic.go:334] "Generic (PLEG): container finished" podID="5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" containerID="f3e6832d2cbe196dd11784002f038348557fa97e9a13d320c194d2a3db9f6999" exitCode=0 Mar 20 08:33:40 crc kubenswrapper[5136]: I0320 08:33:40.713257 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9dsr" event={"ID":"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281","Type":"ContainerDied","Data":"f3e6832d2cbe196dd11784002f038348557fa97e9a13d320c194d2a3db9f6999"} Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.248857 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.353803 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9ltt\" (UniqueName: \"kubernetes.io/projected/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-kube-api-access-x9ltt\") pod \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\" (UID: \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\") " Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.353954 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-utilities\") pod \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\" (UID: \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\") " Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.353983 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-catalog-content\") pod \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\" (UID: \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\") " Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.355397 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-utilities" (OuterVolumeSpecName: "utilities") pod "5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" (UID: "5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.360308 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-kube-api-access-x9ltt" (OuterVolumeSpecName: "kube-api-access-x9ltt") pod "5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" (UID: "5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281"). InnerVolumeSpecName "kube-api-access-x9ltt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.409104 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" (UID: "5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.457355 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.457425 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.457453 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9ltt\" (UniqueName: \"kubernetes.io/projected/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-kube-api-access-x9ltt\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.721386 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9dsr" event={"ID":"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281","Type":"ContainerDied","Data":"74206438e8b8abc064083d451300813bdaef59c88deed5a10e498dc1f1c9554c"} Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.721433 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.721439 5136 scope.go:117] "RemoveContainer" containerID="f3e6832d2cbe196dd11784002f038348557fa97e9a13d320c194d2a3db9f6999" Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.738726 5136 scope.go:117] "RemoveContainer" containerID="2efc9186cba61829377c5c9c206d8e8991b7bb27d75aebe001c623815beef975" Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.752124 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l9dsr"] Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.767612 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l9dsr"] Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.776728 5136 scope.go:117] "RemoveContainer" containerID="2ec94eecde00968de015f346bb8ec607450091902b850fcfe7165fa270145bd4" Mar 20 08:33:42 crc kubenswrapper[5136]: I0320 08:33:42.406621 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" path="/var/lib/kubelet/pods/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281/volumes" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.183033 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.266055 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685785d49f-r6vtp"] Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.266514 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-685785d49f-r6vtp" podUID="da7b3de9-906c-4470-9b45-498268d7161b" containerName="dnsmasq-dns" containerID="cri-o://4ae3666cafb7fa634348658939e12b17295ac451d707af44c7a236580bd13a3d" gracePeriod=10 Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.689942 5136 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685785d49f-r6vtp" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.697389 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da7b3de9-906c-4470-9b45-498268d7161b-config\") pod \"da7b3de9-906c-4470-9b45-498268d7161b\" (UID: \"da7b3de9-906c-4470-9b45-498268d7161b\") " Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.697432 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da7b3de9-906c-4470-9b45-498268d7161b-dns-svc\") pod \"da7b3de9-906c-4470-9b45-498268d7161b\" (UID: \"da7b3de9-906c-4470-9b45-498268d7161b\") " Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.697519 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6w6c\" (UniqueName: \"kubernetes.io/projected/da7b3de9-906c-4470-9b45-498268d7161b-kube-api-access-r6w6c\") pod \"da7b3de9-906c-4470-9b45-498268d7161b\" (UID: \"da7b3de9-906c-4470-9b45-498268d7161b\") " Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.702194 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da7b3de9-906c-4470-9b45-498268d7161b-kube-api-access-r6w6c" (OuterVolumeSpecName: "kube-api-access-r6w6c") pod "da7b3de9-906c-4470-9b45-498268d7161b" (UID: "da7b3de9-906c-4470-9b45-498268d7161b"). InnerVolumeSpecName "kube-api-access-r6w6c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.737456 5136 generic.go:334] "Generic (PLEG): container finished" podID="da7b3de9-906c-4470-9b45-498268d7161b" containerID="4ae3666cafb7fa634348658939e12b17295ac451d707af44c7a236580bd13a3d" exitCode=0 Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.737794 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685785d49f-r6vtp" event={"ID":"da7b3de9-906c-4470-9b45-498268d7161b","Type":"ContainerDied","Data":"4ae3666cafb7fa634348658939e12b17295ac451d707af44c7a236580bd13a3d"} Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.737849 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685785d49f-r6vtp" event={"ID":"da7b3de9-906c-4470-9b45-498268d7161b","Type":"ContainerDied","Data":"b875bc351439176b0ec46aee9af86021a8df8774ef7b812887178dc8835e12d7"} Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.737938 5136 scope.go:117] "RemoveContainer" containerID="4ae3666cafb7fa634348658939e12b17295ac451d707af44c7a236580bd13a3d" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.738335 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685785d49f-r6vtp" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.773444 5136 scope.go:117] "RemoveContainer" containerID="7f6641fccf5c97ba3e9c777aa43831b0b2400a29f6d3abc456305f538c1396c0" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.774158 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da7b3de9-906c-4470-9b45-498268d7161b-config" (OuterVolumeSpecName: "config") pod "da7b3de9-906c-4470-9b45-498268d7161b" (UID: "da7b3de9-906c-4470-9b45-498268d7161b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.779153 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da7b3de9-906c-4470-9b45-498268d7161b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "da7b3de9-906c-4470-9b45-498268d7161b" (UID: "da7b3de9-906c-4470-9b45-498268d7161b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.790547 5136 scope.go:117] "RemoveContainer" containerID="4ae3666cafb7fa634348658939e12b17295ac451d707af44c7a236580bd13a3d" Mar 20 08:33:43 crc kubenswrapper[5136]: E0320 08:33:43.790961 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ae3666cafb7fa634348658939e12b17295ac451d707af44c7a236580bd13a3d\": container with ID starting with 4ae3666cafb7fa634348658939e12b17295ac451d707af44c7a236580bd13a3d not found: ID does not exist" containerID="4ae3666cafb7fa634348658939e12b17295ac451d707af44c7a236580bd13a3d" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.790992 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ae3666cafb7fa634348658939e12b17295ac451d707af44c7a236580bd13a3d"} err="failed to get container status \"4ae3666cafb7fa634348658939e12b17295ac451d707af44c7a236580bd13a3d\": rpc error: code = NotFound desc = could not find container \"4ae3666cafb7fa634348658939e12b17295ac451d707af44c7a236580bd13a3d\": container with ID starting with 4ae3666cafb7fa634348658939e12b17295ac451d707af44c7a236580bd13a3d not found: ID does not exist" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.791013 5136 scope.go:117] "RemoveContainer" containerID="7f6641fccf5c97ba3e9c777aa43831b0b2400a29f6d3abc456305f538c1396c0" Mar 20 08:33:43 crc kubenswrapper[5136]: E0320 08:33:43.791207 5136 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f6641fccf5c97ba3e9c777aa43831b0b2400a29f6d3abc456305f538c1396c0\": container with ID starting with 7f6641fccf5c97ba3e9c777aa43831b0b2400a29f6d3abc456305f538c1396c0 not found: ID does not exist" containerID="7f6641fccf5c97ba3e9c777aa43831b0b2400a29f6d3abc456305f538c1396c0" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.791243 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f6641fccf5c97ba3e9c777aa43831b0b2400a29f6d3abc456305f538c1396c0"} err="failed to get container status \"7f6641fccf5c97ba3e9c777aa43831b0b2400a29f6d3abc456305f538c1396c0\": rpc error: code = NotFound desc = could not find container \"7f6641fccf5c97ba3e9c777aa43831b0b2400a29f6d3abc456305f538c1396c0\": container with ID starting with 7f6641fccf5c97ba3e9c777aa43831b0b2400a29f6d3abc456305f538c1396c0 not found: ID does not exist" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.798720 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da7b3de9-906c-4470-9b45-498268d7161b-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.798742 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6w6c\" (UniqueName: \"kubernetes.io/projected/da7b3de9-906c-4470-9b45-498268d7161b-kube-api-access-r6w6c\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.798753 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da7b3de9-906c-4470-9b45-498268d7161b-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.072479 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685785d49f-r6vtp"] Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.077611 5136 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-685785d49f-r6vtp"] Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.278543 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.406707 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da7b3de9-906c-4470-9b45-498268d7161b" path="/var/lib/kubelet/pods/da7b3de9-906c-4470-9b45-498268d7161b/volumes" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.409466 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5rbp\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-kube-api-access-q5rbp\") pod \"49b925de-d698-4589-9f71-cf485dd617d2\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.409864 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-server-conf\") pod \"49b925de-d698-4589-9f71-cf485dd617d2\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.410021 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-confd\") pod \"49b925de-d698-4589-9f71-cf485dd617d2\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.410057 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49b925de-d698-4589-9f71-cf485dd617d2-pod-info\") pod \"49b925de-d698-4589-9f71-cf485dd617d2\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.410090 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49b925de-d698-4589-9f71-cf485dd617d2-erlang-cookie-secret\") pod \"49b925de-d698-4589-9f71-cf485dd617d2\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.410121 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-config-data\") pod \"49b925de-d698-4589-9f71-cf485dd617d2\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.410170 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-plugins\") pod \"49b925de-d698-4589-9f71-cf485dd617d2\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.410346 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") pod \"49b925de-d698-4589-9f71-cf485dd617d2\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.410931 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-plugins-conf\") pod \"49b925de-d698-4589-9f71-cf485dd617d2\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.410976 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-erlang-cookie\") pod 
\"49b925de-d698-4589-9f71-cf485dd617d2\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.411004 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-tls\") pod \"49b925de-d698-4589-9f71-cf485dd617d2\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.411327 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "49b925de-d698-4589-9f71-cf485dd617d2" (UID: "49b925de-d698-4589-9f71-cf485dd617d2"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.411719 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.412424 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "49b925de-d698-4589-9f71-cf485dd617d2" (UID: "49b925de-d698-4589-9f71-cf485dd617d2"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.414861 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "49b925de-d698-4589-9f71-cf485dd617d2" (UID: "49b925de-d698-4589-9f71-cf485dd617d2"). 
InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.414898 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/49b925de-d698-4589-9f71-cf485dd617d2-pod-info" (OuterVolumeSpecName: "pod-info") pod "49b925de-d698-4589-9f71-cf485dd617d2" (UID: "49b925de-d698-4589-9f71-cf485dd617d2"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.415060 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "49b925de-d698-4589-9f71-cf485dd617d2" (UID: "49b925de-d698-4589-9f71-cf485dd617d2"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.418016 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-kube-api-access-q5rbp" (OuterVolumeSpecName: "kube-api-access-q5rbp") pod "49b925de-d698-4589-9f71-cf485dd617d2" (UID: "49b925de-d698-4589-9f71-cf485dd617d2"). InnerVolumeSpecName "kube-api-access-q5rbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.423932 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49b925de-d698-4589-9f71-cf485dd617d2-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "49b925de-d698-4589-9f71-cf485dd617d2" (UID: "49b925de-d698-4589-9f71-cf485dd617d2"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.435144 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0" (OuterVolumeSpecName: "persistence") pod "49b925de-d698-4589-9f71-cf485dd617d2" (UID: "49b925de-d698-4589-9f71-cf485dd617d2"). InnerVolumeSpecName "pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.436972 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-config-data" (OuterVolumeSpecName: "config-data") pod "49b925de-d698-4589-9f71-cf485dd617d2" (UID: "49b925de-d698-4589-9f71-cf485dd617d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.457205 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-server-conf" (OuterVolumeSpecName: "server-conf") pod "49b925de-d698-4589-9f71-cf485dd617d2" (UID: "49b925de-d698-4589-9f71-cf485dd617d2"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.484030 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "49b925de-d698-4589-9f71-cf485dd617d2" (UID: "49b925de-d698-4589-9f71-cf485dd617d2"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.513187 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.513223 5136 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49b925de-d698-4589-9f71-cf485dd617d2-pod-info\") on node \"crc\" DevicePath \"\""
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.513233 5136 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49b925de-d698-4589-9f71-cf485dd617d2-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.513242 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.513281 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") on node \"crc\" "
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.513294 5136 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-plugins-conf\") on node \"crc\" DevicePath \"\""
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.513302 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.513311 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.513320 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5rbp\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-kube-api-access-q5rbp\") on node \"crc\" DevicePath \"\""
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.513328 5136 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-server-conf\") on node \"crc\" DevicePath \"\""
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.528532 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.528688 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0") on node "crc"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.615147 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") on node \"crc\" DevicePath \"\""
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.745255 5136 generic.go:334] "Generic (PLEG): container finished" podID="49b925de-d698-4589-9f71-cf485dd617d2" containerID="112b570f6ea3b0c9bd8b688662b38240a74ac9d49475a251881696032ace1682" exitCode=0
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.745351 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.745859 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49b925de-d698-4589-9f71-cf485dd617d2","Type":"ContainerDied","Data":"112b570f6ea3b0c9bd8b688662b38240a74ac9d49475a251881696032ace1682"}
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.745906 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49b925de-d698-4589-9f71-cf485dd617d2","Type":"ContainerDied","Data":"e183efadd223541a56fac8eea2671471e01e5264c953e53547b41494176cef67"}
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.745921 5136 scope.go:117] "RemoveContainer" containerID="112b570f6ea3b0c9bd8b688662b38240a74ac9d49475a251881696032ace1682"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.767572 5136 scope.go:117] "RemoveContainer" containerID="ce615ae1b073a852fb2f8618428d9305c2f1f4a5211b5abec9f3978f7dd4129e"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.776974 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.782796 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.804649 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 20 08:33:44 crc kubenswrapper[5136]: E0320 08:33:44.805094 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" containerName="extract-utilities"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.805130 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" containerName="extract-utilities"
Mar 20 08:33:44 crc kubenswrapper[5136]: E0320 08:33:44.805146 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" containerName="extract-content"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.805152 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" containerName="extract-content"
Mar 20 08:33:44 crc kubenswrapper[5136]: E0320 08:33:44.805170 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49b925de-d698-4589-9f71-cf485dd617d2" containerName="setup-container"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.805175 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="49b925de-d698-4589-9f71-cf485dd617d2" containerName="setup-container"
Mar 20 08:33:44 crc kubenswrapper[5136]: E0320 08:33:44.805187 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da7b3de9-906c-4470-9b45-498268d7161b" containerName="init"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.805212 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="da7b3de9-906c-4470-9b45-498268d7161b" containerName="init"
Mar 20 08:33:44 crc kubenswrapper[5136]: E0320 08:33:44.805234 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49b925de-d698-4589-9f71-cf485dd617d2" containerName="rabbitmq"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.805239 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="49b925de-d698-4589-9f71-cf485dd617d2" containerName="rabbitmq"
Mar 20 08:33:44 crc kubenswrapper[5136]: E0320 08:33:44.805250 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da7b3de9-906c-4470-9b45-498268d7161b" containerName="dnsmasq-dns"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.805256 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="da7b3de9-906c-4470-9b45-498268d7161b" containerName="dnsmasq-dns"
Mar 20 08:33:44 crc kubenswrapper[5136]: E0320 08:33:44.805266 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" containerName="registry-server"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.805290 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" containerName="registry-server"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.805451 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" containerName="registry-server"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.805466 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="49b925de-d698-4589-9f71-cf485dd617d2" containerName="rabbitmq"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.805474 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="da7b3de9-906c-4470-9b45-498268d7161b" containerName="dnsmasq-dns"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.806444 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.811492 5136 scope.go:117] "RemoveContainer" containerID="112b570f6ea3b0c9bd8b688662b38240a74ac9d49475a251881696032ace1682"
Mar 20 08:33:44 crc kubenswrapper[5136]: E0320 08:33:44.812200 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"112b570f6ea3b0c9bd8b688662b38240a74ac9d49475a251881696032ace1682\": container with ID starting with 112b570f6ea3b0c9bd8b688662b38240a74ac9d49475a251881696032ace1682 not found: ID does not exist" containerID="112b570f6ea3b0c9bd8b688662b38240a74ac9d49475a251881696032ace1682"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.812303 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"112b570f6ea3b0c9bd8b688662b38240a74ac9d49475a251881696032ace1682"} err="failed to get container status \"112b570f6ea3b0c9bd8b688662b38240a74ac9d49475a251881696032ace1682\": rpc error: code = NotFound desc = could not find container \"112b570f6ea3b0c9bd8b688662b38240a74ac9d49475a251881696032ace1682\": container with ID starting with 112b570f6ea3b0c9bd8b688662b38240a74ac9d49475a251881696032ace1682 not found: ID does not exist"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.812356 5136 scope.go:117] "RemoveContainer" containerID="ce615ae1b073a852fb2f8618428d9305c2f1f4a5211b5abec9f3978f7dd4129e"
Mar 20 08:33:44 crc kubenswrapper[5136]: E0320 08:33:44.812853 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce615ae1b073a852fb2f8618428d9305c2f1f4a5211b5abec9f3978f7dd4129e\": container with ID starting with ce615ae1b073a852fb2f8618428d9305c2f1f4a5211b5abec9f3978f7dd4129e not found: ID does not exist" containerID="ce615ae1b073a852fb2f8618428d9305c2f1f4a5211b5abec9f3978f7dd4129e"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.812884 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce615ae1b073a852fb2f8618428d9305c2f1f4a5211b5abec9f3978f7dd4129e"} err="failed to get container status \"ce615ae1b073a852fb2f8618428d9305c2f1f4a5211b5abec9f3978f7dd4129e\": rpc error: code = NotFound desc = could not find container \"ce615ae1b073a852fb2f8618428d9305c2f1f4a5211b5abec9f3978f7dd4129e\": container with ID starting with ce615ae1b073a852fb2f8618428d9305c2f1f4a5211b5abec9f3978f7dd4129e not found: ID does not exist"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.817764 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.818246 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.818413 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.818464 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.818639 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.818671 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.818643 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-hxrcr"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.818801 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.918850 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.918900 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfqqz\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-kube-api-access-vfqqz\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.918951 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.918974 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.919013 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.919064 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.919079 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.919092 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.919155 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.919177 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.919194 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.021011 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.021088 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.021113 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.021137 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.021210 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.021238 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.021871 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.021934 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.021975 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfqqz\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-kube-api-access-vfqqz\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.022028 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.022079 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.022487 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.022481 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.022626 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.023296 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.024475 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.025297 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.025428 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.026482 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.027128 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.027195 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ffa5bed1f53993894ec26cbc3fe1cf1f67f60a4766508e053a6d4d74251ebc8b/globalmount\"" pod="openstack/rabbitmq-server-0"
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.035559 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.042752 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfqqz\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-kube-api-access-vfqqz\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.057324 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.167592 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.281473 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.426979 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-config-data\") pod \"144d1953-0072-4346-9aa6-83afc44fdb3b\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") "
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.427334 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-tls\") pod \"144d1953-0072-4346-9aa6-83afc44fdb3b\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") "
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.427377 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-plugins-conf\") pod \"144d1953-0072-4346-9aa6-83afc44fdb3b\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") "
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.427449 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-confd\") pod \"144d1953-0072-4346-9aa6-83afc44fdb3b\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") "
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.427483 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-plugins\") pod \"144d1953-0072-4346-9aa6-83afc44fdb3b\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") "
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.427513 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/144d1953-0072-4346-9aa6-83afc44fdb3b-erlang-cookie-secret\") pod \"144d1953-0072-4346-9aa6-83afc44fdb3b\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") "
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.427537 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-erlang-cookie\") pod \"144d1953-0072-4346-9aa6-83afc44fdb3b\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") "
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.427604 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22ffp\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-kube-api-access-22ffp\") pod \"144d1953-0072-4346-9aa6-83afc44fdb3b\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") "
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.427835 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") pod \"144d1953-0072-4346-9aa6-83afc44fdb3b\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") "
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.427881 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/144d1953-0072-4346-9aa6-83afc44fdb3b-pod-info\") pod \"144d1953-0072-4346-9aa6-83afc44fdb3b\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") "
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.427905 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-server-conf\") pod \"144d1953-0072-4346-9aa6-83afc44fdb3b\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") "
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.429689 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "144d1953-0072-4346-9aa6-83afc44fdb3b" (UID: "144d1953-0072-4346-9aa6-83afc44fdb3b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.429939 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "144d1953-0072-4346-9aa6-83afc44fdb3b" (UID: "144d1953-0072-4346-9aa6-83afc44fdb3b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.430053 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "144d1953-0072-4346-9aa6-83afc44fdb3b" (UID: "144d1953-0072-4346-9aa6-83afc44fdb3b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.432086 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/144d1953-0072-4346-9aa6-83afc44fdb3b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "144d1953-0072-4346-9aa6-83afc44fdb3b" (UID: "144d1953-0072-4346-9aa6-83afc44fdb3b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.434926 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "144d1953-0072-4346-9aa6-83afc44fdb3b" (UID: "144d1953-0072-4346-9aa6-83afc44fdb3b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.436862 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/144d1953-0072-4346-9aa6-83afc44fdb3b-pod-info" (OuterVolumeSpecName: "pod-info") pod "144d1953-0072-4346-9aa6-83afc44fdb3b" (UID: "144d1953-0072-4346-9aa6-83afc44fdb3b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.438840 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-kube-api-access-22ffp" (OuterVolumeSpecName: "kube-api-access-22ffp") pod "144d1953-0072-4346-9aa6-83afc44fdb3b" (UID: "144d1953-0072-4346-9aa6-83afc44fdb3b"). InnerVolumeSpecName "kube-api-access-22ffp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.448754 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e" (OuterVolumeSpecName: "persistence") pod "144d1953-0072-4346-9aa6-83afc44fdb3b" (UID: "144d1953-0072-4346-9aa6-83afc44fdb3b"). InnerVolumeSpecName "pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.457790 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-config-data" (OuterVolumeSpecName: "config-data") pod "144d1953-0072-4346-9aa6-83afc44fdb3b" (UID: "144d1953-0072-4346-9aa6-83afc44fdb3b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.475311 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-server-conf" (OuterVolumeSpecName: "server-conf") pod "144d1953-0072-4346-9aa6-83afc44fdb3b" (UID: "144d1953-0072-4346-9aa6-83afc44fdb3b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.530026 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22ffp\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-kube-api-access-22ffp\") on node \"crc\" DevicePath \"\""
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.530070 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") on node \"crc\" "
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.530084 5136 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/144d1953-0072-4346-9aa6-83afc44fdb3b-pod-info\") on node \"crc\" DevicePath \"\""
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.530095 5136 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-server-conf\") on node \"crc\" DevicePath \"\""
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.530105 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.530114 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.530125 5136 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-plugins-conf\") on node \"crc\" DevicePath \"\""
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.530135 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.530143 5136 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/144d1953-0072-4346-9aa6-83afc44fdb3b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.530151 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.545213 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "144d1953-0072-4346-9aa6-83afc44fdb3b" (UID: "144d1953-0072-4346-9aa6-83afc44fdb3b"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.545307 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.545452 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e") on node "crc" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.631469 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.631506 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:45 crc kubenswrapper[5136]: W0320 08:33:45.656200 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod804d1bff_7c63_45a1_bf1a_68f3eedb6ac7.slice/crio-2628ea0efb2aa853724bc88efed7cc193022127cf5eeb6dafde84a174a83f933 WatchSource:0}: Error finding container 2628ea0efb2aa853724bc88efed7cc193022127cf5eeb6dafde84a174a83f933: Status 404 returned error can't find the container with id 2628ea0efb2aa853724bc88efed7cc193022127cf5eeb6dafde84a174a83f933 Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.658625 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.757360 5136 generic.go:334] "Generic (PLEG): container finished" podID="144d1953-0072-4346-9aa6-83afc44fdb3b" containerID="63021d009c2e3f21f5efaffbf79f616d34212441ac2e48211dcf9feccecac0ef" exitCode=0 Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.757416 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.757476 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"144d1953-0072-4346-9aa6-83afc44fdb3b","Type":"ContainerDied","Data":"63021d009c2e3f21f5efaffbf79f616d34212441ac2e48211dcf9feccecac0ef"} Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.757551 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"144d1953-0072-4346-9aa6-83afc44fdb3b","Type":"ContainerDied","Data":"7392b8c85d117a71e5c4a2c47ce52f8f48e947a5928969391300b523dfc80f5d"} Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.757581 5136 scope.go:117] "RemoveContainer" containerID="63021d009c2e3f21f5efaffbf79f616d34212441ac2e48211dcf9feccecac0ef" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.764045 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7","Type":"ContainerStarted","Data":"2628ea0efb2aa853724bc88efed7cc193022127cf5eeb6dafde84a174a83f933"} Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.793284 5136 scope.go:117] "RemoveContainer" containerID="35c4b0791e9a204a171a31520811fac746150d8bfc15bd254e254723260ddd2f" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.794214 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.800650 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.817903 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 08:33:45 crc kubenswrapper[5136]: E0320 08:33:45.818194 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="144d1953-0072-4346-9aa6-83afc44fdb3b" 
containerName="rabbitmq" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.818209 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="144d1953-0072-4346-9aa6-83afc44fdb3b" containerName="rabbitmq" Mar 20 08:33:45 crc kubenswrapper[5136]: E0320 08:33:45.818240 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="144d1953-0072-4346-9aa6-83afc44fdb3b" containerName="setup-container" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.818248 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="144d1953-0072-4346-9aa6-83afc44fdb3b" containerName="setup-container" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.818374 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="144d1953-0072-4346-9aa6-83afc44fdb3b" containerName="rabbitmq" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.819026 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.820858 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.821002 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.821029 5136 scope.go:117] "RemoveContainer" containerID="63021d009c2e3f21f5efaffbf79f616d34212441ac2e48211dcf9feccecac0ef" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.821244 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rxs59" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.821296 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 20 08:33:45 crc kubenswrapper[5136]: E0320 08:33:45.822005 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"63021d009c2e3f21f5efaffbf79f616d34212441ac2e48211dcf9feccecac0ef\": container with ID starting with 63021d009c2e3f21f5efaffbf79f616d34212441ac2e48211dcf9feccecac0ef not found: ID does not exist" containerID="63021d009c2e3f21f5efaffbf79f616d34212441ac2e48211dcf9feccecac0ef" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.822058 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63021d009c2e3f21f5efaffbf79f616d34212441ac2e48211dcf9feccecac0ef"} err="failed to get container status \"63021d009c2e3f21f5efaffbf79f616d34212441ac2e48211dcf9feccecac0ef\": rpc error: code = NotFound desc = could not find container \"63021d009c2e3f21f5efaffbf79f616d34212441ac2e48211dcf9feccecac0ef\": container with ID starting with 63021d009c2e3f21f5efaffbf79f616d34212441ac2e48211dcf9feccecac0ef not found: ID does not exist" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.822080 5136 scope.go:117] "RemoveContainer" containerID="35c4b0791e9a204a171a31520811fac746150d8bfc15bd254e254723260ddd2f" Mar 20 08:33:45 crc kubenswrapper[5136]: E0320 08:33:45.822427 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35c4b0791e9a204a171a31520811fac746150d8bfc15bd254e254723260ddd2f\": container with ID starting with 35c4b0791e9a204a171a31520811fac746150d8bfc15bd254e254723260ddd2f not found: ID does not exist" containerID="35c4b0791e9a204a171a31520811fac746150d8bfc15bd254e254723260ddd2f" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.822449 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35c4b0791e9a204a171a31520811fac746150d8bfc15bd254e254723260ddd2f"} err="failed to get container status \"35c4b0791e9a204a171a31520811fac746150d8bfc15bd254e254723260ddd2f\": rpc error: code = NotFound desc = could not find container 
\"35c4b0791e9a204a171a31520811fac746150d8bfc15bd254e254723260ddd2f\": container with ID starting with 35c4b0791e9a204a171a31520811fac746150d8bfc15bd254e254723260ddd2f not found: ID does not exist" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.825304 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.825417 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.825578 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.869932 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.934666 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.934715 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.934762 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.934977 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.935145 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2c9ab46-3143-4472-a606-cd75def78f41-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.935323 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.935543 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zckzm\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-kube-api-access-zckzm\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.935713 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-server-conf\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.935883 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.935988 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.936121 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2c9ab46-3143-4472-a606-cd75def78f41-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.038002 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.038075 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.038116 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2c9ab46-3143-4472-a606-cd75def78f41-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.038157 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.038211 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zckzm\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-kube-api-access-zckzm\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.038254 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.038296 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.038337 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.038392 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2c9ab46-3143-4472-a606-cd75def78f41-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.038446 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.038486 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.038980 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.039099 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.039544 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.039893 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.040304 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.040918 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.041183 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d0b8d36279754dae866b74592d574f198fefe86644de71828fcab427244d57e1/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.043926 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2c9ab46-3143-4472-a606-cd75def78f41-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.044096 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2c9ab46-3143-4472-a606-cd75def78f41-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.044162 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.048296 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.055630 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zckzm\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-kube-api-access-zckzm\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.072577 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.150427 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.406181 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="144d1953-0072-4346-9aa6-83afc44fdb3b" path="/var/lib/kubelet/pods/144d1953-0072-4346-9aa6-83afc44fdb3b/volumes" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.407105 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49b925de-d698-4589-9f71-cf485dd617d2" path="/var/lib/kubelet/pods/49b925de-d698-4589-9f71-cf485dd617d2/volumes" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.492288 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.718389 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.769590 5136 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.794985 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e2c9ab46-3143-4472-a606-cd75def78f41","Type":"ContainerStarted","Data":"25cfdad0d21b0236b303f371c98360bf9fc45a61724844374d71ad0ec3fcc738"} Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.799993 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7","Type":"ContainerStarted","Data":"9dce885b2b155ce206b241a4559ff57088997d22aa0ed36e735fad7b8993132d"} Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.950512 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-87czr"] Mar 20 08:33:47 crc kubenswrapper[5136]: I0320 08:33:47.806078 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-87czr" podUID="c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" containerName="registry-server" containerID="cri-o://c7808028bff61733663b4a106c6026f91b687eff928a7de2f1aa7454f92490c5" gracePeriod=2 Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.280483 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.374229 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dddvp\" (UniqueName: \"kubernetes.io/projected/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-kube-api-access-dddvp\") pod \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\" (UID: \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\") " Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.374394 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-utilities\") pod \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\" (UID: \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\") " Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.374426 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-catalog-content\") pod \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\" (UID: \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\") " Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.375304 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-utilities" (OuterVolumeSpecName: "utilities") pod "c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" (UID: "c0517cd7-ad7f-4547-9d3e-df34e8cf61f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.382781 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-kube-api-access-dddvp" (OuterVolumeSpecName: "kube-api-access-dddvp") pod "c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" (UID: "c0517cd7-ad7f-4547-9d3e-df34e8cf61f1"). InnerVolumeSpecName "kube-api-access-dddvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.476592 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.476620 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dddvp\" (UniqueName: \"kubernetes.io/projected/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-kube-api-access-dddvp\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.525244 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" (UID: "c0517cd7-ad7f-4547-9d3e-df34e8cf61f1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.577783 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.815580 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e2c9ab46-3143-4472-a606-cd75def78f41","Type":"ContainerStarted","Data":"ce1a8383fb01688c0f1caeb5e4320574fa6dbdbc0ec482dd28b879fc3c091ab9"} Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.818579 5136 generic.go:334] "Generic (PLEG): container finished" podID="c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" containerID="c7808028bff61733663b4a106c6026f91b687eff928a7de2f1aa7454f92490c5" exitCode=0 Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.818640 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-87czr" event={"ID":"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1","Type":"ContainerDied","Data":"c7808028bff61733663b4a106c6026f91b687eff928a7de2f1aa7454f92490c5"} Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.818661 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.818680 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87czr" event={"ID":"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1","Type":"ContainerDied","Data":"56b0c4c299b46691e68d61c887ac0f6c17c1ba615dd505f5078a7f25fba0c4ba"} Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.818702 5136 scope.go:117] "RemoveContainer" containerID="c7808028bff61733663b4a106c6026f91b687eff928a7de2f1aa7454f92490c5" Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.843202 5136 scope.go:117] "RemoveContainer" containerID="31bac2054e2ae1beb6d4ff79d77175c41e16c094e8908c107001af184ff07175" Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.889011 5136 scope.go:117] "RemoveContainer" containerID="d8fb00431917424824544a4397bdff29554efbd29f162120fbbe466b8949016a" Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.889239 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-87czr"] Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.895117 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-87czr"] Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.907592 5136 scope.go:117] "RemoveContainer" containerID="c7808028bff61733663b4a106c6026f91b687eff928a7de2f1aa7454f92490c5" Mar 20 08:33:48 crc kubenswrapper[5136]: E0320 08:33:48.908031 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c7808028bff61733663b4a106c6026f91b687eff928a7de2f1aa7454f92490c5\": container with ID starting with c7808028bff61733663b4a106c6026f91b687eff928a7de2f1aa7454f92490c5 not found: ID does not exist" containerID="c7808028bff61733663b4a106c6026f91b687eff928a7de2f1aa7454f92490c5" Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.908066 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7808028bff61733663b4a106c6026f91b687eff928a7de2f1aa7454f92490c5"} err="failed to get container status \"c7808028bff61733663b4a106c6026f91b687eff928a7de2f1aa7454f92490c5\": rpc error: code = NotFound desc = could not find container \"c7808028bff61733663b4a106c6026f91b687eff928a7de2f1aa7454f92490c5\": container with ID starting with c7808028bff61733663b4a106c6026f91b687eff928a7de2f1aa7454f92490c5 not found: ID does not exist" Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.908092 5136 scope.go:117] "RemoveContainer" containerID="31bac2054e2ae1beb6d4ff79d77175c41e16c094e8908c107001af184ff07175" Mar 20 08:33:48 crc kubenswrapper[5136]: E0320 08:33:48.908382 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31bac2054e2ae1beb6d4ff79d77175c41e16c094e8908c107001af184ff07175\": container with ID starting with 31bac2054e2ae1beb6d4ff79d77175c41e16c094e8908c107001af184ff07175 not found: ID does not exist" containerID="31bac2054e2ae1beb6d4ff79d77175c41e16c094e8908c107001af184ff07175" Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.908411 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31bac2054e2ae1beb6d4ff79d77175c41e16c094e8908c107001af184ff07175"} err="failed to get container status \"31bac2054e2ae1beb6d4ff79d77175c41e16c094e8908c107001af184ff07175\": rpc error: code = NotFound desc = could not find container \"31bac2054e2ae1beb6d4ff79d77175c41e16c094e8908c107001af184ff07175\": container with ID 
starting with 31bac2054e2ae1beb6d4ff79d77175c41e16c094e8908c107001af184ff07175 not found: ID does not exist" Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.908430 5136 scope.go:117] "RemoveContainer" containerID="d8fb00431917424824544a4397bdff29554efbd29f162120fbbe466b8949016a" Mar 20 08:33:48 crc kubenswrapper[5136]: E0320 08:33:48.908785 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8fb00431917424824544a4397bdff29554efbd29f162120fbbe466b8949016a\": container with ID starting with d8fb00431917424824544a4397bdff29554efbd29f162120fbbe466b8949016a not found: ID does not exist" containerID="d8fb00431917424824544a4397bdff29554efbd29f162120fbbe466b8949016a" Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.908811 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8fb00431917424824544a4397bdff29554efbd29f162120fbbe466b8949016a"} err="failed to get container status \"d8fb00431917424824544a4397bdff29554efbd29f162120fbbe466b8949016a\": rpc error: code = NotFound desc = could not find container \"d8fb00431917424824544a4397bdff29554efbd29f162120fbbe466b8949016a\": container with ID starting with d8fb00431917424824544a4397bdff29554efbd29f162120fbbe466b8949016a not found: ID does not exist" Mar 20 08:33:50 crc kubenswrapper[5136]: I0320 08:33:50.409937 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" path="/var/lib/kubelet/pods/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1/volumes" Mar 20 08:33:51 crc kubenswrapper[5136]: I0320 08:33:51.397477 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:33:51 crc kubenswrapper[5136]: E0320 08:33:51.397915 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.185405 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566594-shcj9"] Mar 20 08:34:00 crc kubenswrapper[5136]: E0320 08:34:00.186490 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" containerName="extract-content" Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.186519 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" containerName="extract-content" Mar 20 08:34:00 crc kubenswrapper[5136]: E0320 08:34:00.186588 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" containerName="registry-server" Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.186601 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" containerName="registry-server" Mar 20 08:34:00 crc kubenswrapper[5136]: E0320 08:34:00.186616 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" containerName="extract-utilities" Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.186626 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" containerName="extract-utilities" Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.187210 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" containerName="registry-server" Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.188393 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566594-shcj9" Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.199767 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.200366 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.201251 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.202348 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566594-shcj9"] Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.290142 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znhww\" (UniqueName: \"kubernetes.io/projected/948b6ddf-f1f2-46ef-9d9f-1e07c71f593e-kube-api-access-znhww\") pod \"auto-csr-approver-29566594-shcj9\" (UID: \"948b6ddf-f1f2-46ef-9d9f-1e07c71f593e\") " pod="openshift-infra/auto-csr-approver-29566594-shcj9" Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.393135 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znhww\" (UniqueName: \"kubernetes.io/projected/948b6ddf-f1f2-46ef-9d9f-1e07c71f593e-kube-api-access-znhww\") pod \"auto-csr-approver-29566594-shcj9\" (UID: \"948b6ddf-f1f2-46ef-9d9f-1e07c71f593e\") " pod="openshift-infra/auto-csr-approver-29566594-shcj9" Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.415903 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znhww\" (UniqueName: \"kubernetes.io/projected/948b6ddf-f1f2-46ef-9d9f-1e07c71f593e-kube-api-access-znhww\") pod \"auto-csr-approver-29566594-shcj9\" (UID: \"948b6ddf-f1f2-46ef-9d9f-1e07c71f593e\") " 
pod="openshift-infra/auto-csr-approver-29566594-shcj9" Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.531575 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566594-shcj9" Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.995410 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566594-shcj9"] Mar 20 08:34:01 crc kubenswrapper[5136]: W0320 08:34:01.004510 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod948b6ddf_f1f2_46ef_9d9f_1e07c71f593e.slice/crio-5b14e624362946912d41fa2233e52cb4f1df14e8ac842acd495bed493956f50c WatchSource:0}: Error finding container 5b14e624362946912d41fa2233e52cb4f1df14e8ac842acd495bed493956f50c: Status 404 returned error can't find the container with id 5b14e624362946912d41fa2233e52cb4f1df14e8ac842acd495bed493956f50c Mar 20 08:34:01 crc kubenswrapper[5136]: I0320 08:34:01.937333 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566594-shcj9" event={"ID":"948b6ddf-f1f2-46ef-9d9f-1e07c71f593e","Type":"ContainerStarted","Data":"5b14e624362946912d41fa2233e52cb4f1df14e8ac842acd495bed493956f50c"} Mar 20 08:34:02 crc kubenswrapper[5136]: I0320 08:34:02.946033 5136 generic.go:334] "Generic (PLEG): container finished" podID="948b6ddf-f1f2-46ef-9d9f-1e07c71f593e" containerID="46b6ff50442d3c65cf954ac428d83a34b6951bd632d4aff5243b9fdb9f413511" exitCode=0 Mar 20 08:34:02 crc kubenswrapper[5136]: I0320 08:34:02.946247 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566594-shcj9" event={"ID":"948b6ddf-f1f2-46ef-9d9f-1e07c71f593e","Type":"ContainerDied","Data":"46b6ff50442d3c65cf954ac428d83a34b6951bd632d4aff5243b9fdb9f413511"} Mar 20 08:34:04 crc kubenswrapper[5136]: I0320 08:34:04.240654 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566594-shcj9" Mar 20 08:34:04 crc kubenswrapper[5136]: I0320 08:34:04.347170 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znhww\" (UniqueName: \"kubernetes.io/projected/948b6ddf-f1f2-46ef-9d9f-1e07c71f593e-kube-api-access-znhww\") pod \"948b6ddf-f1f2-46ef-9d9f-1e07c71f593e\" (UID: \"948b6ddf-f1f2-46ef-9d9f-1e07c71f593e\") " Mar 20 08:34:04 crc kubenswrapper[5136]: I0320 08:34:04.352850 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/948b6ddf-f1f2-46ef-9d9f-1e07c71f593e-kube-api-access-znhww" (OuterVolumeSpecName: "kube-api-access-znhww") pod "948b6ddf-f1f2-46ef-9d9f-1e07c71f593e" (UID: "948b6ddf-f1f2-46ef-9d9f-1e07c71f593e"). InnerVolumeSpecName "kube-api-access-znhww". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:34:04 crc kubenswrapper[5136]: I0320 08:34:04.397192 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:34:04 crc kubenswrapper[5136]: E0320 08:34:04.397408 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:34:04 crc kubenswrapper[5136]: I0320 08:34:04.449163 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znhww\" (UniqueName: \"kubernetes.io/projected/948b6ddf-f1f2-46ef-9d9f-1e07c71f593e-kube-api-access-znhww\") on node \"crc\" DevicePath \"\"" Mar 20 08:34:04 crc kubenswrapper[5136]: I0320 08:34:04.960799 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29566594-shcj9" event={"ID":"948b6ddf-f1f2-46ef-9d9f-1e07c71f593e","Type":"ContainerDied","Data":"5b14e624362946912d41fa2233e52cb4f1df14e8ac842acd495bed493956f50c"} Mar 20 08:34:04 crc kubenswrapper[5136]: I0320 08:34:04.960869 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566594-shcj9" Mar 20 08:34:04 crc kubenswrapper[5136]: I0320 08:34:04.960876 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b14e624362946912d41fa2233e52cb4f1df14e8ac842acd495bed493956f50c" Mar 20 08:34:05 crc kubenswrapper[5136]: I0320 08:34:05.323977 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566588-jjbzp"] Mar 20 08:34:05 crc kubenswrapper[5136]: I0320 08:34:05.328550 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566588-jjbzp"] Mar 20 08:34:06 crc kubenswrapper[5136]: I0320 08:34:06.405309 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca2b685a-cbe9-4989-87d2-09c8c1b3a846" path="/var/lib/kubelet/pods/ca2b685a-cbe9-4989-87d2-09c8c1b3a846/volumes" Mar 20 08:34:19 crc kubenswrapper[5136]: E0320 08:34:19.009412 5136 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod804d1bff_7c63_45a1_bf1a_68f3eedb6ac7.slice/crio-9dce885b2b155ce206b241a4559ff57088997d22aa0ed36e735fad7b8993132d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod804d1bff_7c63_45a1_bf1a_68f3eedb6ac7.slice/crio-conmon-9dce885b2b155ce206b241a4559ff57088997d22aa0ed36e735fad7b8993132d.scope\": RecentStats: unable to find data in memory cache]" Mar 20 08:34:19 crc kubenswrapper[5136]: I0320 08:34:19.082397 5136 generic.go:334] "Generic (PLEG): container finished" 
podID="804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" containerID="9dce885b2b155ce206b241a4559ff57088997d22aa0ed36e735fad7b8993132d" exitCode=0 Mar 20 08:34:19 crc kubenswrapper[5136]: I0320 08:34:19.082511 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7","Type":"ContainerDied","Data":"9dce885b2b155ce206b241a4559ff57088997d22aa0ed36e735fad7b8993132d"} Mar 20 08:34:19 crc kubenswrapper[5136]: I0320 08:34:19.398911 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:34:19 crc kubenswrapper[5136]: E0320 08:34:19.399259 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:34:20 crc kubenswrapper[5136]: I0320 08:34:20.091476 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7","Type":"ContainerStarted","Data":"55989c472a0077640a315a8d0db45eb57abfdba8bbd4415c0bb59c9d232cd911"} Mar 20 08:34:20 crc kubenswrapper[5136]: I0320 08:34:20.092017 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 20 08:34:20 crc kubenswrapper[5136]: I0320 08:34:20.120500 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.120473576 podStartE2EDuration="36.120473576s" podCreationTimestamp="2026-03-20 08:33:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-20 08:34:20.111700094 +0000 UTC m=+6292.371011275" watchObservedRunningTime="2026-03-20 08:34:20.120473576 +0000 UTC m=+6292.379784767" Mar 20 08:34:21 crc kubenswrapper[5136]: I0320 08:34:21.102228 5136 generic.go:334] "Generic (PLEG): container finished" podID="e2c9ab46-3143-4472-a606-cd75def78f41" containerID="ce1a8383fb01688c0f1caeb5e4320574fa6dbdbc0ec482dd28b879fc3c091ab9" exitCode=0 Mar 20 08:34:21 crc kubenswrapper[5136]: I0320 08:34:21.102284 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e2c9ab46-3143-4472-a606-cd75def78f41","Type":"ContainerDied","Data":"ce1a8383fb01688c0f1caeb5e4320574fa6dbdbc0ec482dd28b879fc3c091ab9"} Mar 20 08:34:22 crc kubenswrapper[5136]: I0320 08:34:22.113302 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e2c9ab46-3143-4472-a606-cd75def78f41","Type":"ContainerStarted","Data":"a198c7f8e8377652b482647dfc5c30af750d97e67707d6fc3e807132b202cf80"} Mar 20 08:34:22 crc kubenswrapper[5136]: I0320 08:34:22.113749 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:34:22 crc kubenswrapper[5136]: I0320 08:34:22.143768 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.143739294 podStartE2EDuration="37.143739294s" podCreationTimestamp="2026-03-20 08:33:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:34:22.140498133 +0000 UTC m=+6294.399809294" watchObservedRunningTime="2026-03-20 08:34:22.143739294 +0000 UTC m=+6294.403050485" Mar 20 08:34:34 crc kubenswrapper[5136]: I0320 08:34:34.397270 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:34:34 crc kubenswrapper[5136]: 
E0320 08:34:34.398180 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:34:35 crc kubenswrapper[5136]: I0320 08:34:35.171069 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 20 08:34:36 crc kubenswrapper[5136]: I0320 08:34:36.153015 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:34:37 crc kubenswrapper[5136]: I0320 08:34:37.618920 5136 scope.go:117] "RemoveContainer" containerID="0bdf2244928c50e418739f666f637d0c122d85d20e0278df3b68b937bca89d79" Mar 20 08:34:39 crc kubenswrapper[5136]: I0320 08:34:39.689480 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Mar 20 08:34:39 crc kubenswrapper[5136]: E0320 08:34:39.690579 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948b6ddf-f1f2-46ef-9d9f-1e07c71f593e" containerName="oc" Mar 20 08:34:39 crc kubenswrapper[5136]: I0320 08:34:39.690597 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="948b6ddf-f1f2-46ef-9d9f-1e07c71f593e" containerName="oc" Mar 20 08:34:39 crc kubenswrapper[5136]: I0320 08:34:39.690799 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="948b6ddf-f1f2-46ef-9d9f-1e07c71f593e" containerName="oc" Mar 20 08:34:39 crc kubenswrapper[5136]: I0320 08:34:39.691454 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 20 08:34:39 crc kubenswrapper[5136]: I0320 08:34:39.696354 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-k465q" Mar 20 08:34:39 crc kubenswrapper[5136]: I0320 08:34:39.746944 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 20 08:34:39 crc kubenswrapper[5136]: I0320 08:34:39.778910 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvb59\" (UniqueName: \"kubernetes.io/projected/0be956a9-9d09-4611-9ac2-47c5f7e43adb-kube-api-access-bvb59\") pod \"mariadb-client\" (UID: \"0be956a9-9d09-4611-9ac2-47c5f7e43adb\") " pod="openstack/mariadb-client" Mar 20 08:34:39 crc kubenswrapper[5136]: I0320 08:34:39.880521 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvb59\" (UniqueName: \"kubernetes.io/projected/0be956a9-9d09-4611-9ac2-47c5f7e43adb-kube-api-access-bvb59\") pod \"mariadb-client\" (UID: \"0be956a9-9d09-4611-9ac2-47c5f7e43adb\") " pod="openstack/mariadb-client" Mar 20 08:34:39 crc kubenswrapper[5136]: I0320 08:34:39.898152 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvb59\" (UniqueName: \"kubernetes.io/projected/0be956a9-9d09-4611-9ac2-47c5f7e43adb-kube-api-access-bvb59\") pod \"mariadb-client\" (UID: \"0be956a9-9d09-4611-9ac2-47c5f7e43adb\") " pod="openstack/mariadb-client" Mar 20 08:34:40 crc kubenswrapper[5136]: I0320 08:34:40.063031 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 20 08:34:40 crc kubenswrapper[5136]: I0320 08:34:40.578481 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 20 08:34:41 crc kubenswrapper[5136]: I0320 08:34:41.266605 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"0be956a9-9d09-4611-9ac2-47c5f7e43adb","Type":"ContainerStarted","Data":"e1e18e45f51189c0abfd018ef596ef0adb8f22c53a722e833c202dc76c200ff9"} Mar 20 08:34:45 crc kubenswrapper[5136]: I0320 08:34:45.318805 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"0be956a9-9d09-4611-9ac2-47c5f7e43adb","Type":"ContainerStarted","Data":"ffd0e84b1d9104eb6cd435f607386cbd9644b59a31416f3a0d6b09f458561109"} Mar 20 08:34:45 crc kubenswrapper[5136]: I0320 08:34:45.338178 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=2.405964972 podStartE2EDuration="6.338157059s" podCreationTimestamp="2026-03-20 08:34:39 +0000 UTC" firstStartedPulling="2026-03-20 08:34:40.588279311 +0000 UTC m=+6312.847590462" lastFinishedPulling="2026-03-20 08:34:44.520471378 +0000 UTC m=+6316.779782549" observedRunningTime="2026-03-20 08:34:45.334235057 +0000 UTC m=+6317.593546248" watchObservedRunningTime="2026-03-20 08:34:45.338157059 +0000 UTC m=+6317.597468220" Mar 20 08:34:47 crc kubenswrapper[5136]: I0320 08:34:47.397281 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:34:47 crc kubenswrapper[5136]: E0320 08:34:47.397839 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:34:58 crc kubenswrapper[5136]: I0320 08:34:58.402730 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:34:58 crc kubenswrapper[5136]: E0320 08:34:58.404399 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:35:00 crc kubenswrapper[5136]: I0320 08:35:00.337150 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Mar 20 08:35:00 crc kubenswrapper[5136]: I0320 08:35:00.338294 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="0be956a9-9d09-4611-9ac2-47c5f7e43adb" containerName="mariadb-client" containerID="cri-o://ffd0e84b1d9104eb6cd435f607386cbd9644b59a31416f3a0d6b09f458561109" gracePeriod=30 Mar 20 08:35:00 crc kubenswrapper[5136]: I0320 08:35:00.849807 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 20 08:35:01 crc kubenswrapper[5136]: I0320 08:35:01.017213 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvb59\" (UniqueName: \"kubernetes.io/projected/0be956a9-9d09-4611-9ac2-47c5f7e43adb-kube-api-access-bvb59\") pod \"0be956a9-9d09-4611-9ac2-47c5f7e43adb\" (UID: \"0be956a9-9d09-4611-9ac2-47c5f7e43adb\") " Mar 20 08:35:01 crc kubenswrapper[5136]: I0320 08:35:01.024087 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0be956a9-9d09-4611-9ac2-47c5f7e43adb-kube-api-access-bvb59" (OuterVolumeSpecName: "kube-api-access-bvb59") pod "0be956a9-9d09-4611-9ac2-47c5f7e43adb" (UID: "0be956a9-9d09-4611-9ac2-47c5f7e43adb"). InnerVolumeSpecName "kube-api-access-bvb59". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:35:01 crc kubenswrapper[5136]: I0320 08:35:01.119601 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvb59\" (UniqueName: \"kubernetes.io/projected/0be956a9-9d09-4611-9ac2-47c5f7e43adb-kube-api-access-bvb59\") on node \"crc\" DevicePath \"\"" Mar 20 08:35:01 crc kubenswrapper[5136]: I0320 08:35:01.444693 5136 generic.go:334] "Generic (PLEG): container finished" podID="0be956a9-9d09-4611-9ac2-47c5f7e43adb" containerID="ffd0e84b1d9104eb6cd435f607386cbd9644b59a31416f3a0d6b09f458561109" exitCode=143 Mar 20 08:35:01 crc kubenswrapper[5136]: I0320 08:35:01.444741 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"0be956a9-9d09-4611-9ac2-47c5f7e43adb","Type":"ContainerDied","Data":"ffd0e84b1d9104eb6cd435f607386cbd9644b59a31416f3a0d6b09f458561109"} Mar 20 08:35:01 crc kubenswrapper[5136]: I0320 08:35:01.444798 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" 
event={"ID":"0be956a9-9d09-4611-9ac2-47c5f7e43adb","Type":"ContainerDied","Data":"e1e18e45f51189c0abfd018ef596ef0adb8f22c53a722e833c202dc76c200ff9"} Mar 20 08:35:01 crc kubenswrapper[5136]: I0320 08:35:01.444752 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Mar 20 08:35:01 crc kubenswrapper[5136]: I0320 08:35:01.444839 5136 scope.go:117] "RemoveContainer" containerID="ffd0e84b1d9104eb6cd435f607386cbd9644b59a31416f3a0d6b09f458561109" Mar 20 08:35:01 crc kubenswrapper[5136]: I0320 08:35:01.496002 5136 scope.go:117] "RemoveContainer" containerID="ffd0e84b1d9104eb6cd435f607386cbd9644b59a31416f3a0d6b09f458561109" Mar 20 08:35:01 crc kubenswrapper[5136]: I0320 08:35:01.496046 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Mar 20 08:35:01 crc kubenswrapper[5136]: E0320 08:35:01.496547 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffd0e84b1d9104eb6cd435f607386cbd9644b59a31416f3a0d6b09f458561109\": container with ID starting with ffd0e84b1d9104eb6cd435f607386cbd9644b59a31416f3a0d6b09f458561109 not found: ID does not exist" containerID="ffd0e84b1d9104eb6cd435f607386cbd9644b59a31416f3a0d6b09f458561109" Mar 20 08:35:01 crc kubenswrapper[5136]: I0320 08:35:01.496623 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffd0e84b1d9104eb6cd435f607386cbd9644b59a31416f3a0d6b09f458561109"} err="failed to get container status \"ffd0e84b1d9104eb6cd435f607386cbd9644b59a31416f3a0d6b09f458561109\": rpc error: code = NotFound desc = could not find container \"ffd0e84b1d9104eb6cd435f607386cbd9644b59a31416f3a0d6b09f458561109\": container with ID starting with ffd0e84b1d9104eb6cd435f607386cbd9644b59a31416f3a0d6b09f458561109 not found: ID does not exist" Mar 20 08:35:01 crc kubenswrapper[5136]: I0320 08:35:01.501428 5136 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/mariadb-client"] Mar 20 08:35:02 crc kubenswrapper[5136]: I0320 08:35:02.408212 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0be956a9-9d09-4611-9ac2-47c5f7e43adb" path="/var/lib/kubelet/pods/0be956a9-9d09-4611-9ac2-47c5f7e43adb/volumes" Mar 20 08:35:12 crc kubenswrapper[5136]: I0320 08:35:12.397660 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:35:12 crc kubenswrapper[5136]: E0320 08:35:12.398473 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:35:23 crc kubenswrapper[5136]: I0320 08:35:23.396701 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:35:23 crc kubenswrapper[5136]: E0320 08:35:23.397195 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:35:37 crc kubenswrapper[5136]: I0320 08:35:37.397255 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:35:37 crc kubenswrapper[5136]: E0320 08:35:37.398434 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:35:50 crc kubenswrapper[5136]: I0320 08:35:50.396675 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:35:50 crc kubenswrapper[5136]: E0320 08:35:50.397613 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.141583 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566596-npcd2"] Mar 20 08:36:00 crc kubenswrapper[5136]: E0320 08:36:00.142497 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0be956a9-9d09-4611-9ac2-47c5f7e43adb" containerName="mariadb-client" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.142514 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0be956a9-9d09-4611-9ac2-47c5f7e43adb" containerName="mariadb-client" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.142700 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0be956a9-9d09-4611-9ac2-47c5f7e43adb" containerName="mariadb-client" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.143362 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566596-npcd2" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.146079 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.146419 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.146631 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.153082 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566596-npcd2"] Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.299400 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hsx8\" (UniqueName: \"kubernetes.io/projected/57673048-5103-4b04-8ef3-777cb1a33601-kube-api-access-5hsx8\") pod \"auto-csr-approver-29566596-npcd2\" (UID: \"57673048-5103-4b04-8ef3-777cb1a33601\") " pod="openshift-infra/auto-csr-approver-29566596-npcd2" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.400967 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hsx8\" (UniqueName: \"kubernetes.io/projected/57673048-5103-4b04-8ef3-777cb1a33601-kube-api-access-5hsx8\") pod \"auto-csr-approver-29566596-npcd2\" (UID: \"57673048-5103-4b04-8ef3-777cb1a33601\") " pod="openshift-infra/auto-csr-approver-29566596-npcd2" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.425622 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7mclm"] Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.427286 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7mclm"] Mar 20 08:36:00 crc 
kubenswrapper[5136]: I0320 08:36:00.427399 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.432559 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hsx8\" (UniqueName: \"kubernetes.io/projected/57673048-5103-4b04-8ef3-777cb1a33601-kube-api-access-5hsx8\") pod \"auto-csr-approver-29566596-npcd2\" (UID: \"57673048-5103-4b04-8ef3-777cb1a33601\") " pod="openshift-infra/auto-csr-approver-29566596-npcd2" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.465031 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566596-npcd2" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.502092 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/629a83e8-57da-42f6-b4f5-b7389a04f960-utilities\") pod \"redhat-marketplace-7mclm\" (UID: \"629a83e8-57da-42f6-b4f5-b7389a04f960\") " pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.502133 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqz79\" (UniqueName: \"kubernetes.io/projected/629a83e8-57da-42f6-b4f5-b7389a04f960-kube-api-access-xqz79\") pod \"redhat-marketplace-7mclm\" (UID: \"629a83e8-57da-42f6-b4f5-b7389a04f960\") " pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.502255 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/629a83e8-57da-42f6-b4f5-b7389a04f960-catalog-content\") pod \"redhat-marketplace-7mclm\" (UID: \"629a83e8-57da-42f6-b4f5-b7389a04f960\") " 
pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.603748 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/629a83e8-57da-42f6-b4f5-b7389a04f960-catalog-content\") pod \"redhat-marketplace-7mclm\" (UID: \"629a83e8-57da-42f6-b4f5-b7389a04f960\") " pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.603803 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/629a83e8-57da-42f6-b4f5-b7389a04f960-utilities\") pod \"redhat-marketplace-7mclm\" (UID: \"629a83e8-57da-42f6-b4f5-b7389a04f960\") " pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.603835 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqz79\" (UniqueName: \"kubernetes.io/projected/629a83e8-57da-42f6-b4f5-b7389a04f960-kube-api-access-xqz79\") pod \"redhat-marketplace-7mclm\" (UID: \"629a83e8-57da-42f6-b4f5-b7389a04f960\") " pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.604744 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/629a83e8-57da-42f6-b4f5-b7389a04f960-catalog-content\") pod \"redhat-marketplace-7mclm\" (UID: \"629a83e8-57da-42f6-b4f5-b7389a04f960\") " pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.604965 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/629a83e8-57da-42f6-b4f5-b7389a04f960-utilities\") pod \"redhat-marketplace-7mclm\" (UID: \"629a83e8-57da-42f6-b4f5-b7389a04f960\") " pod="openshift-marketplace/redhat-marketplace-7mclm" 
Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.622689 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqz79\" (UniqueName: \"kubernetes.io/projected/629a83e8-57da-42f6-b4f5-b7389a04f960-kube-api-access-xqz79\") pod \"redhat-marketplace-7mclm\" (UID: \"629a83e8-57da-42f6-b4f5-b7389a04f960\") " pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.779734 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.880118 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566596-npcd2"] Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.920636 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566596-npcd2" event={"ID":"57673048-5103-4b04-8ef3-777cb1a33601","Type":"ContainerStarted","Data":"25f19349296af7f41f743deb68f99919c31d1985b70314e7f931b4a5e5efad4c"} Mar 20 08:36:01 crc kubenswrapper[5136]: I0320 08:36:01.230511 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7mclm"] Mar 20 08:36:01 crc kubenswrapper[5136]: W0320 08:36:01.232795 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod629a83e8_57da_42f6_b4f5_b7389a04f960.slice/crio-44696757fc71a063d2a7087406b0d5d06b985688fd1ca0f48622a56669ea41e7 WatchSource:0}: Error finding container 44696757fc71a063d2a7087406b0d5d06b985688fd1ca0f48622a56669ea41e7: Status 404 returned error can't find the container with id 44696757fc71a063d2a7087406b0d5d06b985688fd1ca0f48622a56669ea41e7 Mar 20 08:36:01 crc kubenswrapper[5136]: I0320 08:36:01.928876 5136 generic.go:334] "Generic (PLEG): container finished" podID="629a83e8-57da-42f6-b4f5-b7389a04f960" 
containerID="c0ac90e3e6b7ed61f84f92ffaa9c25b7f6d6becc91e8e18922d43f39ad0aa9c8" exitCode=0 Mar 20 08:36:01 crc kubenswrapper[5136]: I0320 08:36:01.928928 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mclm" event={"ID":"629a83e8-57da-42f6-b4f5-b7389a04f960","Type":"ContainerDied","Data":"c0ac90e3e6b7ed61f84f92ffaa9c25b7f6d6becc91e8e18922d43f39ad0aa9c8"} Mar 20 08:36:01 crc kubenswrapper[5136]: I0320 08:36:01.929124 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mclm" event={"ID":"629a83e8-57da-42f6-b4f5-b7389a04f960","Type":"ContainerStarted","Data":"44696757fc71a063d2a7087406b0d5d06b985688fd1ca0f48622a56669ea41e7"} Mar 20 08:36:02 crc kubenswrapper[5136]: I0320 08:36:02.937536 5136 generic.go:334] "Generic (PLEG): container finished" podID="629a83e8-57da-42f6-b4f5-b7389a04f960" containerID="b630a65a6e3fb4516f452491782d3614b70dbdac3b68589fb07d278c4e61425d" exitCode=0 Mar 20 08:36:02 crc kubenswrapper[5136]: I0320 08:36:02.937590 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mclm" event={"ID":"629a83e8-57da-42f6-b4f5-b7389a04f960","Type":"ContainerDied","Data":"b630a65a6e3fb4516f452491782d3614b70dbdac3b68589fb07d278c4e61425d"} Mar 20 08:36:02 crc kubenswrapper[5136]: I0320 08:36:02.939370 5136 generic.go:334] "Generic (PLEG): container finished" podID="57673048-5103-4b04-8ef3-777cb1a33601" containerID="ef67e23c79ff3a82593be1acaca432453adbd354cb50446c81640884957e8ffe" exitCode=0 Mar 20 08:36:02 crc kubenswrapper[5136]: I0320 08:36:02.939411 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566596-npcd2" event={"ID":"57673048-5103-4b04-8ef3-777cb1a33601","Type":"ContainerDied","Data":"ef67e23c79ff3a82593be1acaca432453adbd354cb50446c81640884957e8ffe"} Mar 20 08:36:03 crc kubenswrapper[5136]: I0320 08:36:03.947337 5136 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-7mclm" event={"ID":"629a83e8-57da-42f6-b4f5-b7389a04f960","Type":"ContainerStarted","Data":"afbb7d6be8f99ea06340961404cf993557459181d7e79e92176b9d951b243cbc"} Mar 20 08:36:03 crc kubenswrapper[5136]: I0320 08:36:03.966070 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7mclm" podStartSLOduration=2.304934716 podStartE2EDuration="3.966052407s" podCreationTimestamp="2026-03-20 08:36:00 +0000 UTC" firstStartedPulling="2026-03-20 08:36:01.935974368 +0000 UTC m=+6394.195285519" lastFinishedPulling="2026-03-20 08:36:03.597092059 +0000 UTC m=+6395.856403210" observedRunningTime="2026-03-20 08:36:03.964950262 +0000 UTC m=+6396.224261423" watchObservedRunningTime="2026-03-20 08:36:03.966052407 +0000 UTC m=+6396.225363558" Mar 20 08:36:04 crc kubenswrapper[5136]: I0320 08:36:04.250996 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566596-npcd2" Mar 20 08:36:04 crc kubenswrapper[5136]: I0320 08:36:04.357047 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hsx8\" (UniqueName: \"kubernetes.io/projected/57673048-5103-4b04-8ef3-777cb1a33601-kube-api-access-5hsx8\") pod \"57673048-5103-4b04-8ef3-777cb1a33601\" (UID: \"57673048-5103-4b04-8ef3-777cb1a33601\") " Mar 20 08:36:04 crc kubenswrapper[5136]: I0320 08:36:04.363131 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57673048-5103-4b04-8ef3-777cb1a33601-kube-api-access-5hsx8" (OuterVolumeSpecName: "kube-api-access-5hsx8") pod "57673048-5103-4b04-8ef3-777cb1a33601" (UID: "57673048-5103-4b04-8ef3-777cb1a33601"). InnerVolumeSpecName "kube-api-access-5hsx8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:36:04 crc kubenswrapper[5136]: I0320 08:36:04.396504 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:36:04 crc kubenswrapper[5136]: E0320 08:36:04.396823 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:36:04 crc kubenswrapper[5136]: I0320 08:36:04.459448 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hsx8\" (UniqueName: \"kubernetes.io/projected/57673048-5103-4b04-8ef3-777cb1a33601-kube-api-access-5hsx8\") on node \"crc\" DevicePath \"\"" Mar 20 08:36:04 crc kubenswrapper[5136]: I0320 08:36:04.956956 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566596-npcd2" Mar 20 08:36:04 crc kubenswrapper[5136]: I0320 08:36:04.956973 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566596-npcd2" event={"ID":"57673048-5103-4b04-8ef3-777cb1a33601","Type":"ContainerDied","Data":"25f19349296af7f41f743deb68f99919c31d1985b70314e7f931b4a5e5efad4c"} Mar 20 08:36:04 crc kubenswrapper[5136]: I0320 08:36:04.956999 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25f19349296af7f41f743deb68f99919c31d1985b70314e7f931b4a5e5efad4c" Mar 20 08:36:05 crc kubenswrapper[5136]: I0320 08:36:05.319933 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566590-9pt5f"] Mar 20 08:36:05 crc kubenswrapper[5136]: I0320 08:36:05.325055 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566590-9pt5f"] Mar 20 08:36:06 crc kubenswrapper[5136]: I0320 08:36:06.413004 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51c8efa5-d30c-4426-ad6e-4aa0880c0563" path="/var/lib/kubelet/pods/51c8efa5-d30c-4426-ad6e-4aa0880c0563/volumes" Mar 20 08:36:10 crc kubenswrapper[5136]: I0320 08:36:10.780325 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:10 crc kubenswrapper[5136]: I0320 08:36:10.782790 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:10 crc kubenswrapper[5136]: I0320 08:36:10.857408 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:11 crc kubenswrapper[5136]: I0320 08:36:11.045871 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:11 crc 
kubenswrapper[5136]: I0320 08:36:11.095737 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7mclm"] Mar 20 08:36:13 crc kubenswrapper[5136]: I0320 08:36:13.012581 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7mclm" podUID="629a83e8-57da-42f6-b4f5-b7389a04f960" containerName="registry-server" containerID="cri-o://afbb7d6be8f99ea06340961404cf993557459181d7e79e92176b9d951b243cbc" gracePeriod=2 Mar 20 08:36:13 crc kubenswrapper[5136]: I0320 08:36:13.398595 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:13 crc kubenswrapper[5136]: I0320 08:36:13.496485 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqz79\" (UniqueName: \"kubernetes.io/projected/629a83e8-57da-42f6-b4f5-b7389a04f960-kube-api-access-xqz79\") pod \"629a83e8-57da-42f6-b4f5-b7389a04f960\" (UID: \"629a83e8-57da-42f6-b4f5-b7389a04f960\") " Mar 20 08:36:13 crc kubenswrapper[5136]: I0320 08:36:13.496655 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/629a83e8-57da-42f6-b4f5-b7389a04f960-catalog-content\") pod \"629a83e8-57da-42f6-b4f5-b7389a04f960\" (UID: \"629a83e8-57da-42f6-b4f5-b7389a04f960\") " Mar 20 08:36:13 crc kubenswrapper[5136]: I0320 08:36:13.498983 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/629a83e8-57da-42f6-b4f5-b7389a04f960-utilities\") pod \"629a83e8-57da-42f6-b4f5-b7389a04f960\" (UID: \"629a83e8-57da-42f6-b4f5-b7389a04f960\") " Mar 20 08:36:13 crc kubenswrapper[5136]: I0320 08:36:13.499925 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/629a83e8-57da-42f6-b4f5-b7389a04f960-utilities" (OuterVolumeSpecName: "utilities") pod "629a83e8-57da-42f6-b4f5-b7389a04f960" (UID: "629a83e8-57da-42f6-b4f5-b7389a04f960"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:36:13 crc kubenswrapper[5136]: I0320 08:36:13.503508 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/629a83e8-57da-42f6-b4f5-b7389a04f960-kube-api-access-xqz79" (OuterVolumeSpecName: "kube-api-access-xqz79") pod "629a83e8-57da-42f6-b4f5-b7389a04f960" (UID: "629a83e8-57da-42f6-b4f5-b7389a04f960"). InnerVolumeSpecName "kube-api-access-xqz79". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:36:13 crc kubenswrapper[5136]: I0320 08:36:13.601410 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqz79\" (UniqueName: \"kubernetes.io/projected/629a83e8-57da-42f6-b4f5-b7389a04f960-kube-api-access-xqz79\") on node \"crc\" DevicePath \"\"" Mar 20 08:36:13 crc kubenswrapper[5136]: I0320 08:36:13.601890 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/629a83e8-57da-42f6-b4f5-b7389a04f960-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:36:13 crc kubenswrapper[5136]: I0320 08:36:13.963987 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/629a83e8-57da-42f6-b4f5-b7389a04f960-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "629a83e8-57da-42f6-b4f5-b7389a04f960" (UID: "629a83e8-57da-42f6-b4f5-b7389a04f960"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.007399 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/629a83e8-57da-42f6-b4f5-b7389a04f960-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.023750 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.023751 5136 generic.go:334] "Generic (PLEG): container finished" podID="629a83e8-57da-42f6-b4f5-b7389a04f960" containerID="afbb7d6be8f99ea06340961404cf993557459181d7e79e92176b9d951b243cbc" exitCode=0 Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.023780 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mclm" event={"ID":"629a83e8-57da-42f6-b4f5-b7389a04f960","Type":"ContainerDied","Data":"afbb7d6be8f99ea06340961404cf993557459181d7e79e92176b9d951b243cbc"} Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.024872 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mclm" event={"ID":"629a83e8-57da-42f6-b4f5-b7389a04f960","Type":"ContainerDied","Data":"44696757fc71a063d2a7087406b0d5d06b985688fd1ca0f48622a56669ea41e7"} Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.024895 5136 scope.go:117] "RemoveContainer" containerID="afbb7d6be8f99ea06340961404cf993557459181d7e79e92176b9d951b243cbc" Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.043747 5136 scope.go:117] "RemoveContainer" containerID="b630a65a6e3fb4516f452491782d3614b70dbdac3b68589fb07d278c4e61425d" Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.059190 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7mclm"] Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 
08:36:14.064461 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7mclm"] Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.090386 5136 scope.go:117] "RemoveContainer" containerID="c0ac90e3e6b7ed61f84f92ffaa9c25b7f6d6becc91e8e18922d43f39ad0aa9c8" Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.113531 5136 scope.go:117] "RemoveContainer" containerID="afbb7d6be8f99ea06340961404cf993557459181d7e79e92176b9d951b243cbc" Mar 20 08:36:14 crc kubenswrapper[5136]: E0320 08:36:14.113989 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afbb7d6be8f99ea06340961404cf993557459181d7e79e92176b9d951b243cbc\": container with ID starting with afbb7d6be8f99ea06340961404cf993557459181d7e79e92176b9d951b243cbc not found: ID does not exist" containerID="afbb7d6be8f99ea06340961404cf993557459181d7e79e92176b9d951b243cbc" Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.114036 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afbb7d6be8f99ea06340961404cf993557459181d7e79e92176b9d951b243cbc"} err="failed to get container status \"afbb7d6be8f99ea06340961404cf993557459181d7e79e92176b9d951b243cbc\": rpc error: code = NotFound desc = could not find container \"afbb7d6be8f99ea06340961404cf993557459181d7e79e92176b9d951b243cbc\": container with ID starting with afbb7d6be8f99ea06340961404cf993557459181d7e79e92176b9d951b243cbc not found: ID does not exist" Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.114066 5136 scope.go:117] "RemoveContainer" containerID="b630a65a6e3fb4516f452491782d3614b70dbdac3b68589fb07d278c4e61425d" Mar 20 08:36:14 crc kubenswrapper[5136]: E0320 08:36:14.114495 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b630a65a6e3fb4516f452491782d3614b70dbdac3b68589fb07d278c4e61425d\": container with ID 
starting with b630a65a6e3fb4516f452491782d3614b70dbdac3b68589fb07d278c4e61425d not found: ID does not exist" containerID="b630a65a6e3fb4516f452491782d3614b70dbdac3b68589fb07d278c4e61425d" Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.114542 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b630a65a6e3fb4516f452491782d3614b70dbdac3b68589fb07d278c4e61425d"} err="failed to get container status \"b630a65a6e3fb4516f452491782d3614b70dbdac3b68589fb07d278c4e61425d\": rpc error: code = NotFound desc = could not find container \"b630a65a6e3fb4516f452491782d3614b70dbdac3b68589fb07d278c4e61425d\": container with ID starting with b630a65a6e3fb4516f452491782d3614b70dbdac3b68589fb07d278c4e61425d not found: ID does not exist" Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.114577 5136 scope.go:117] "RemoveContainer" containerID="c0ac90e3e6b7ed61f84f92ffaa9c25b7f6d6becc91e8e18922d43f39ad0aa9c8" Mar 20 08:36:14 crc kubenswrapper[5136]: E0320 08:36:14.114857 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0ac90e3e6b7ed61f84f92ffaa9c25b7f6d6becc91e8e18922d43f39ad0aa9c8\": container with ID starting with c0ac90e3e6b7ed61f84f92ffaa9c25b7f6d6becc91e8e18922d43f39ad0aa9c8 not found: ID does not exist" containerID="c0ac90e3e6b7ed61f84f92ffaa9c25b7f6d6becc91e8e18922d43f39ad0aa9c8" Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.114885 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0ac90e3e6b7ed61f84f92ffaa9c25b7f6d6becc91e8e18922d43f39ad0aa9c8"} err="failed to get container status \"c0ac90e3e6b7ed61f84f92ffaa9c25b7f6d6becc91e8e18922d43f39ad0aa9c8\": rpc error: code = NotFound desc = could not find container \"c0ac90e3e6b7ed61f84f92ffaa9c25b7f6d6becc91e8e18922d43f39ad0aa9c8\": container with ID starting with c0ac90e3e6b7ed61f84f92ffaa9c25b7f6d6becc91e8e18922d43f39ad0aa9c8 not found: 
ID does not exist" Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.408959 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="629a83e8-57da-42f6-b4f5-b7389a04f960" path="/var/lib/kubelet/pods/629a83e8-57da-42f6-b4f5-b7389a04f960/volumes" Mar 20 08:36:19 crc kubenswrapper[5136]: I0320 08:36:19.396870 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:36:20 crc kubenswrapper[5136]: I0320 08:36:20.066167 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"fd37942a39d253b6cfedd4ab695ecb8599827a8a29ad0a2c3607795c58193271"} Mar 20 08:36:33 crc kubenswrapper[5136]: I0320 08:36:33.808884 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5wtjq"] Mar 20 08:36:33 crc kubenswrapper[5136]: E0320 08:36:33.810039 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629a83e8-57da-42f6-b4f5-b7389a04f960" containerName="extract-content" Mar 20 08:36:33 crc kubenswrapper[5136]: I0320 08:36:33.810067 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="629a83e8-57da-42f6-b4f5-b7389a04f960" containerName="extract-content" Mar 20 08:36:33 crc kubenswrapper[5136]: E0320 08:36:33.810093 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57673048-5103-4b04-8ef3-777cb1a33601" containerName="oc" Mar 20 08:36:33 crc kubenswrapper[5136]: I0320 08:36:33.810106 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="57673048-5103-4b04-8ef3-777cb1a33601" containerName="oc" Mar 20 08:36:33 crc kubenswrapper[5136]: E0320 08:36:33.810154 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629a83e8-57da-42f6-b4f5-b7389a04f960" containerName="registry-server" Mar 20 08:36:33 crc kubenswrapper[5136]: I0320 08:36:33.810169 5136 
state_mem.go:107] "Deleted CPUSet assignment" podUID="629a83e8-57da-42f6-b4f5-b7389a04f960" containerName="registry-server" Mar 20 08:36:33 crc kubenswrapper[5136]: E0320 08:36:33.810188 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629a83e8-57da-42f6-b4f5-b7389a04f960" containerName="extract-utilities" Mar 20 08:36:33 crc kubenswrapper[5136]: I0320 08:36:33.810201 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="629a83e8-57da-42f6-b4f5-b7389a04f960" containerName="extract-utilities" Mar 20 08:36:33 crc kubenswrapper[5136]: I0320 08:36:33.810507 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="629a83e8-57da-42f6-b4f5-b7389a04f960" containerName="registry-server" Mar 20 08:36:33 crc kubenswrapper[5136]: I0320 08:36:33.810545 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="57673048-5103-4b04-8ef3-777cb1a33601" containerName="oc" Mar 20 08:36:33 crc kubenswrapper[5136]: I0320 08:36:33.812510 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:33 crc kubenswrapper[5136]: I0320 08:36:33.819228 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5wtjq"] Mar 20 08:36:33 crc kubenswrapper[5136]: I0320 08:36:33.902841 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-catalog-content\") pod \"community-operators-5wtjq\" (UID: \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\") " pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:33 crc kubenswrapper[5136]: I0320 08:36:33.902904 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkvfj\" (UniqueName: \"kubernetes.io/projected/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-kube-api-access-mkvfj\") pod \"community-operators-5wtjq\" (UID: \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\") " pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:33 crc kubenswrapper[5136]: I0320 08:36:33.902944 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-utilities\") pod \"community-operators-5wtjq\" (UID: \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\") " pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:34 crc kubenswrapper[5136]: I0320 08:36:34.003994 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-catalog-content\") pod \"community-operators-5wtjq\" (UID: \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\") " pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:34 crc kubenswrapper[5136]: I0320 08:36:34.004040 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mkvfj\" (UniqueName: \"kubernetes.io/projected/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-kube-api-access-mkvfj\") pod \"community-operators-5wtjq\" (UID: \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\") " pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:34 crc kubenswrapper[5136]: I0320 08:36:34.004073 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-utilities\") pod \"community-operators-5wtjq\" (UID: \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\") " pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:34 crc kubenswrapper[5136]: I0320 08:36:34.004642 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-utilities\") pod \"community-operators-5wtjq\" (UID: \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\") " pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:34 crc kubenswrapper[5136]: I0320 08:36:34.004653 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-catalog-content\") pod \"community-operators-5wtjq\" (UID: \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\") " pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:34 crc kubenswrapper[5136]: I0320 08:36:34.023085 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkvfj\" (UniqueName: \"kubernetes.io/projected/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-kube-api-access-mkvfj\") pod \"community-operators-5wtjq\" (UID: \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\") " pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:34 crc kubenswrapper[5136]: I0320 08:36:34.169553 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:34 crc kubenswrapper[5136]: I0320 08:36:34.662065 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5wtjq"] Mar 20 08:36:35 crc kubenswrapper[5136]: I0320 08:36:35.176667 5136 generic.go:334] "Generic (PLEG): container finished" podID="6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" containerID="6c3988e24586b17da796a7ef157d6d1682c289a56b9f4d8591c3323366fe474a" exitCode=0 Mar 20 08:36:35 crc kubenswrapper[5136]: I0320 08:36:35.176713 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wtjq" event={"ID":"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7","Type":"ContainerDied","Data":"6c3988e24586b17da796a7ef157d6d1682c289a56b9f4d8591c3323366fe474a"} Mar 20 08:36:35 crc kubenswrapper[5136]: I0320 08:36:35.176738 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wtjq" event={"ID":"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7","Type":"ContainerStarted","Data":"e704de3f8b18e2cfcfbf126c8879ca20f897587bc167a14a1dfd40dcd96c29db"} Mar 20 08:36:36 crc kubenswrapper[5136]: I0320 08:36:36.190659 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wtjq" event={"ID":"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7","Type":"ContainerStarted","Data":"06b1f730aa1414898c07cbc6c51c6d45af0404b9bfb57fe9ffcd166953c77231"} Mar 20 08:36:37 crc kubenswrapper[5136]: I0320 08:36:37.199874 5136 generic.go:334] "Generic (PLEG): container finished" podID="6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" containerID="06b1f730aa1414898c07cbc6c51c6d45af0404b9bfb57fe9ffcd166953c77231" exitCode=0 Mar 20 08:36:37 crc kubenswrapper[5136]: I0320 08:36:37.200013 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wtjq" 
event={"ID":"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7","Type":"ContainerDied","Data":"06b1f730aa1414898c07cbc6c51c6d45af0404b9bfb57fe9ffcd166953c77231"} Mar 20 08:36:37 crc kubenswrapper[5136]: I0320 08:36:37.821211 5136 scope.go:117] "RemoveContainer" containerID="d67dfe1060ac0ac0db1818a3ab60ffceda0123c6ffe3b59b89e0430a3ae809a2" Mar 20 08:36:37 crc kubenswrapper[5136]: I0320 08:36:37.898951 5136 scope.go:117] "RemoveContainer" containerID="a84d841fa14dbb7d163049ae2a42d3d241fc2e9ace22731699a4238f410674cb" Mar 20 08:36:38 crc kubenswrapper[5136]: I0320 08:36:38.211088 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wtjq" event={"ID":"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7","Type":"ContainerStarted","Data":"0b9725a1a007e9e06dc65c9de35ec11e1988db24ddf19ad259d83257a7e02b57"} Mar 20 08:36:38 crc kubenswrapper[5136]: I0320 08:36:38.254560 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5wtjq" podStartSLOduration=2.691861855 podStartE2EDuration="5.254539798s" podCreationTimestamp="2026-03-20 08:36:33 +0000 UTC" firstStartedPulling="2026-03-20 08:36:35.179042524 +0000 UTC m=+6427.438353675" lastFinishedPulling="2026-03-20 08:36:37.741720427 +0000 UTC m=+6430.001031618" observedRunningTime="2026-03-20 08:36:38.243482235 +0000 UTC m=+6430.502793416" watchObservedRunningTime="2026-03-20 08:36:38.254539798 +0000 UTC m=+6430.513850959" Mar 20 08:36:44 crc kubenswrapper[5136]: I0320 08:36:44.170214 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:44 crc kubenswrapper[5136]: I0320 08:36:44.170765 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:44 crc kubenswrapper[5136]: I0320 08:36:44.259538 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:44 crc kubenswrapper[5136]: I0320 08:36:44.301848 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:47 crc kubenswrapper[5136]: I0320 08:36:47.779888 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5wtjq"] Mar 20 08:36:47 crc kubenswrapper[5136]: I0320 08:36:47.780342 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5wtjq" podUID="6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" containerName="registry-server" containerID="cri-o://0b9725a1a007e9e06dc65c9de35ec11e1988db24ddf19ad259d83257a7e02b57" gracePeriod=2 Mar 20 08:36:48 crc kubenswrapper[5136]: I0320 08:36:48.299261 5136 generic.go:334] "Generic (PLEG): container finished" podID="6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" containerID="0b9725a1a007e9e06dc65c9de35ec11e1988db24ddf19ad259d83257a7e02b57" exitCode=0 Mar 20 08:36:48 crc kubenswrapper[5136]: I0320 08:36:48.299625 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wtjq" event={"ID":"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7","Type":"ContainerDied","Data":"0b9725a1a007e9e06dc65c9de35ec11e1988db24ddf19ad259d83257a7e02b57"} Mar 20 08:36:48 crc kubenswrapper[5136]: I0320 08:36:48.466992 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:48 crc kubenswrapper[5136]: I0320 08:36:48.538954 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-utilities\") pod \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\" (UID: \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\") " Mar 20 08:36:48 crc kubenswrapper[5136]: I0320 08:36:48.539016 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkvfj\" (UniqueName: \"kubernetes.io/projected/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-kube-api-access-mkvfj\") pod \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\" (UID: \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\") " Mar 20 08:36:48 crc kubenswrapper[5136]: I0320 08:36:48.539151 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-catalog-content\") pod \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\" (UID: \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\") " Mar 20 08:36:48 crc kubenswrapper[5136]: I0320 08:36:48.539679 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-utilities" (OuterVolumeSpecName: "utilities") pod "6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" (UID: "6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:36:48 crc kubenswrapper[5136]: I0320 08:36:48.545488 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-kube-api-access-mkvfj" (OuterVolumeSpecName: "kube-api-access-mkvfj") pod "6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" (UID: "6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7"). InnerVolumeSpecName "kube-api-access-mkvfj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:36:48 crc kubenswrapper[5136]: I0320 08:36:48.587766 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" (UID: "6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:36:48 crc kubenswrapper[5136]: I0320 08:36:48.640291 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:36:48 crc kubenswrapper[5136]: I0320 08:36:48.642785 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:36:48 crc kubenswrapper[5136]: I0320 08:36:48.642807 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkvfj\" (UniqueName: \"kubernetes.io/projected/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-kube-api-access-mkvfj\") on node \"crc\" DevicePath \"\"" Mar 20 08:36:49 crc kubenswrapper[5136]: I0320 08:36:49.309774 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wtjq" event={"ID":"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7","Type":"ContainerDied","Data":"e704de3f8b18e2cfcfbf126c8879ca20f897587bc167a14a1dfd40dcd96c29db"} Mar 20 08:36:49 crc kubenswrapper[5136]: I0320 08:36:49.309973 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:49 crc kubenswrapper[5136]: I0320 08:36:49.310185 5136 scope.go:117] "RemoveContainer" containerID="0b9725a1a007e9e06dc65c9de35ec11e1988db24ddf19ad259d83257a7e02b57" Mar 20 08:36:49 crc kubenswrapper[5136]: I0320 08:36:49.333103 5136 scope.go:117] "RemoveContainer" containerID="06b1f730aa1414898c07cbc6c51c6d45af0404b9bfb57fe9ffcd166953c77231" Mar 20 08:36:49 crc kubenswrapper[5136]: I0320 08:36:49.358006 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5wtjq"] Mar 20 08:36:49 crc kubenswrapper[5136]: I0320 08:36:49.361595 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5wtjq"] Mar 20 08:36:49 crc kubenswrapper[5136]: I0320 08:36:49.375912 5136 scope.go:117] "RemoveContainer" containerID="6c3988e24586b17da796a7ef157d6d1682c289a56b9f4d8591c3323366fe474a" Mar 20 08:36:50 crc kubenswrapper[5136]: I0320 08:36:50.410548 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" path="/var/lib/kubelet/pods/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7/volumes" Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.141930 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566598-9zdgt"] Mar 20 08:38:00 crc kubenswrapper[5136]: E0320 08:38:00.142893 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" containerName="registry-server" Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.142907 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" containerName="registry-server" Mar 20 08:38:00 crc kubenswrapper[5136]: E0320 08:38:00.142934 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" containerName="extract-content" Mar 20 
08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.142940 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" containerName="extract-content" Mar 20 08:38:00 crc kubenswrapper[5136]: E0320 08:38:00.142974 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" containerName="extract-utilities" Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.142982 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" containerName="extract-utilities" Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.143165 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" containerName="registry-server" Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.143977 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566598-9zdgt" Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.146450 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.146713 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.154742 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.162619 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566598-9zdgt"] Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.281417 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w4np\" (UniqueName: \"kubernetes.io/projected/9379207f-99bf-4561-8979-f27be8f510ac-kube-api-access-7w4np\") pod 
\"auto-csr-approver-29566598-9zdgt\" (UID: \"9379207f-99bf-4561-8979-f27be8f510ac\") " pod="openshift-infra/auto-csr-approver-29566598-9zdgt" Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.382585 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w4np\" (UniqueName: \"kubernetes.io/projected/9379207f-99bf-4561-8979-f27be8f510ac-kube-api-access-7w4np\") pod \"auto-csr-approver-29566598-9zdgt\" (UID: \"9379207f-99bf-4561-8979-f27be8f510ac\") " pod="openshift-infra/auto-csr-approver-29566598-9zdgt" Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.400670 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w4np\" (UniqueName: \"kubernetes.io/projected/9379207f-99bf-4561-8979-f27be8f510ac-kube-api-access-7w4np\") pod \"auto-csr-approver-29566598-9zdgt\" (UID: \"9379207f-99bf-4561-8979-f27be8f510ac\") " pod="openshift-infra/auto-csr-approver-29566598-9zdgt" Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.489545 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566598-9zdgt" Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.904851 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566598-9zdgt"] Mar 20 08:38:00 crc kubenswrapper[5136]: W0320 08:38:00.911872 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9379207f_99bf_4561_8979_f27be8f510ac.slice/crio-13f31f8a2f9dfaeb1bfc59a061ff92c88ca3411a1e165d7ebd7d4299ce5754c6 WatchSource:0}: Error finding container 13f31f8a2f9dfaeb1bfc59a061ff92c88ca3411a1e165d7ebd7d4299ce5754c6: Status 404 returned error can't find the container with id 13f31f8a2f9dfaeb1bfc59a061ff92c88ca3411a1e165d7ebd7d4299ce5754c6 Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.915855 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 08:38:01 crc kubenswrapper[5136]: I0320 08:38:01.877248 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566598-9zdgt" event={"ID":"9379207f-99bf-4561-8979-f27be8f510ac","Type":"ContainerStarted","Data":"13f31f8a2f9dfaeb1bfc59a061ff92c88ca3411a1e165d7ebd7d4299ce5754c6"} Mar 20 08:38:02 crc kubenswrapper[5136]: I0320 08:38:02.889168 5136 generic.go:334] "Generic (PLEG): container finished" podID="9379207f-99bf-4561-8979-f27be8f510ac" containerID="e29edc0f4375ac391060cb753f50bdb9915298f531076d0c17e85a24815a777f" exitCode=0 Mar 20 08:38:02 crc kubenswrapper[5136]: I0320 08:38:02.889211 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566598-9zdgt" event={"ID":"9379207f-99bf-4561-8979-f27be8f510ac","Type":"ContainerDied","Data":"e29edc0f4375ac391060cb753f50bdb9915298f531076d0c17e85a24815a777f"} Mar 20 08:38:04 crc kubenswrapper[5136]: I0320 08:38:04.237051 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566598-9zdgt" Mar 20 08:38:04 crc kubenswrapper[5136]: I0320 08:38:04.339321 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w4np\" (UniqueName: \"kubernetes.io/projected/9379207f-99bf-4561-8979-f27be8f510ac-kube-api-access-7w4np\") pod \"9379207f-99bf-4561-8979-f27be8f510ac\" (UID: \"9379207f-99bf-4561-8979-f27be8f510ac\") " Mar 20 08:38:04 crc kubenswrapper[5136]: I0320 08:38:04.346995 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9379207f-99bf-4561-8979-f27be8f510ac-kube-api-access-7w4np" (OuterVolumeSpecName: "kube-api-access-7w4np") pod "9379207f-99bf-4561-8979-f27be8f510ac" (UID: "9379207f-99bf-4561-8979-f27be8f510ac"). InnerVolumeSpecName "kube-api-access-7w4np". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:38:04 crc kubenswrapper[5136]: I0320 08:38:04.441463 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w4np\" (UniqueName: \"kubernetes.io/projected/9379207f-99bf-4561-8979-f27be8f510ac-kube-api-access-7w4np\") on node \"crc\" DevicePath \"\"" Mar 20 08:38:04 crc kubenswrapper[5136]: I0320 08:38:04.908375 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566598-9zdgt" event={"ID":"9379207f-99bf-4561-8979-f27be8f510ac","Type":"ContainerDied","Data":"13f31f8a2f9dfaeb1bfc59a061ff92c88ca3411a1e165d7ebd7d4299ce5754c6"} Mar 20 08:38:04 crc kubenswrapper[5136]: I0320 08:38:04.908415 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13f31f8a2f9dfaeb1bfc59a061ff92c88ca3411a1e165d7ebd7d4299ce5754c6" Mar 20 08:38:04 crc kubenswrapper[5136]: I0320 08:38:04.908458 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566598-9zdgt" Mar 20 08:38:05 crc kubenswrapper[5136]: I0320 08:38:05.306004 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566592-gnh7d"] Mar 20 08:38:05 crc kubenswrapper[5136]: I0320 08:38:05.311577 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566592-gnh7d"] Mar 20 08:38:06 crc kubenswrapper[5136]: I0320 08:38:06.405370 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e2af690-159e-4938-b0b0-35e042cc8393" path="/var/lib/kubelet/pods/2e2af690-159e-4938-b0b0-35e042cc8393/volumes" Mar 20 08:38:38 crc kubenswrapper[5136]: I0320 08:38:38.046969 5136 scope.go:117] "RemoveContainer" containerID="6f1e73339774fdb849b7c14ca46c4e23637ecc11d975480f8593fb668065f9a0" Mar 20 08:38:45 crc kubenswrapper[5136]: I0320 08:38:45.821807 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:38:45 crc kubenswrapper[5136]: I0320 08:38:45.824436 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:39:15 crc kubenswrapper[5136]: I0320 08:39:15.821972 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:39:15 crc kubenswrapper[5136]: 
I0320 08:39:15.822552 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:39:38 crc kubenswrapper[5136]: I0320 08:39:38.115303 5136 scope.go:117] "RemoveContainer" containerID="513bfb357219a2477e16b62515cf153229315c47b91f7217842266ad20b33891" Mar 20 08:39:45 crc kubenswrapper[5136]: I0320 08:39:45.821528 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:39:45 crc kubenswrapper[5136]: I0320 08:39:45.822126 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:39:45 crc kubenswrapper[5136]: I0320 08:39:45.822175 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 08:39:45 crc kubenswrapper[5136]: I0320 08:39:45.823058 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fd37942a39d253b6cfedd4ab695ecb8599827a8a29ad0a2c3607795c58193271"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 08:39:45 crc kubenswrapper[5136]: I0320 08:39:45.823114 5136 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://fd37942a39d253b6cfedd4ab695ecb8599827a8a29ad0a2c3607795c58193271" gracePeriod=600 Mar 20 08:39:46 crc kubenswrapper[5136]: I0320 08:39:46.744634 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="fd37942a39d253b6cfedd4ab695ecb8599827a8a29ad0a2c3607795c58193271" exitCode=0 Mar 20 08:39:46 crc kubenswrapper[5136]: I0320 08:39:46.744674 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"fd37942a39d253b6cfedd4ab695ecb8599827a8a29ad0a2c3607795c58193271"} Mar 20 08:39:46 crc kubenswrapper[5136]: I0320 08:39:46.745117 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a"} Mar 20 08:39:46 crc kubenswrapper[5136]: I0320 08:39:46.745152 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:40:00 crc kubenswrapper[5136]: I0320 08:40:00.145950 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566600-2m6nn"] Mar 20 08:40:00 crc kubenswrapper[5136]: E0320 08:40:00.147013 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9379207f-99bf-4561-8979-f27be8f510ac" containerName="oc" Mar 20 08:40:00 crc kubenswrapper[5136]: I0320 08:40:00.147025 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9379207f-99bf-4561-8979-f27be8f510ac" containerName="oc" Mar 20 08:40:00 crc 
kubenswrapper[5136]: I0320 08:40:00.147185 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="9379207f-99bf-4561-8979-f27be8f510ac" containerName="oc" Mar 20 08:40:00 crc kubenswrapper[5136]: I0320 08:40:00.147665 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566600-2m6nn" Mar 20 08:40:00 crc kubenswrapper[5136]: I0320 08:40:00.150152 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:40:00 crc kubenswrapper[5136]: I0320 08:40:00.150381 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:40:00 crc kubenswrapper[5136]: I0320 08:40:00.150505 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:40:00 crc kubenswrapper[5136]: I0320 08:40:00.165437 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566600-2m6nn"] Mar 20 08:40:00 crc kubenswrapper[5136]: I0320 08:40:00.239020 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qcnw\" (UniqueName: \"kubernetes.io/projected/3480cf66-9f91-4ce8-924c-0f730044c0de-kube-api-access-6qcnw\") pod \"auto-csr-approver-29566600-2m6nn\" (UID: \"3480cf66-9f91-4ce8-924c-0f730044c0de\") " pod="openshift-infra/auto-csr-approver-29566600-2m6nn" Mar 20 08:40:00 crc kubenswrapper[5136]: I0320 08:40:00.340563 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qcnw\" (UniqueName: \"kubernetes.io/projected/3480cf66-9f91-4ce8-924c-0f730044c0de-kube-api-access-6qcnw\") pod \"auto-csr-approver-29566600-2m6nn\" (UID: \"3480cf66-9f91-4ce8-924c-0f730044c0de\") " pod="openshift-infra/auto-csr-approver-29566600-2m6nn" Mar 20 08:40:00 crc kubenswrapper[5136]: I0320 08:40:00.361705 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qcnw\" (UniqueName: \"kubernetes.io/projected/3480cf66-9f91-4ce8-924c-0f730044c0de-kube-api-access-6qcnw\") pod \"auto-csr-approver-29566600-2m6nn\" (UID: \"3480cf66-9f91-4ce8-924c-0f730044c0de\") " pod="openshift-infra/auto-csr-approver-29566600-2m6nn" Mar 20 08:40:00 crc kubenswrapper[5136]: I0320 08:40:00.512171 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566600-2m6nn" Mar 20 08:40:00 crc kubenswrapper[5136]: I0320 08:40:00.787414 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566600-2m6nn"] Mar 20 08:40:00 crc kubenswrapper[5136]: I0320 08:40:00.887647 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566600-2m6nn" event={"ID":"3480cf66-9f91-4ce8-924c-0f730044c0de","Type":"ContainerStarted","Data":"ddc706e6d31258b0f9c34cf6b41b2730d0ba2080de3293fcc901fa16248cc62c"} Mar 20 08:40:02 crc kubenswrapper[5136]: I0320 08:40:02.905483 5136 generic.go:334] "Generic (PLEG): container finished" podID="3480cf66-9f91-4ce8-924c-0f730044c0de" containerID="ada9ce7b7b306f2b5dbbf312318f5ac5adc2a593ce372df15119878b742a8edb" exitCode=0 Mar 20 08:40:02 crc kubenswrapper[5136]: I0320 08:40:02.905544 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566600-2m6nn" event={"ID":"3480cf66-9f91-4ce8-924c-0f730044c0de","Type":"ContainerDied","Data":"ada9ce7b7b306f2b5dbbf312318f5ac5adc2a593ce372df15119878b742a8edb"} Mar 20 08:40:04 crc kubenswrapper[5136]: I0320 08:40:04.209297 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566600-2m6nn" Mar 20 08:40:04 crc kubenswrapper[5136]: I0320 08:40:04.303249 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qcnw\" (UniqueName: \"kubernetes.io/projected/3480cf66-9f91-4ce8-924c-0f730044c0de-kube-api-access-6qcnw\") pod \"3480cf66-9f91-4ce8-924c-0f730044c0de\" (UID: \"3480cf66-9f91-4ce8-924c-0f730044c0de\") " Mar 20 08:40:04 crc kubenswrapper[5136]: I0320 08:40:04.308848 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3480cf66-9f91-4ce8-924c-0f730044c0de-kube-api-access-6qcnw" (OuterVolumeSpecName: "kube-api-access-6qcnw") pod "3480cf66-9f91-4ce8-924c-0f730044c0de" (UID: "3480cf66-9f91-4ce8-924c-0f730044c0de"). InnerVolumeSpecName "kube-api-access-6qcnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:40:04 crc kubenswrapper[5136]: I0320 08:40:04.404881 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qcnw\" (UniqueName: \"kubernetes.io/projected/3480cf66-9f91-4ce8-924c-0f730044c0de-kube-api-access-6qcnw\") on node \"crc\" DevicePath \"\"" Mar 20 08:40:04 crc kubenswrapper[5136]: I0320 08:40:04.935831 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566600-2m6nn" event={"ID":"3480cf66-9f91-4ce8-924c-0f730044c0de","Type":"ContainerDied","Data":"ddc706e6d31258b0f9c34cf6b41b2730d0ba2080de3293fcc901fa16248cc62c"} Mar 20 08:40:04 crc kubenswrapper[5136]: I0320 08:40:04.935868 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddc706e6d31258b0f9c34cf6b41b2730d0ba2080de3293fcc901fa16248cc62c" Mar 20 08:40:04 crc kubenswrapper[5136]: I0320 08:40:04.935994 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566600-2m6nn" Mar 20 08:40:05 crc kubenswrapper[5136]: I0320 08:40:05.309379 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566594-shcj9"] Mar 20 08:40:05 crc kubenswrapper[5136]: I0320 08:40:05.319753 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566594-shcj9"] Mar 20 08:40:06 crc kubenswrapper[5136]: I0320 08:40:06.405125 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="948b6ddf-f1f2-46ef-9d9f-1e07c71f593e" path="/var/lib/kubelet/pods/948b6ddf-f1f2-46ef-9d9f-1e07c71f593e/volumes" Mar 20 08:40:38 crc kubenswrapper[5136]: I0320 08:40:38.164481 5136 scope.go:117] "RemoveContainer" containerID="46b6ff50442d3c65cf954ac428d83a34b6951bd632d4aff5243b9fdb9f413511" Mar 20 08:42:00 crc kubenswrapper[5136]: I0320 08:42:00.132830 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566602-fmfs9"] Mar 20 08:42:00 crc kubenswrapper[5136]: E0320 08:42:00.153967 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3480cf66-9f91-4ce8-924c-0f730044c0de" containerName="oc" Mar 20 08:42:00 crc kubenswrapper[5136]: I0320 08:42:00.154010 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="3480cf66-9f91-4ce8-924c-0f730044c0de" containerName="oc" Mar 20 08:42:00 crc kubenswrapper[5136]: I0320 08:42:00.154486 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="3480cf66-9f91-4ce8-924c-0f730044c0de" containerName="oc" Mar 20 08:42:00 crc kubenswrapper[5136]: I0320 08:42:00.155202 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566602-fmfs9" Mar 20 08:42:00 crc kubenswrapper[5136]: I0320 08:42:00.159235 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:42:00 crc kubenswrapper[5136]: I0320 08:42:00.159381 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:42:00 crc kubenswrapper[5136]: I0320 08:42:00.159434 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:42:00 crc kubenswrapper[5136]: I0320 08:42:00.166625 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566602-fmfs9"] Mar 20 08:42:00 crc kubenswrapper[5136]: I0320 08:42:00.234244 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq8zq\" (UniqueName: \"kubernetes.io/projected/78e36980-52e2-4a59-9374-b2f1150fcb20-kube-api-access-sq8zq\") pod \"auto-csr-approver-29566602-fmfs9\" (UID: \"78e36980-52e2-4a59-9374-b2f1150fcb20\") " pod="openshift-infra/auto-csr-approver-29566602-fmfs9" Mar 20 08:42:00 crc kubenswrapper[5136]: I0320 08:42:00.335573 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq8zq\" (UniqueName: \"kubernetes.io/projected/78e36980-52e2-4a59-9374-b2f1150fcb20-kube-api-access-sq8zq\") pod \"auto-csr-approver-29566602-fmfs9\" (UID: \"78e36980-52e2-4a59-9374-b2f1150fcb20\") " pod="openshift-infra/auto-csr-approver-29566602-fmfs9" Mar 20 08:42:00 crc kubenswrapper[5136]: I0320 08:42:00.365045 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq8zq\" (UniqueName: \"kubernetes.io/projected/78e36980-52e2-4a59-9374-b2f1150fcb20-kube-api-access-sq8zq\") pod \"auto-csr-approver-29566602-fmfs9\" (UID: \"78e36980-52e2-4a59-9374-b2f1150fcb20\") " 
pod="openshift-infra/auto-csr-approver-29566602-fmfs9" Mar 20 08:42:00 crc kubenswrapper[5136]: I0320 08:42:00.485376 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566602-fmfs9" Mar 20 08:42:00 crc kubenswrapper[5136]: I0320 08:42:00.935865 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566602-fmfs9"] Mar 20 08:42:01 crc kubenswrapper[5136]: I0320 08:42:01.935077 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566602-fmfs9" event={"ID":"78e36980-52e2-4a59-9374-b2f1150fcb20","Type":"ContainerStarted","Data":"bd20f8c76dd9d09f073435747b85590383d5c4e75188df4f3da12fea34405641"} Mar 20 08:42:02 crc kubenswrapper[5136]: I0320 08:42:02.949554 5136 generic.go:334] "Generic (PLEG): container finished" podID="78e36980-52e2-4a59-9374-b2f1150fcb20" containerID="4d16220fc9b1db88fdb1fbb167050afb3f65c942a2a02caf4ba1ec80a2858ccc" exitCode=0 Mar 20 08:42:02 crc kubenswrapper[5136]: I0320 08:42:02.949656 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566602-fmfs9" event={"ID":"78e36980-52e2-4a59-9374-b2f1150fcb20","Type":"ContainerDied","Data":"4d16220fc9b1db88fdb1fbb167050afb3f65c942a2a02caf4ba1ec80a2858ccc"} Mar 20 08:42:04 crc kubenswrapper[5136]: I0320 08:42:04.329509 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566602-fmfs9" Mar 20 08:42:04 crc kubenswrapper[5136]: I0320 08:42:04.515426 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq8zq\" (UniqueName: \"kubernetes.io/projected/78e36980-52e2-4a59-9374-b2f1150fcb20-kube-api-access-sq8zq\") pod \"78e36980-52e2-4a59-9374-b2f1150fcb20\" (UID: \"78e36980-52e2-4a59-9374-b2f1150fcb20\") " Mar 20 08:42:04 crc kubenswrapper[5136]: I0320 08:42:04.522209 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78e36980-52e2-4a59-9374-b2f1150fcb20-kube-api-access-sq8zq" (OuterVolumeSpecName: "kube-api-access-sq8zq") pod "78e36980-52e2-4a59-9374-b2f1150fcb20" (UID: "78e36980-52e2-4a59-9374-b2f1150fcb20"). InnerVolumeSpecName "kube-api-access-sq8zq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:42:04 crc kubenswrapper[5136]: I0320 08:42:04.617197 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq8zq\" (UniqueName: \"kubernetes.io/projected/78e36980-52e2-4a59-9374-b2f1150fcb20-kube-api-access-sq8zq\") on node \"crc\" DevicePath \"\"" Mar 20 08:42:04 crc kubenswrapper[5136]: I0320 08:42:04.966680 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566602-fmfs9" event={"ID":"78e36980-52e2-4a59-9374-b2f1150fcb20","Type":"ContainerDied","Data":"bd20f8c76dd9d09f073435747b85590383d5c4e75188df4f3da12fea34405641"} Mar 20 08:42:04 crc kubenswrapper[5136]: I0320 08:42:04.966718 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566602-fmfs9" Mar 20 08:42:04 crc kubenswrapper[5136]: I0320 08:42:04.966720 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd20f8c76dd9d09f073435747b85590383d5c4e75188df4f3da12fea34405641" Mar 20 08:42:05 crc kubenswrapper[5136]: I0320 08:42:05.428084 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566596-npcd2"] Mar 20 08:42:05 crc kubenswrapper[5136]: I0320 08:42:05.440445 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566596-npcd2"] Mar 20 08:42:06 crc kubenswrapper[5136]: I0320 08:42:06.406153 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57673048-5103-4b04-8ef3-777cb1a33601" path="/var/lib/kubelet/pods/57673048-5103-4b04-8ef3-777cb1a33601/volumes" Mar 20 08:42:15 crc kubenswrapper[5136]: I0320 08:42:15.822605 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:42:15 crc kubenswrapper[5136]: I0320 08:42:15.823593 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:42:38 crc kubenswrapper[5136]: I0320 08:42:38.268522 5136 scope.go:117] "RemoveContainer" containerID="ef67e23c79ff3a82593be1acaca432453adbd354cb50446c81640884957e8ffe" Mar 20 08:42:45 crc kubenswrapper[5136]: I0320 08:42:45.822077 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:42:45 crc kubenswrapper[5136]: I0320 08:42:45.822695 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.227210 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Mar 20 08:43:05 crc kubenswrapper[5136]: E0320 08:43:05.228429 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78e36980-52e2-4a59-9374-b2f1150fcb20" containerName="oc" Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.228458 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="78e36980-52e2-4a59-9374-b2f1150fcb20" containerName="oc" Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.228785 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="78e36980-52e2-4a59-9374-b2f1150fcb20" containerName="oc" Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.230033 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.234060 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-k465q" Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.235614 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.352201 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc9kw\" (UniqueName: \"kubernetes.io/projected/30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226-kube-api-access-cc9kw\") pod \"mariadb-copy-data\" (UID: \"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226\") " pod="openstack/mariadb-copy-data" Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.352303 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf\") pod \"mariadb-copy-data\" (UID: \"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226\") " pod="openstack/mariadb-copy-data" Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.454171 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc9kw\" (UniqueName: \"kubernetes.io/projected/30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226-kube-api-access-cc9kw\") pod \"mariadb-copy-data\" (UID: \"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226\") " pod="openstack/mariadb-copy-data" Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.454267 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf\") pod \"mariadb-copy-data\" (UID: \"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226\") " pod="openstack/mariadb-copy-data" 
Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.459336 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.459395 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf\") pod \"mariadb-copy-data\" (UID: \"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6f188fb104d832941e804d190179c78e8fbf49d372cf3c70e7b37a8db21f0157/globalmount\"" pod="openstack/mariadb-copy-data" Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.478425 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc9kw\" (UniqueName: \"kubernetes.io/projected/30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226-kube-api-access-cc9kw\") pod \"mariadb-copy-data\" (UID: \"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226\") " pod="openstack/mariadb-copy-data" Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.486611 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf\") pod \"mariadb-copy-data\" (UID: \"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226\") " pod="openstack/mariadb-copy-data" Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.553443 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Mar 20 08:43:06 crc kubenswrapper[5136]: I0320 08:43:06.082261 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Mar 20 08:43:06 crc kubenswrapper[5136]: I0320 08:43:06.446122 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226","Type":"ContainerStarted","Data":"a92d1246191abb0ecac747ee2b70c76f1c8719ff40ea6ea858ab04c5eaa88bb1"} Mar 20 08:43:06 crc kubenswrapper[5136]: I0320 08:43:06.446197 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226","Type":"ContainerStarted","Data":"2c2faf3df1acecb9c43fb4e3dfa1b1bce7305d443462043dbca7203ee15e6fb8"} Mar 20 08:43:06 crc kubenswrapper[5136]: I0320 08:43:06.461284 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=2.461266155 podStartE2EDuration="2.461266155s" podCreationTimestamp="2026-03-20 08:43:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:43:06.459467668 +0000 UTC m=+6818.718778819" watchObservedRunningTime="2026-03-20 08:43:06.461266155 +0000 UTC m=+6818.720577306" Mar 20 08:43:10 crc kubenswrapper[5136]: I0320 08:43:10.552315 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Mar 20 08:43:10 crc kubenswrapper[5136]: I0320 08:43:10.554934 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 20 08:43:10 crc kubenswrapper[5136]: I0320 08:43:10.561092 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 20 08:43:10 crc kubenswrapper[5136]: I0320 08:43:10.739261 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l9kf\" (UniqueName: \"kubernetes.io/projected/816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62-kube-api-access-2l9kf\") pod \"mariadb-client\" (UID: \"816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62\") " pod="openstack/mariadb-client" Mar 20 08:43:10 crc kubenswrapper[5136]: I0320 08:43:10.840561 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l9kf\" (UniqueName: \"kubernetes.io/projected/816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62-kube-api-access-2l9kf\") pod \"mariadb-client\" (UID: \"816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62\") " pod="openstack/mariadb-client" Mar 20 08:43:10 crc kubenswrapper[5136]: I0320 08:43:10.873353 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l9kf\" (UniqueName: \"kubernetes.io/projected/816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62-kube-api-access-2l9kf\") pod \"mariadb-client\" (UID: \"816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62\") " pod="openstack/mariadb-client" Mar 20 08:43:10 crc kubenswrapper[5136]: I0320 08:43:10.885003 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 20 08:43:11 crc kubenswrapper[5136]: I0320 08:43:11.073658 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-sbbcl"] Mar 20 08:43:11 crc kubenswrapper[5136]: I0320 08:43:11.080564 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-sbbcl"] Mar 20 08:43:11 crc kubenswrapper[5136]: I0320 08:43:11.143395 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 20 08:43:11 crc kubenswrapper[5136]: I0320 08:43:11.492416 5136 generic.go:334] "Generic (PLEG): container finished" podID="816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62" containerID="18976fde7b0e8720c3912ec558d2b411507101e156ae43c4e540472db0f27db1" exitCode=0 Mar 20 08:43:11 crc kubenswrapper[5136]: I0320 08:43:11.492491 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62","Type":"ContainerDied","Data":"18976fde7b0e8720c3912ec558d2b411507101e156ae43c4e540472db0f27db1"} Mar 20 08:43:11 crc kubenswrapper[5136]: I0320 08:43:11.492544 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62","Type":"ContainerStarted","Data":"fbcb77051c8db7e2a986fe94d3dfcafb7ff812751abf891da7573c2b247b8f1d"} Mar 20 08:43:12 crc kubenswrapper[5136]: I0320 08:43:12.406983 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a25720ab-064e-40ce-ae93-03dd9c33cf66" path="/var/lib/kubelet/pods/a25720ab-064e-40ce-ae93-03dd9c33cf66/volumes" Mar 20 08:43:12 crc kubenswrapper[5136]: I0320 08:43:12.750225 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 20 08:43:12 crc kubenswrapper[5136]: I0320 08:43:12.772244 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62/mariadb-client/0.log" Mar 20 08:43:12 crc kubenswrapper[5136]: I0320 08:43:12.801163 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Mar 20 08:43:12 crc kubenswrapper[5136]: I0320 08:43:12.806832 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Mar 20 08:43:12 crc kubenswrapper[5136]: I0320 08:43:12.874172 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l9kf\" (UniqueName: \"kubernetes.io/projected/816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62-kube-api-access-2l9kf\") pod \"816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62\" (UID: \"816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62\") " Mar 20 08:43:12 crc kubenswrapper[5136]: I0320 08:43:12.878299 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62-kube-api-access-2l9kf" (OuterVolumeSpecName: "kube-api-access-2l9kf") pod "816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62" (UID: "816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62"). InnerVolumeSpecName "kube-api-access-2l9kf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:43:12 crc kubenswrapper[5136]: I0320 08:43:12.961797 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Mar 20 08:43:12 crc kubenswrapper[5136]: E0320 08:43:12.962766 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62" containerName="mariadb-client" Mar 20 08:43:12 crc kubenswrapper[5136]: I0320 08:43:12.962786 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62" containerName="mariadb-client" Mar 20 08:43:12 crc kubenswrapper[5136]: I0320 08:43:12.963053 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62" containerName="mariadb-client" Mar 20 08:43:12 crc kubenswrapper[5136]: I0320 08:43:12.963562 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Mar 20 08:43:12 crc kubenswrapper[5136]: I0320 08:43:12.968842 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 20 08:43:12 crc kubenswrapper[5136]: I0320 08:43:12.979569 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2l9kf\" (UniqueName: \"kubernetes.io/projected/816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62-kube-api-access-2l9kf\") on node \"crc\" DevicePath \"\"" Mar 20 08:43:13 crc kubenswrapper[5136]: I0320 08:43:13.081222 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn5j2\" (UniqueName: \"kubernetes.io/projected/14707dd3-6d0b-4720-aeb1-f92f46c97812-kube-api-access-vn5j2\") pod \"mariadb-client\" (UID: \"14707dd3-6d0b-4720-aeb1-f92f46c97812\") " pod="openstack/mariadb-client" Mar 20 08:43:13 crc kubenswrapper[5136]: I0320 08:43:13.183100 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn5j2\" (UniqueName: 
\"kubernetes.io/projected/14707dd3-6d0b-4720-aeb1-f92f46c97812-kube-api-access-vn5j2\") pod \"mariadb-client\" (UID: \"14707dd3-6d0b-4720-aeb1-f92f46c97812\") " pod="openstack/mariadb-client" Mar 20 08:43:13 crc kubenswrapper[5136]: I0320 08:43:13.213656 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn5j2\" (UniqueName: \"kubernetes.io/projected/14707dd3-6d0b-4720-aeb1-f92f46c97812-kube-api-access-vn5j2\") pod \"mariadb-client\" (UID: \"14707dd3-6d0b-4720-aeb1-f92f46c97812\") " pod="openstack/mariadb-client" Mar 20 08:43:13 crc kubenswrapper[5136]: I0320 08:43:13.289199 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Mar 20 08:43:13 crc kubenswrapper[5136]: I0320 08:43:13.509774 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbcb77051c8db7e2a986fe94d3dfcafb7ff812751abf891da7573c2b247b8f1d" Mar 20 08:43:13 crc kubenswrapper[5136]: I0320 08:43:13.510087 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 20 08:43:13 crc kubenswrapper[5136]: I0320 08:43:13.537865 5136 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62" podUID="14707dd3-6d0b-4720-aeb1-f92f46c97812" Mar 20 08:43:13 crc kubenswrapper[5136]: I0320 08:43:13.557132 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 20 08:43:13 crc kubenswrapper[5136]: W0320 08:43:13.563085 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14707dd3_6d0b_4720_aeb1_f92f46c97812.slice/crio-7d4cde7afbb7115163b0ec3aae84ceafb83bb9aa037af2310fec9dbcd1e2706a WatchSource:0}: Error finding container 7d4cde7afbb7115163b0ec3aae84ceafb83bb9aa037af2310fec9dbcd1e2706a: Status 404 returned error can't find the container with id 7d4cde7afbb7115163b0ec3aae84ceafb83bb9aa037af2310fec9dbcd1e2706a Mar 20 08:43:14 crc kubenswrapper[5136]: I0320 08:43:14.416026 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62" path="/var/lib/kubelet/pods/816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62/volumes" Mar 20 08:43:14 crc kubenswrapper[5136]: I0320 08:43:14.517800 5136 generic.go:334] "Generic (PLEG): container finished" podID="14707dd3-6d0b-4720-aeb1-f92f46c97812" containerID="c1c133ed294395cb16b8da83cdd09b7bc0d67e81462296562eb505d9f9f44e6f" exitCode=0 Mar 20 08:43:14 crc kubenswrapper[5136]: I0320 08:43:14.517906 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"14707dd3-6d0b-4720-aeb1-f92f46c97812","Type":"ContainerDied","Data":"c1c133ed294395cb16b8da83cdd09b7bc0d67e81462296562eb505d9f9f44e6f"} Mar 20 08:43:14 crc kubenswrapper[5136]: I0320 08:43:14.517949 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" 
event={"ID":"14707dd3-6d0b-4720-aeb1-f92f46c97812","Type":"ContainerStarted","Data":"7d4cde7afbb7115163b0ec3aae84ceafb83bb9aa037af2310fec9dbcd1e2706a"} Mar 20 08:43:15 crc kubenswrapper[5136]: I0320 08:43:15.821699 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:43:15 crc kubenswrapper[5136]: I0320 08:43:15.822041 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:43:15 crc kubenswrapper[5136]: I0320 08:43:15.822077 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 08:43:15 crc kubenswrapper[5136]: I0320 08:43:15.822650 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 08:43:15 crc kubenswrapper[5136]: I0320 08:43:15.822694 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" gracePeriod=600 Mar 20 08:43:15 crc kubenswrapper[5136]: I0320 08:43:15.874777 
5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Mar 20 08:43:15 crc kubenswrapper[5136]: I0320 08:43:15.894331 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_14707dd3-6d0b-4720-aeb1-f92f46c97812/mariadb-client/0.log" Mar 20 08:43:15 crc kubenswrapper[5136]: I0320 08:43:15.923171 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Mar 20 08:43:15 crc kubenswrapper[5136]: I0320 08:43:15.929043 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Mar 20 08:43:15 crc kubenswrapper[5136]: E0320 08:43:15.952321 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:43:16 crc kubenswrapper[5136]: I0320 08:43:16.028355 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn5j2\" (UniqueName: \"kubernetes.io/projected/14707dd3-6d0b-4720-aeb1-f92f46c97812-kube-api-access-vn5j2\") pod \"14707dd3-6d0b-4720-aeb1-f92f46c97812\" (UID: \"14707dd3-6d0b-4720-aeb1-f92f46c97812\") " Mar 20 08:43:16 crc kubenswrapper[5136]: I0320 08:43:16.039062 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14707dd3-6d0b-4720-aeb1-f92f46c97812-kube-api-access-vn5j2" (OuterVolumeSpecName: "kube-api-access-vn5j2") pod "14707dd3-6d0b-4720-aeb1-f92f46c97812" (UID: "14707dd3-6d0b-4720-aeb1-f92f46c97812"). InnerVolumeSpecName "kube-api-access-vn5j2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:43:16 crc kubenswrapper[5136]: I0320 08:43:16.130188 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn5j2\" (UniqueName: \"kubernetes.io/projected/14707dd3-6d0b-4720-aeb1-f92f46c97812-kube-api-access-vn5j2\") on node \"crc\" DevicePath \"\"" Mar 20 08:43:16 crc kubenswrapper[5136]: I0320 08:43:16.414277 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14707dd3-6d0b-4720-aeb1-f92f46c97812" path="/var/lib/kubelet/pods/14707dd3-6d0b-4720-aeb1-f92f46c97812/volumes" Mar 20 08:43:16 crc kubenswrapper[5136]: I0320 08:43:16.543711 5136 scope.go:117] "RemoveContainer" containerID="c1c133ed294395cb16b8da83cdd09b7bc0d67e81462296562eb505d9f9f44e6f" Mar 20 08:43:16 crc kubenswrapper[5136]: I0320 08:43:16.543922 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Mar 20 08:43:16 crc kubenswrapper[5136]: I0320 08:43:16.555259 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" exitCode=0 Mar 20 08:43:16 crc kubenswrapper[5136]: I0320 08:43:16.555309 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a"} Mar 20 08:43:16 crc kubenswrapper[5136]: I0320 08:43:16.555784 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:43:16 crc kubenswrapper[5136]: E0320 08:43:16.556084 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:43:16 crc kubenswrapper[5136]: I0320 08:43:16.571706 5136 scope.go:117] "RemoveContainer" containerID="fd37942a39d253b6cfedd4ab695ecb8599827a8a29ad0a2c3607795c58193271" Mar 20 08:43:30 crc kubenswrapper[5136]: I0320 08:43:30.396343 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:43:30 crc kubenswrapper[5136]: E0320 08:43:30.397270 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:43:38 crc kubenswrapper[5136]: I0320 08:43:38.344178 5136 scope.go:117] "RemoveContainer" containerID="77614674db5f14222adee033ce4bf5c60259ff8d124c2f5a8301de91d769caa0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.302623 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 20 08:43:45 crc kubenswrapper[5136]: E0320 08:43:45.303544 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14707dd3-6d0b-4720-aeb1-f92f46c97812" containerName="mariadb-client" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.303561 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="14707dd3-6d0b-4720-aeb1-f92f46c97812" containerName="mariadb-client" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.303763 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="14707dd3-6d0b-4720-aeb1-f92f46c97812" containerName="mariadb-client" 
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.304769 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.306593 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.307231 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-sww5j" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.307511 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.307796 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.308941 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.317434 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.324497 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.325928 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.335498 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.337064 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.360610 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.378005 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.397755 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:43:45 crc kubenswrapper[5136]: E0320 08:43:45.398024 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437023 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437063 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p255f\" (UniqueName: \"kubernetes.io/projected/f4fd5c29-d308-41d0-9781-9b6d9625c19c-kube-api-access-p255f\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437089 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hznvw\" (UniqueName: \"kubernetes.io/projected/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-kube-api-access-hznvw\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437130 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437146 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437167 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437184 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437209 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437364 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4fd5c29-d308-41d0-9781-9b6d9625c19c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437423 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437457 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4fd5c29-d308-41d0-9781-9b6d9625c19c-config\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437475 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-config\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437546 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437602 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4fd5c29-d308-41d0-9781-9b6d9625c19c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437624 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437643 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.538649 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48418ecc-b768-4848-b663-1a84761f5b32-config\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.538705 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.538736 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.538752 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.538775 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.538795 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48418ecc-b768-4848-b663-1a84761f5b32-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.538833 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-combined-ca-bundle\") pod 
\"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.538861 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.538896 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.538924 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4fd5c29-d308-41d0-9781-9b6d9625c19c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.538947 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539020 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/48418ecc-b768-4848-b663-1a84761f5b32-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " 
pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539045 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539087 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzrr5\" (UniqueName: \"kubernetes.io/projected/48418ecc-b768-4848-b663-1a84761f5b32-kube-api-access-kzrr5\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539113 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4fd5c29-d308-41d0-9781-9b6d9625c19c-config\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539134 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-config\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539174 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539205 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539235 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4fd5c29-d308-41d0-9781-9b6d9625c19c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539273 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539304 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539327 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539359 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p255f\" (UniqueName: 
\"kubernetes.io/projected/f4fd5c29-d308-41d0-9781-9b6d9625c19c-kube-api-access-p255f\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539386 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hznvw\" (UniqueName: \"kubernetes.io/projected/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-kube-api-access-hznvw\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539786 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4fd5c29-d308-41d0-9781-9b6d9625c19c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.540531 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4fd5c29-d308-41d0-9781-9b6d9625c19c-config\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.540586 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.540616 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-config\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 
08:43:45.540907 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4fd5c29-d308-41d0-9781-9b6d9625c19c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.540907 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.545581 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.545851 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.546179 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.546351 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.546390 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/55c96259ee5b46df72e29f8b4f8354d28810ccf4af15b8156942b05dcca234d3/globalmount\"" pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.546411 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.546441 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/496a4242cd77f4ae3e2362330edd572399213df1c8364c538f89c4da6118351c/globalmount\"" pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.547483 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.551449 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-metrics-certs-tls-certs\") pod 
\"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.553471 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.563151 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p255f\" (UniqueName: \"kubernetes.io/projected/f4fd5c29-d308-41d0-9781-9b6d9625c19c-kube-api-access-p255f\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.567096 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hznvw\" (UniqueName: \"kubernetes.io/projected/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-kube-api-access-hznvw\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.580728 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.584369 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " 
pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.637044 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.640683 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.640749 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/48418ecc-b768-4848-b663-1a84761f5b32-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.640780 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzrr5\" (UniqueName: \"kubernetes.io/projected/48418ecc-b768-4848-b663-1a84761f5b32-kube-api-access-kzrr5\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.640811 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.640885 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48418ecc-b768-4848-b663-1a84761f5b32-config\") pod \"ovsdbserver-nb-1\" (UID: 
\"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.640910 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.640932 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.640949 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48418ecc-b768-4848-b663-1a84761f5b32-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.642034 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48418ecc-b768-4848-b663-1a84761f5b32-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.642296 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/48418ecc-b768-4848-b663-1a84761f5b32-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.642949 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48418ecc-b768-4848-b663-1a84761f5b32-config\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.645211 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.645241 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1b8b1bec848a48f216534b795762c346ec36e6b88f5e71f6ea069d96e42de4bb/globalmount\"" pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.645288 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.646454 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.646887 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.658964 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.661183 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzrr5\" (UniqueName: \"kubernetes.io/projected/48418ecc-b768-4848-b663-1a84761f5b32-kube-api-access-kzrr5\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.675888 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.966593 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.204693 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.210037 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.260014 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.262831 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.266519 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.266647 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.266704 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.266787 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-pn52r" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.269356 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.295999 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Mar 20 08:43:46 crc kubenswrapper[5136]: W0320 08:43:46.321033 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2e6ed27_57ea_4ea9_9d66_e1088b5a07d4.slice/crio-3ed0d6ce099ff1f367dca760d285f74c76c709803b8a996aaf1abb5243af3234 WatchSource:0}: Error finding container 3ed0d6ce099ff1f367dca760d285f74c76c709803b8a996aaf1abb5243af3234: Status 404 returned error can't find the container with id 3ed0d6ce099ff1f367dca760d285f74c76c709803b8a996aaf1abb5243af3234 Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.323498 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.324125 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.328097 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.350027 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.350084 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a276ba4e-bbab-4a83-8fd2-d77573782aa6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.350155 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a276ba4e-bbab-4a83-8fd2-d77573782aa6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.350174 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.350211 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c7dl\" (UniqueName: 
\"kubernetes.io/projected/a276ba4e-bbab-4a83-8fd2-d77573782aa6-kube-api-access-9c7dl\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.350228 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.350258 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a276ba4e-bbab-4a83-8fd2-d77573782aa6-config\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.350276 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.350520 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.361404 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.367375 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.451639 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9c7dl\" (UniqueName: \"kubernetes.io/projected/a276ba4e-bbab-4a83-8fd2-d77573782aa6-kube-api-access-9c7dl\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.451751 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.451801 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a276ba4e-bbab-4a83-8fd2-d77573782aa6-config\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.451852 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.451930 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.451960 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/7e0c945f-6773-4bf8-872d-7eb5110de79f-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.451992 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452024 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452044 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452066 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452083 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452131 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7e0c945f-6773-4bf8-872d-7eb5110de79f-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452151 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452183 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a276ba4e-bbab-4a83-8fd2-d77573782aa6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452210 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452241 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-373c88d9-f88e-464e-b41f-9f601361fa14\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-373c88d9-f88e-464e-b41f-9f601361fa14\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452261 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452288 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xctk\" (UniqueName: \"kubernetes.io/projected/7e0c945f-6773-4bf8-872d-7eb5110de79f-kube-api-access-2xctk\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452316 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452333 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdn2l\" (UniqueName: \"kubernetes.io/projected/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-kube-api-access-zdn2l\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452350 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7e0c945f-6773-4bf8-872d-7eb5110de79f-config\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452376 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a276ba4e-bbab-4a83-8fd2-d77573782aa6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452395 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452427 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-config\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452719 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a276ba4e-bbab-4a83-8fd2-d77573782aa6-config\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.453735 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a276ba4e-bbab-4a83-8fd2-d77573782aa6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc 
kubenswrapper[5136]: I0320 08:43:46.455177 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a276ba4e-bbab-4a83-8fd2-d77573782aa6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.458537 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.458574 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/628e8694e94b6b991b58eb025a6326c93380697a5f5207dd738b0664b132a053/globalmount\"" pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.458829 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.458934 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.459079 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.473889 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c7dl\" (UniqueName: \"kubernetes.io/projected/a276ba4e-bbab-4a83-8fd2-d77573782aa6-kube-api-access-9c7dl\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.476344 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.496033 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.553673 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.553716 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7e0c945f-6773-4bf8-872d-7eb5110de79f-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.553776 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.553798 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.553827 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.553843 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.553883 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7e0c945f-6773-4bf8-872d-7eb5110de79f-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.553902 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.553925 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.553960 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-373c88d9-f88e-464e-b41f-9f601361fa14\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-373c88d9-f88e-464e-b41f-9f601361fa14\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.553979 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.554017 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xctk\" (UniqueName: \"kubernetes.io/projected/7e0c945f-6773-4bf8-872d-7eb5110de79f-kube-api-access-2xctk\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.554033 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-ovsdbserver-sb-tls-certs\") pod 
\"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.554052 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdn2l\" (UniqueName: \"kubernetes.io/projected/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-kube-api-access-zdn2l\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.554074 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0c945f-6773-4bf8-872d-7eb5110de79f-config\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.554132 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-config\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.555254 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-config\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.555675 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7e0c945f-6773-4bf8-872d-7eb5110de79f-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.556002 5136 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0c945f-6773-4bf8-872d-7eb5110de79f-config\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.556484 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7e0c945f-6773-4bf8-872d-7eb5110de79f-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.556553 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.556575 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-373c88d9-f88e-464e-b41f-9f601361fa14\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-373c88d9-f88e-464e-b41f-9f601361fa14\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b506eb6aeafb6e888123d3ce737c799a4338b90b876c36cf088ffddcc411fa0a/globalmount\"" pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.556605 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.557427 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " 
pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.558965 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.559527 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.559556 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ef4f0d2c9cdb4aa595550fee76d7e40469fd109f31b60498ae55a6d92861ae4a/globalmount\"" pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.559658 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.559727 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.561220 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.563688 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.564301 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.589419 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xctk\" (UniqueName: \"kubernetes.io/projected/7e0c945f-6773-4bf8-872d-7eb5110de79f-kube-api-access-2xctk\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.591436 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.603678 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.605464 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdn2l\" (UniqueName: \"kubernetes.io/projected/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-kube-api-access-zdn2l\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.609940 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-373c88d9-f88e-464e-b41f-9f601361fa14\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-373c88d9-f88e-464e-b41f-9f601361fa14\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.650522 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.658108 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.817657 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f4fd5c29-d308-41d0-9781-9b6d9625c19c","Type":"ContainerStarted","Data":"5870d6b24a1657a079a89b9e9211d461a22b66269225de506dabd34bacc879f1"} Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.826273 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4","Type":"ContainerStarted","Data":"3ed0d6ce099ff1f367dca760d285f74c76c709803b8a996aaf1abb5243af3234"} Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.830164 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"48418ecc-b768-4848-b663-1a84761f5b32","Type":"ContainerStarted","Data":"e05bc317ee3c118e98b670a0ea0d818712ef16b644c13d4a02ef27c03d16c608"} Mar 20 08:43:47 crc kubenswrapper[5136]: I0320 08:43:47.121152 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 20 08:43:47 crc kubenswrapper[5136]: W0320 08:43:47.127841 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda276ba4e_bbab_4a83_8fd2_d77573782aa6.slice/crio-56e76ebe8499616e7e38322f1f1fa612b42b6c40fb5b893b8175391424853f23 WatchSource:0}: Error finding container 56e76ebe8499616e7e38322f1f1fa612b42b6c40fb5b893b8175391424853f23: Status 404 returned error can't find the container with id 56e76ebe8499616e7e38322f1f1fa612b42b6c40fb5b893b8175391424853f23 Mar 20 08:43:47 crc kubenswrapper[5136]: I0320 08:43:47.222681 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Mar 20 08:43:47 crc kubenswrapper[5136]: W0320 08:43:47.231037 5136 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e0c945f_6773_4bf8_872d_7eb5110de79f.slice/crio-f9e584984a933fdd738d890b942f2b5e0effc03a45b854e4fbf95b8869495e71 WatchSource:0}: Error finding container f9e584984a933fdd738d890b942f2b5e0effc03a45b854e4fbf95b8869495e71: Status 404 returned error can't find the container with id f9e584984a933fdd738d890b942f2b5e0effc03a45b854e4fbf95b8869495e71 Mar 20 08:43:47 crc kubenswrapper[5136]: I0320 08:43:47.837425 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a276ba4e-bbab-4a83-8fd2-d77573782aa6","Type":"ContainerStarted","Data":"56e76ebe8499616e7e38322f1f1fa612b42b6c40fb5b893b8175391424853f23"} Mar 20 08:43:47 crc kubenswrapper[5136]: I0320 08:43:47.839243 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"7e0c945f-6773-4bf8-872d-7eb5110de79f","Type":"ContainerStarted","Data":"f9e584984a933fdd738d890b942f2b5e0effc03a45b854e4fbf95b8869495e71"} Mar 20 08:43:48 crc kubenswrapper[5136]: I0320 08:43:48.478953 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Mar 20 08:43:48 crc kubenswrapper[5136]: I0320 08:43:48.864881 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9","Type":"ContainerStarted","Data":"e68e48705b4cdb3e57af6e933adb8006e7437ee5218f249bb6e11769fe0ee800"} Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.881063 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9","Type":"ContainerStarted","Data":"8fc531019af740166b849284cf77209f3d16d2b70c219b2f80048bdb08d14be3"} Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.881653 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" 
event={"ID":"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9","Type":"ContainerStarted","Data":"c38165dadbc17f4d62a65b6c603ad906516c2a8ffc342559c38f59e3a77ec1af"} Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.889997 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4","Type":"ContainerStarted","Data":"aa7d63dd9f5d69196ca03f337b7c8a99ee8f9a0db5fd272afae4861760ebba16"} Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.890042 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4","Type":"ContainerStarted","Data":"f34562d040e4070527490f6678fe0df2ac9672c9008bef502d57527a880a76d5"} Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.893420 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"48418ecc-b768-4848-b663-1a84761f5b32","Type":"ContainerStarted","Data":"6504065e281b8c5e6e76cf9517fba24d633b6c7805c447e42fbc49093a42beeb"} Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.893468 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"48418ecc-b768-4848-b663-1a84761f5b32","Type":"ContainerStarted","Data":"50054ee6901ece61fe4a75813bd8a9abcbe38aad68d2fc8adceaeed3cbddce45"} Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.896104 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a276ba4e-bbab-4a83-8fd2-d77573782aa6","Type":"ContainerStarted","Data":"adb0722e1140982d66b6bcc4b53d108b1a1da62c36d81757cff1dc3b6b31b52c"} Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.896221 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a276ba4e-bbab-4a83-8fd2-d77573782aa6","Type":"ContainerStarted","Data":"4df28a5a5a8e8e00719fc2794075f2ee6eccac836a858353622d063e1cd2beff"} Mar 20 08:43:50 crc 
kubenswrapper[5136]: I0320 08:43:50.898941 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"7e0c945f-6773-4bf8-872d-7eb5110de79f","Type":"ContainerStarted","Data":"49b205017f1afa7852c73b13644c48ef8382cbecd7e8c8de906f466d5717a06f"} Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.901757 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f4fd5c29-d308-41d0-9781-9b6d9625c19c","Type":"ContainerStarted","Data":"2ffece60b271290211f6f3963d1642000676cfce31547f3f28dd8ecf96867815"} Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.901796 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f4fd5c29-d308-41d0-9781-9b6d9625c19c","Type":"ContainerStarted","Data":"6359501b6448986da36467b6a23a3fd5909f20740da745780317887a779e734a"} Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.909873 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=4.082371376 podStartE2EDuration="5.909853298s" podCreationTimestamp="2026-03-20 08:43:45 +0000 UTC" firstStartedPulling="2026-03-20 08:43:48.487595391 +0000 UTC m=+6860.746906542" lastFinishedPulling="2026-03-20 08:43:50.315077313 +0000 UTC m=+6862.574388464" observedRunningTime="2026-03-20 08:43:50.908201756 +0000 UTC m=+6863.167512927" watchObservedRunningTime="2026-03-20 08:43:50.909853298 +0000 UTC m=+6863.169164449" Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.936923 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=2.827295423 podStartE2EDuration="5.936902177s" podCreationTimestamp="2026-03-20 08:43:45 +0000 UTC" firstStartedPulling="2026-03-20 08:43:47.131467414 +0000 UTC m=+6859.390778565" lastFinishedPulling="2026-03-20 08:43:50.241074168 +0000 UTC m=+6862.500385319" observedRunningTime="2026-03-20 
08:43:50.929613861 +0000 UTC m=+6863.188925022" watchObservedRunningTime="2026-03-20 08:43:50.936902177 +0000 UTC m=+6863.196213328" Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.955576 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=3.101116413 podStartE2EDuration="6.955558416s" podCreationTimestamp="2026-03-20 08:43:44 +0000 UTC" firstStartedPulling="2026-03-20 08:43:46.32978064 +0000 UTC m=+6858.589091791" lastFinishedPulling="2026-03-20 08:43:50.184222643 +0000 UTC m=+6862.443533794" observedRunningTime="2026-03-20 08:43:50.953868454 +0000 UTC m=+6863.213179595" watchObservedRunningTime="2026-03-20 08:43:50.955558416 +0000 UTC m=+6863.214869567" Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.967586 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.973835 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=3.277810595 podStartE2EDuration="6.973802612s" podCreationTimestamp="2026-03-20 08:43:44 +0000 UTC" firstStartedPulling="2026-03-20 08:43:46.487943607 +0000 UTC m=+6858.747254758" lastFinishedPulling="2026-03-20 08:43:50.183935624 +0000 UTC m=+6862.443246775" observedRunningTime="2026-03-20 08:43:50.970957414 +0000 UTC m=+6863.230268585" watchObservedRunningTime="2026-03-20 08:43:50.973802612 +0000 UTC m=+6863.233113763" Mar 20 08:43:51 crc kubenswrapper[5136]: I0320 08:43:51.002295 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=2.991118539 podStartE2EDuration="7.002279736s" podCreationTimestamp="2026-03-20 08:43:44 +0000 UTC" firstStartedPulling="2026-03-20 08:43:46.209760575 +0000 UTC m=+6858.469071726" lastFinishedPulling="2026-03-20 08:43:50.220921772 +0000 UTC m=+6862.480232923" 
observedRunningTime="2026-03-20 08:43:50.994448082 +0000 UTC m=+6863.253759233" watchObservedRunningTime="2026-03-20 08:43:51.002279736 +0000 UTC m=+6863.261590887" Mar 20 08:43:51 crc kubenswrapper[5136]: I0320 08:43:51.592311 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:51 crc kubenswrapper[5136]: I0320 08:43:51.638092 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:51 crc kubenswrapper[5136]: I0320 08:43:51.659787 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:51 crc kubenswrapper[5136]: I0320 08:43:51.659919 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:51 crc kubenswrapper[5136]: I0320 08:43:51.933561 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"7e0c945f-6773-4bf8-872d-7eb5110de79f","Type":"ContainerStarted","Data":"ff7aad450b5bf148e0d8e2a6a1a41eb2960ad7d591108755ada3cf41b5ab3619"} Mar 20 08:43:51 crc kubenswrapper[5136]: I0320 08:43:51.957864 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=3.483118071 podStartE2EDuration="6.957846544s" podCreationTimestamp="2026-03-20 08:43:45 +0000 UTC" firstStartedPulling="2026-03-20 08:43:47.233767598 +0000 UTC m=+6859.493078749" lastFinishedPulling="2026-03-20 08:43:50.708496071 +0000 UTC m=+6862.967807222" observedRunningTime="2026-03-20 08:43:51.952125917 +0000 UTC m=+6864.211437078" watchObservedRunningTime="2026-03-20 08:43:51.957846544 +0000 UTC m=+6864.217157695" Mar 20 08:43:51 crc kubenswrapper[5136]: I0320 08:43:51.967123 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:52 crc kubenswrapper[5136]: I0320 08:43:52.593272 5136 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:52 crc kubenswrapper[5136]: I0320 08:43:52.651613 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:52 crc kubenswrapper[5136]: I0320 08:43:52.658927 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:54 crc kubenswrapper[5136]: I0320 08:43:54.676886 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:54 crc kubenswrapper[5136]: I0320 08:43:54.677500 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:54 crc kubenswrapper[5136]: I0320 08:43:54.701244 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:54 crc kubenswrapper[5136]: I0320 08:43:54.702185 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.001162 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.040339 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.311416 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-559cd67f5f-bzx2s"] Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.315405 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.364444 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.375404 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-559cd67f5f-bzx2s"] Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.414025 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-config\") pod \"dnsmasq-dns-559cd67f5f-bzx2s\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.414271 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-ovsdbserver-nb\") pod \"dnsmasq-dns-559cd67f5f-bzx2s\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.414388 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-dns-svc\") pod \"dnsmasq-dns-559cd67f5f-bzx2s\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.414561 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgg4l\" (UniqueName: \"kubernetes.io/projected/0eedd685-b07d-42b2-b7d7-94d10fbb7500-kube-api-access-pgg4l\") pod \"dnsmasq-dns-559cd67f5f-bzx2s\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " 
pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.516646 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgg4l\" (UniqueName: \"kubernetes.io/projected/0eedd685-b07d-42b2-b7d7-94d10fbb7500-kube-api-access-pgg4l\") pod \"dnsmasq-dns-559cd67f5f-bzx2s\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.516751 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-config\") pod \"dnsmasq-dns-559cd67f5f-bzx2s\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.516793 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-ovsdbserver-nb\") pod \"dnsmasq-dns-559cd67f5f-bzx2s\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.516832 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-dns-svc\") pod \"dnsmasq-dns-559cd67f5f-bzx2s\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.517648 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-dns-svc\") pod \"dnsmasq-dns-559cd67f5f-bzx2s\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 
08:43:55.518401 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-config\") pod \"dnsmasq-dns-559cd67f5f-bzx2s\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.518902 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-ovsdbserver-nb\") pod \"dnsmasq-dns-559cd67f5f-bzx2s\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.534443 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgg4l\" (UniqueName: \"kubernetes.io/projected/0eedd685-b07d-42b2-b7d7-94d10fbb7500-kube-api-access-pgg4l\") pod \"dnsmasq-dns-559cd67f5f-bzx2s\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.637598 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.682424 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.695353 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.703570 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.712625 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.713653 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.721060 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.730285 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.803557 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.841158 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.201615 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-559cd67f5f-bzx2s"] Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.226015 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d68955ff-c2m5x"] Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.227873 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.230172 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.241076 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d68955ff-c2m5x"] Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.273530 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-559cd67f5f-bzx2s"] Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.334182 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-config\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.334355 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-ovsdbserver-sb\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.334502 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-ovsdbserver-nb\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.334606 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-dns-svc\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.334688 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snk2j\" (UniqueName: \"kubernetes.io/projected/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-kube-api-access-snk2j\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.436877 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-ovsdbserver-nb\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.436955 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-dns-svc\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.437017 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snk2j\" (UniqueName: \"kubernetes.io/projected/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-kube-api-access-snk2j\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.437076 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-config\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.437760 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-ovsdbserver-nb\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.437919 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-dns-svc\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.438397 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-config\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.439282 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-ovsdbserver-sb\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.440133 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-ovsdbserver-sb\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: 
\"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.460597 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snk2j\" (UniqueName: \"kubernetes.io/projected/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-kube-api-access-snk2j\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.559905 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.980315 5136 generic.go:334] "Generic (PLEG): container finished" podID="0eedd685-b07d-42b2-b7d7-94d10fbb7500" containerID="8cc817dfc7c497e6a23462bb83921054c2b5f523e401e310e81939eba77ad619" exitCode=0 Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.980387 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" event={"ID":"0eedd685-b07d-42b2-b7d7-94d10fbb7500","Type":"ContainerDied","Data":"8cc817dfc7c497e6a23462bb83921054c2b5f523e401e310e81939eba77ad619"} Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.980808 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" event={"ID":"0eedd685-b07d-42b2-b7d7-94d10fbb7500","Type":"ContainerStarted","Data":"caff4e6f120f85a12ee92017b89baedc55dfa14fc848a2243eb2f5949805f777"} Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.051430 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d68955ff-c2m5x"] Mar 20 08:43:57 crc kubenswrapper[5136]: W0320 08:43:57.061601 5136 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80ba4d56_2bee_4ab9_9acd_c7588d675a4b.slice/crio-167230fdb5149a52edd1822e02c35390c6ca18f21c97fbb6e377a5b71350b091 WatchSource:0}: Error finding container 167230fdb5149a52edd1822e02c35390c6ca18f21c97fbb6e377a5b71350b091: Status 404 returned error can't find the container with id 167230fdb5149a52edd1822e02c35390c6ca18f21c97fbb6e377a5b71350b091 Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.246622 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.357362 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgg4l\" (UniqueName: \"kubernetes.io/projected/0eedd685-b07d-42b2-b7d7-94d10fbb7500-kube-api-access-pgg4l\") pod \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.357425 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-ovsdbserver-nb\") pod \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.357641 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-config\") pod \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.357668 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-dns-svc\") pod \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\" (UID: 
\"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.361684 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eedd685-b07d-42b2-b7d7-94d10fbb7500-kube-api-access-pgg4l" (OuterVolumeSpecName: "kube-api-access-pgg4l") pod "0eedd685-b07d-42b2-b7d7-94d10fbb7500" (UID: "0eedd685-b07d-42b2-b7d7-94d10fbb7500"). InnerVolumeSpecName "kube-api-access-pgg4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.376041 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0eedd685-b07d-42b2-b7d7-94d10fbb7500" (UID: "0eedd685-b07d-42b2-b7d7-94d10fbb7500"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.377496 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-config" (OuterVolumeSpecName: "config") pod "0eedd685-b07d-42b2-b7d7-94d10fbb7500" (UID: "0eedd685-b07d-42b2-b7d7-94d10fbb7500"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.388521 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0eedd685-b07d-42b2-b7d7-94d10fbb7500" (UID: "0eedd685-b07d-42b2-b7d7-94d10fbb7500"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.397237 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:43:57 crc kubenswrapper[5136]: E0320 08:43:57.398010 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.459736 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.459772 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.459785 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgg4l\" (UniqueName: \"kubernetes.io/projected/0eedd685-b07d-42b2-b7d7-94d10fbb7500-kube-api-access-pgg4l\") on node \"crc\" DevicePath \"\"" Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.459798 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.991988 5136 generic.go:334] "Generic (PLEG): container finished" podID="80ba4d56-2bee-4ab9-9acd-c7588d675a4b" 
containerID="e3d5db568a0a051b325af6f8c22b6c105820123adce2d9ee29ab549861506fd4" exitCode=0 Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.992074 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" event={"ID":"80ba4d56-2bee-4ab9-9acd-c7588d675a4b","Type":"ContainerDied","Data":"e3d5db568a0a051b325af6f8c22b6c105820123adce2d9ee29ab549861506fd4"} Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.992102 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" event={"ID":"80ba4d56-2bee-4ab9-9acd-c7588d675a4b","Type":"ContainerStarted","Data":"167230fdb5149a52edd1822e02c35390c6ca18f21c97fbb6e377a5b71350b091"} Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.996793 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" event={"ID":"0eedd685-b07d-42b2-b7d7-94d10fbb7500","Type":"ContainerDied","Data":"caff4e6f120f85a12ee92017b89baedc55dfa14fc848a2243eb2f5949805f777"} Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.996866 5136 scope.go:117] "RemoveContainer" containerID="8cc817dfc7c497e6a23462bb83921054c2b5f523e401e310e81939eba77ad619" Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.996944 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:58 crc kubenswrapper[5136]: I0320 08:43:58.212126 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-559cd67f5f-bzx2s"] Mar 20 08:43:58 crc kubenswrapper[5136]: I0320 08:43:58.241909 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-559cd67f5f-bzx2s"] Mar 20 08:43:58 crc kubenswrapper[5136]: I0320 08:43:58.407736 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eedd685-b07d-42b2-b7d7-94d10fbb7500" path="/var/lib/kubelet/pods/0eedd685-b07d-42b2-b7d7-94d10fbb7500/volumes" Mar 20 08:43:58 crc kubenswrapper[5136]: I0320 08:43:58.800845 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Mar 20 08:43:58 crc kubenswrapper[5136]: E0320 08:43:58.801167 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eedd685-b07d-42b2-b7d7-94d10fbb7500" containerName="init" Mar 20 08:43:58 crc kubenswrapper[5136]: I0320 08:43:58.801183 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eedd685-b07d-42b2-b7d7-94d10fbb7500" containerName="init" Mar 20 08:43:58 crc kubenswrapper[5136]: I0320 08:43:58.801341 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eedd685-b07d-42b2-b7d7-94d10fbb7500" containerName="init" Mar 20 08:43:58 crc kubenswrapper[5136]: I0320 08:43:58.801882 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Mar 20 08:43:58 crc kubenswrapper[5136]: I0320 08:43:58.804028 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Mar 20 08:43:58 crc kubenswrapper[5136]: I0320 08:43:58.808372 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Mar 20 08:43:58 crc kubenswrapper[5136]: I0320 08:43:58.988922 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkx6j\" (UniqueName: \"kubernetes.io/projected/c068d291-989b-4247-8cee-0596033c8ce5-kube-api-access-mkx6j\") pod \"ovn-copy-data\" (UID: \"c068d291-989b-4247-8cee-0596033c8ce5\") " pod="openstack/ovn-copy-data" Mar 20 08:43:58 crc kubenswrapper[5136]: I0320 08:43:58.988986 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f678964a-3590-4064-b82f-274887925e72\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f678964a-3590-4064-b82f-274887925e72\") pod \"ovn-copy-data\" (UID: \"c068d291-989b-4247-8cee-0596033c8ce5\") " pod="openstack/ovn-copy-data" Mar 20 08:43:58 crc kubenswrapper[5136]: I0320 08:43:58.989350 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/c068d291-989b-4247-8cee-0596033c8ce5-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"c068d291-989b-4247-8cee-0596033c8ce5\") " pod="openstack/ovn-copy-data" Mar 20 08:43:59 crc kubenswrapper[5136]: I0320 08:43:59.006036 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" event={"ID":"80ba4d56-2bee-4ab9-9acd-c7588d675a4b","Type":"ContainerStarted","Data":"fa96e658106264d429d004b45b4bacbef4dfbc220f9941975579c5341fa77132"} Mar 20 08:43:59 crc kubenswrapper[5136]: I0320 08:43:59.006185 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:59 crc kubenswrapper[5136]: I0320 08:43:59.022525 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" podStartSLOduration=3.022506944 podStartE2EDuration="3.022506944s" podCreationTimestamp="2026-03-20 08:43:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:43:59.021327387 +0000 UTC m=+6871.280638538" watchObservedRunningTime="2026-03-20 08:43:59.022506944 +0000 UTC m=+6871.281818095" Mar 20 08:43:59 crc kubenswrapper[5136]: I0320 08:43:59.091058 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/c068d291-989b-4247-8cee-0596033c8ce5-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"c068d291-989b-4247-8cee-0596033c8ce5\") " pod="openstack/ovn-copy-data" Mar 20 08:43:59 crc kubenswrapper[5136]: I0320 08:43:59.091184 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkx6j\" (UniqueName: \"kubernetes.io/projected/c068d291-989b-4247-8cee-0596033c8ce5-kube-api-access-mkx6j\") pod \"ovn-copy-data\" (UID: \"c068d291-989b-4247-8cee-0596033c8ce5\") " pod="openstack/ovn-copy-data" Mar 20 08:43:59 crc kubenswrapper[5136]: I0320 08:43:59.091230 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f678964a-3590-4064-b82f-274887925e72\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f678964a-3590-4064-b82f-274887925e72\") pod \"ovn-copy-data\" (UID: \"c068d291-989b-4247-8cee-0596033c8ce5\") " pod="openstack/ovn-copy-data" Mar 20 08:43:59 crc kubenswrapper[5136]: I0320 08:43:59.095064 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 20 08:43:59 crc kubenswrapper[5136]: I0320 08:43:59.095113 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f678964a-3590-4064-b82f-274887925e72\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f678964a-3590-4064-b82f-274887925e72\") pod \"ovn-copy-data\" (UID: \"c068d291-989b-4247-8cee-0596033c8ce5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f22aae1ccf05e63f6579bb99a16fd344875c34039a4f43ed5d40a64cbfffb0e7/globalmount\"" pod="openstack/ovn-copy-data" Mar 20 08:43:59 crc kubenswrapper[5136]: I0320 08:43:59.095546 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/c068d291-989b-4247-8cee-0596033c8ce5-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"c068d291-989b-4247-8cee-0596033c8ce5\") " pod="openstack/ovn-copy-data" Mar 20 08:43:59 crc kubenswrapper[5136]: I0320 08:43:59.108707 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkx6j\" (UniqueName: \"kubernetes.io/projected/c068d291-989b-4247-8cee-0596033c8ce5-kube-api-access-mkx6j\") pod \"ovn-copy-data\" (UID: \"c068d291-989b-4247-8cee-0596033c8ce5\") " pod="openstack/ovn-copy-data" Mar 20 08:43:59 crc kubenswrapper[5136]: I0320 08:43:59.124268 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f678964a-3590-4064-b82f-274887925e72\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f678964a-3590-4064-b82f-274887925e72\") pod \"ovn-copy-data\" (UID: \"c068d291-989b-4247-8cee-0596033c8ce5\") " pod="openstack/ovn-copy-data" Mar 20 08:43:59 crc kubenswrapper[5136]: I0320 08:43:59.426252 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Mar 20 08:43:59 crc kubenswrapper[5136]: I0320 08:43:59.964324 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Mar 20 08:44:00 crc kubenswrapper[5136]: I0320 08:44:00.017196 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"c068d291-989b-4247-8cee-0596033c8ce5","Type":"ContainerStarted","Data":"e581eb3896caa8dce4da5d70ae2539c97df467c09420153c45b9ba77109b2e63"} Mar 20 08:44:00 crc kubenswrapper[5136]: I0320 08:44:00.133103 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566604-xrrdd"] Mar 20 08:44:00 crc kubenswrapper[5136]: I0320 08:44:00.134452 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566604-xrrdd" Mar 20 08:44:00 crc kubenswrapper[5136]: I0320 08:44:00.137199 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:44:00 crc kubenswrapper[5136]: I0320 08:44:00.137788 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:44:00 crc kubenswrapper[5136]: I0320 08:44:00.137984 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:44:00 crc kubenswrapper[5136]: I0320 08:44:00.142727 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566604-xrrdd"] Mar 20 08:44:00 crc kubenswrapper[5136]: I0320 08:44:00.314682 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqlbm\" (UniqueName: \"kubernetes.io/projected/372179a0-537a-4126-97c1-2d6a045e8798-kube-api-access-sqlbm\") pod \"auto-csr-approver-29566604-xrrdd\" (UID: \"372179a0-537a-4126-97c1-2d6a045e8798\") " 
pod="openshift-infra/auto-csr-approver-29566604-xrrdd" Mar 20 08:44:00 crc kubenswrapper[5136]: I0320 08:44:00.417189 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqlbm\" (UniqueName: \"kubernetes.io/projected/372179a0-537a-4126-97c1-2d6a045e8798-kube-api-access-sqlbm\") pod \"auto-csr-approver-29566604-xrrdd\" (UID: \"372179a0-537a-4126-97c1-2d6a045e8798\") " pod="openshift-infra/auto-csr-approver-29566604-xrrdd" Mar 20 08:44:00 crc kubenswrapper[5136]: I0320 08:44:00.437959 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqlbm\" (UniqueName: \"kubernetes.io/projected/372179a0-537a-4126-97c1-2d6a045e8798-kube-api-access-sqlbm\") pod \"auto-csr-approver-29566604-xrrdd\" (UID: \"372179a0-537a-4126-97c1-2d6a045e8798\") " pod="openshift-infra/auto-csr-approver-29566604-xrrdd" Mar 20 08:44:00 crc kubenswrapper[5136]: I0320 08:44:00.455140 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566604-xrrdd" Mar 20 08:44:00 crc kubenswrapper[5136]: I0320 08:44:00.855731 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566604-xrrdd"] Mar 20 08:44:00 crc kubenswrapper[5136]: W0320 08:44:00.865227 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod372179a0_537a_4126_97c1_2d6a045e8798.slice/crio-9d1640700b6eb174982a1eb1fab80d8b5460dd14040a59516a0348ac05eb2acd WatchSource:0}: Error finding container 9d1640700b6eb174982a1eb1fab80d8b5460dd14040a59516a0348ac05eb2acd: Status 404 returned error can't find the container with id 9d1640700b6eb174982a1eb1fab80d8b5460dd14040a59516a0348ac05eb2acd Mar 20 08:44:01 crc kubenswrapper[5136]: I0320 08:44:01.025981 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" 
event={"ID":"c068d291-989b-4247-8cee-0596033c8ce5","Type":"ContainerStarted","Data":"b41467e56da65e5424050a8c3c0bd7b4c18471ad0ef23aa5de432ccb8a7ee386"} Mar 20 08:44:01 crc kubenswrapper[5136]: I0320 08:44:01.027843 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566604-xrrdd" event={"ID":"372179a0-537a-4126-97c1-2d6a045e8798","Type":"ContainerStarted","Data":"9d1640700b6eb174982a1eb1fab80d8b5460dd14040a59516a0348ac05eb2acd"} Mar 20 08:44:01 crc kubenswrapper[5136]: I0320 08:44:01.042226 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.858317195 podStartE2EDuration="4.04220654s" podCreationTimestamp="2026-03-20 08:43:57 +0000 UTC" firstStartedPulling="2026-03-20 08:43:59.969002412 +0000 UTC m=+6872.228313563" lastFinishedPulling="2026-03-20 08:44:00.152891757 +0000 UTC m=+6872.412202908" observedRunningTime="2026-03-20 08:44:01.037095501 +0000 UTC m=+6873.296406672" watchObservedRunningTime="2026-03-20 08:44:01.04220654 +0000 UTC m=+6873.301517691" Mar 20 08:44:03 crc kubenswrapper[5136]: I0320 08:44:03.047234 5136 generic.go:334] "Generic (PLEG): container finished" podID="372179a0-537a-4126-97c1-2d6a045e8798" containerID="a7d9dee7dfd341c20d54bcc9a10648dd04c5eaeec50a978661f3c530263c499e" exitCode=0 Mar 20 08:44:03 crc kubenswrapper[5136]: I0320 08:44:03.047280 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566604-xrrdd" event={"ID":"372179a0-537a-4126-97c1-2d6a045e8798","Type":"ContainerDied","Data":"a7d9dee7dfd341c20d54bcc9a10648dd04c5eaeec50a978661f3c530263c499e"} Mar 20 08:44:04 crc kubenswrapper[5136]: I0320 08:44:04.420946 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566604-xrrdd" Mar 20 08:44:04 crc kubenswrapper[5136]: I0320 08:44:04.590641 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqlbm\" (UniqueName: \"kubernetes.io/projected/372179a0-537a-4126-97c1-2d6a045e8798-kube-api-access-sqlbm\") pod \"372179a0-537a-4126-97c1-2d6a045e8798\" (UID: \"372179a0-537a-4126-97c1-2d6a045e8798\") " Mar 20 08:44:04 crc kubenswrapper[5136]: I0320 08:44:04.599167 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/372179a0-537a-4126-97c1-2d6a045e8798-kube-api-access-sqlbm" (OuterVolumeSpecName: "kube-api-access-sqlbm") pod "372179a0-537a-4126-97c1-2d6a045e8798" (UID: "372179a0-537a-4126-97c1-2d6a045e8798"). InnerVolumeSpecName "kube-api-access-sqlbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:44:04 crc kubenswrapper[5136]: I0320 08:44:04.692535 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqlbm\" (UniqueName: \"kubernetes.io/projected/372179a0-537a-4126-97c1-2d6a045e8798-kube-api-access-sqlbm\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:05 crc kubenswrapper[5136]: I0320 08:44:05.067251 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566604-xrrdd" event={"ID":"372179a0-537a-4126-97c1-2d6a045e8798","Type":"ContainerDied","Data":"9d1640700b6eb174982a1eb1fab80d8b5460dd14040a59516a0348ac05eb2acd"} Mar 20 08:44:05 crc kubenswrapper[5136]: I0320 08:44:05.067289 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d1640700b6eb174982a1eb1fab80d8b5460dd14040a59516a0348ac05eb2acd" Mar 20 08:44:05 crc kubenswrapper[5136]: I0320 08:44:05.067304 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566604-xrrdd" Mar 20 08:44:05 crc kubenswrapper[5136]: I0320 08:44:05.494905 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566598-9zdgt"] Mar 20 08:44:05 crc kubenswrapper[5136]: I0320 08:44:05.501414 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566598-9zdgt"] Mar 20 08:44:06 crc kubenswrapper[5136]: I0320 08:44:06.404434 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9379207f-99bf-4561-8979-f27be8f510ac" path="/var/lib/kubelet/pods/9379207f-99bf-4561-8979-f27be8f510ac/volumes" Mar 20 08:44:06 crc kubenswrapper[5136]: I0320 08:44:06.561550 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:44:06 crc kubenswrapper[5136]: I0320 08:44:06.631134 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b5c84b9cc-cxrps"] Mar 20 08:44:06 crc kubenswrapper[5136]: I0320 08:44:06.631392 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" podUID="e61df6ca-2419-400a-8790-9695f75c6d92" containerName="dnsmasq-dns" containerID="cri-o://8c3b72a05088d18b83e8fd4c523a6250996da641765be00333d607a1dfe71673" gracePeriod=10 Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 08:44:07.085711 5136 generic.go:334] "Generic (PLEG): container finished" podID="e61df6ca-2419-400a-8790-9695f75c6d92" containerID="8c3b72a05088d18b83e8fd4c523a6250996da641765be00333d607a1dfe71673" exitCode=0 Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 08:44:07.085760 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" event={"ID":"e61df6ca-2419-400a-8790-9695f75c6d92","Type":"ContainerDied","Data":"8c3b72a05088d18b83e8fd4c523a6250996da641765be00333d607a1dfe71673"} Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 
08:44:07.085787 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" event={"ID":"e61df6ca-2419-400a-8790-9695f75c6d92","Type":"ContainerDied","Data":"5f995642f784fe24cc982d1a64669bee0b35f9981d3b63db2aa6b8236cd2ea18"} Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 08:44:07.085798 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f995642f784fe24cc982d1a64669bee0b35f9981d3b63db2aa6b8236cd2ea18" Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 08:44:07.094568 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 08:44:07.246141 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e61df6ca-2419-400a-8790-9695f75c6d92-dns-svc\") pod \"e61df6ca-2419-400a-8790-9695f75c6d92\" (UID: \"e61df6ca-2419-400a-8790-9695f75c6d92\") " Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 08:44:07.246617 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxv65\" (UniqueName: \"kubernetes.io/projected/e61df6ca-2419-400a-8790-9695f75c6d92-kube-api-access-kxv65\") pod \"e61df6ca-2419-400a-8790-9695f75c6d92\" (UID: \"e61df6ca-2419-400a-8790-9695f75c6d92\") " Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 08:44:07.247283 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e61df6ca-2419-400a-8790-9695f75c6d92-config\") pod \"e61df6ca-2419-400a-8790-9695f75c6d92\" (UID: \"e61df6ca-2419-400a-8790-9695f75c6d92\") " Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 08:44:07.255619 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e61df6ca-2419-400a-8790-9695f75c6d92-kube-api-access-kxv65" (OuterVolumeSpecName: "kube-api-access-kxv65") pod 
"e61df6ca-2419-400a-8790-9695f75c6d92" (UID: "e61df6ca-2419-400a-8790-9695f75c6d92"). InnerVolumeSpecName "kube-api-access-kxv65". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 08:44:07.295140 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e61df6ca-2419-400a-8790-9695f75c6d92-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e61df6ca-2419-400a-8790-9695f75c6d92" (UID: "e61df6ca-2419-400a-8790-9695f75c6d92"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 08:44:07.296036 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e61df6ca-2419-400a-8790-9695f75c6d92-config" (OuterVolumeSpecName: "config") pod "e61df6ca-2419-400a-8790-9695f75c6d92" (UID: "e61df6ca-2419-400a-8790-9695f75c6d92"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 08:44:07.348973 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e61df6ca-2419-400a-8790-9695f75c6d92-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 08:44:07.349015 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxv65\" (UniqueName: \"kubernetes.io/projected/e61df6ca-2419-400a-8790-9695f75c6d92-kube-api-access-kxv65\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 08:44:07.349028 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e61df6ca-2419-400a-8790-9695f75c6d92-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.093723 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.144149 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b5c84b9cc-cxrps"] Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.152759 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b5c84b9cc-cxrps"] Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.400694 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:44:08 crc kubenswrapper[5136]: E0320 08:44:08.401277 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.405801 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e61df6ca-2419-400a-8790-9695f75c6d92" path="/var/lib/kubelet/pods/e61df6ca-2419-400a-8790-9695f75c6d92/volumes" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.855728 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 20 08:44:08 crc kubenswrapper[5136]: E0320 08:44:08.856556 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e61df6ca-2419-400a-8790-9695f75c6d92" containerName="init" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.856577 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e61df6ca-2419-400a-8790-9695f75c6d92" containerName="init" Mar 20 08:44:08 crc kubenswrapper[5136]: E0320 08:44:08.856594 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e61df6ca-2419-400a-8790-9695f75c6d92" containerName="dnsmasq-dns" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.856600 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e61df6ca-2419-400a-8790-9695f75c6d92" containerName="dnsmasq-dns" Mar 20 08:44:08 crc kubenswrapper[5136]: E0320 08:44:08.856616 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372179a0-537a-4126-97c1-2d6a045e8798" containerName="oc" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.856624 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="372179a0-537a-4126-97c1-2d6a045e8798" containerName="oc" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.857009 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e61df6ca-2419-400a-8790-9695f75c6d92" containerName="dnsmasq-dns" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.857033 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="372179a0-537a-4126-97c1-2d6a045e8798" containerName="oc" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.858872 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.871592 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.871901 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.871897 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-tk76c" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.872669 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.903987 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.975984 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.976080 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22659681-bc2b-4056-81d6-96b046e45712-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.976106 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22659681-bc2b-4056-81d6-96b046e45712-config\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 
20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.976131 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.976183 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.976204 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w7g9\" (UniqueName: \"kubernetes.io/projected/22659681-bc2b-4056-81d6-96b046e45712-kube-api-access-2w7g9\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.976238 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22659681-bc2b-4056-81d6-96b046e45712-scripts\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.078116 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22659681-bc2b-4056-81d6-96b046e45712-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.078384 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/22659681-bc2b-4056-81d6-96b046e45712-config\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.078488 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.078615 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.078714 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w7g9\" (UniqueName: \"kubernetes.io/projected/22659681-bc2b-4056-81d6-96b046e45712-kube-api-access-2w7g9\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.078797 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22659681-bc2b-4056-81d6-96b046e45712-scripts\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.078939 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " 
pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.080106 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22659681-bc2b-4056-81d6-96b046e45712-scripts\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.080366 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22659681-bc2b-4056-81d6-96b046e45712-config\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.081139 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22659681-bc2b-4056-81d6-96b046e45712-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.085697 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.085710 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.091589 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.096627 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w7g9\" (UniqueName: \"kubernetes.io/projected/22659681-bc2b-4056-81d6-96b046e45712-kube-api-access-2w7g9\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.202136 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.707338 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 20 08:44:09 crc kubenswrapper[5136]: W0320 08:44:09.718724 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22659681_bc2b_4056_81d6_96b046e45712.slice/crio-967dc1b56184016d81cec7d8f7dbb1450bc76817bce0736f60753fc52d0a9ea2 WatchSource:0}: Error finding container 967dc1b56184016d81cec7d8f7dbb1450bc76817bce0736f60753fc52d0a9ea2: Status 404 returned error can't find the container with id 967dc1b56184016d81cec7d8f7dbb1450bc76817bce0736f60753fc52d0a9ea2 Mar 20 08:44:10 crc kubenswrapper[5136]: I0320 08:44:10.109958 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"22659681-bc2b-4056-81d6-96b046e45712","Type":"ContainerStarted","Data":"967dc1b56184016d81cec7d8f7dbb1450bc76817bce0736f60753fc52d0a9ea2"} Mar 20 08:44:11 crc kubenswrapper[5136]: I0320 08:44:11.119546 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"22659681-bc2b-4056-81d6-96b046e45712","Type":"ContainerStarted","Data":"43d8180654ac711b0a6c655f92be552ad6bb0d4e4426596385b695958afa2b74"} Mar 20 08:44:11 crc kubenswrapper[5136]: I0320 08:44:11.119905 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"22659681-bc2b-4056-81d6-96b046e45712","Type":"ContainerStarted","Data":"491de9d04cd0e7b590564b9e33c6d2d10b2f1cc31960d95aac6f47504fb3ad0d"} Mar 20 08:44:11 crc kubenswrapper[5136]: I0320 08:44:11.119930 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 20 08:44:11 crc kubenswrapper[5136]: I0320 08:44:11.144665 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.436347217 podStartE2EDuration="3.144616713s" podCreationTimestamp="2026-03-20 08:44:08 +0000 UTC" firstStartedPulling="2026-03-20 08:44:09.722577911 +0000 UTC m=+6881.981889062" lastFinishedPulling="2026-03-20 08:44:10.430847407 +0000 UTC m=+6882.690158558" observedRunningTime="2026-03-20 08:44:11.139793064 +0000 UTC m=+6883.399104215" watchObservedRunningTime="2026-03-20 08:44:11.144616713 +0000 UTC m=+6883.403927874" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.515309 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-blnd4"] Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.517233 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-blnd4" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.525074 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-3614-account-create-update-dp5t6"] Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.526236 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3614-account-create-update-dp5t6" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.527603 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.533268 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-blnd4"] Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.540665 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3614-account-create-update-dp5t6"] Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.619171 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pwxt\" (UniqueName: \"kubernetes.io/projected/0749652f-3995-4e34-ba17-55eac4c3530c-kube-api-access-4pwxt\") pod \"keystone-db-create-blnd4\" (UID: \"0749652f-3995-4e34-ba17-55eac4c3530c\") " pod="openstack/keystone-db-create-blnd4" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.619224 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-456dr\" (UniqueName: \"kubernetes.io/projected/0fb13f3a-3785-4650-8381-e4d5e6fa7f73-kube-api-access-456dr\") pod \"keystone-3614-account-create-update-dp5t6\" (UID: \"0fb13f3a-3785-4650-8381-e4d5e6fa7f73\") " pod="openstack/keystone-3614-account-create-update-dp5t6" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.619278 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0749652f-3995-4e34-ba17-55eac4c3530c-operator-scripts\") pod \"keystone-db-create-blnd4\" (UID: \"0749652f-3995-4e34-ba17-55eac4c3530c\") " pod="openstack/keystone-db-create-blnd4" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.619330 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fb13f3a-3785-4650-8381-e4d5e6fa7f73-operator-scripts\") pod \"keystone-3614-account-create-update-dp5t6\" (UID: \"0fb13f3a-3785-4650-8381-e4d5e6fa7f73\") " pod="openstack/keystone-3614-account-create-update-dp5t6" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.738116 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fb13f3a-3785-4650-8381-e4d5e6fa7f73-operator-scripts\") pod \"keystone-3614-account-create-update-dp5t6\" (UID: \"0fb13f3a-3785-4650-8381-e4d5e6fa7f73\") " pod="openstack/keystone-3614-account-create-update-dp5t6" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.738755 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pwxt\" (UniqueName: \"kubernetes.io/projected/0749652f-3995-4e34-ba17-55eac4c3530c-kube-api-access-4pwxt\") pod \"keystone-db-create-blnd4\" (UID: \"0749652f-3995-4e34-ba17-55eac4c3530c\") " pod="openstack/keystone-db-create-blnd4" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.738870 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-456dr\" (UniqueName: \"kubernetes.io/projected/0fb13f3a-3785-4650-8381-e4d5e6fa7f73-kube-api-access-456dr\") pod \"keystone-3614-account-create-update-dp5t6\" (UID: \"0fb13f3a-3785-4650-8381-e4d5e6fa7f73\") " pod="openstack/keystone-3614-account-create-update-dp5t6" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.739041 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0749652f-3995-4e34-ba17-55eac4c3530c-operator-scripts\") pod \"keystone-db-create-blnd4\" (UID: \"0749652f-3995-4e34-ba17-55eac4c3530c\") " pod="openstack/keystone-db-create-blnd4" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.740222 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0749652f-3995-4e34-ba17-55eac4c3530c-operator-scripts\") pod \"keystone-db-create-blnd4\" (UID: \"0749652f-3995-4e34-ba17-55eac4c3530c\") " pod="openstack/keystone-db-create-blnd4" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.740311 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fb13f3a-3785-4650-8381-e4d5e6fa7f73-operator-scripts\") pod \"keystone-3614-account-create-update-dp5t6\" (UID: \"0fb13f3a-3785-4650-8381-e4d5e6fa7f73\") " pod="openstack/keystone-3614-account-create-update-dp5t6" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.758276 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-456dr\" (UniqueName: \"kubernetes.io/projected/0fb13f3a-3785-4650-8381-e4d5e6fa7f73-kube-api-access-456dr\") pod \"keystone-3614-account-create-update-dp5t6\" (UID: \"0fb13f3a-3785-4650-8381-e4d5e6fa7f73\") " pod="openstack/keystone-3614-account-create-update-dp5t6" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.758366 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pwxt\" (UniqueName: \"kubernetes.io/projected/0749652f-3995-4e34-ba17-55eac4c3530c-kube-api-access-4pwxt\") pod \"keystone-db-create-blnd4\" (UID: \"0749652f-3995-4e34-ba17-55eac4c3530c\") " pod="openstack/keystone-db-create-blnd4" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.837936 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-blnd4" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.863247 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3614-account-create-update-dp5t6" Mar 20 08:44:17 crc kubenswrapper[5136]: I0320 08:44:17.300168 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-blnd4"] Mar 20 08:44:17 crc kubenswrapper[5136]: I0320 08:44:17.332965 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3614-account-create-update-dp5t6"] Mar 20 08:44:17 crc kubenswrapper[5136]: W0320 08:44:17.338651 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fb13f3a_3785_4650_8381_e4d5e6fa7f73.slice/crio-7e9bc12f90cc6a37b11e496a52c7be74ae88cbc41b583cfc8564a8d84beb7068 WatchSource:0}: Error finding container 7e9bc12f90cc6a37b11e496a52c7be74ae88cbc41b583cfc8564a8d84beb7068: Status 404 returned error can't find the container with id 7e9bc12f90cc6a37b11e496a52c7be74ae88cbc41b583cfc8564a8d84beb7068 Mar 20 08:44:18 crc kubenswrapper[5136]: I0320 08:44:18.195055 5136 generic.go:334] "Generic (PLEG): container finished" podID="0fb13f3a-3785-4650-8381-e4d5e6fa7f73" containerID="ae45294b801e93d47563db9ba4054a170a4f53699928ebbc069e3e19b4610e4f" exitCode=0 Mar 20 08:44:18 crc kubenswrapper[5136]: I0320 08:44:18.195146 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3614-account-create-update-dp5t6" event={"ID":"0fb13f3a-3785-4650-8381-e4d5e6fa7f73","Type":"ContainerDied","Data":"ae45294b801e93d47563db9ba4054a170a4f53699928ebbc069e3e19b4610e4f"} Mar 20 08:44:18 crc kubenswrapper[5136]: I0320 08:44:18.195309 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3614-account-create-update-dp5t6" event={"ID":"0fb13f3a-3785-4650-8381-e4d5e6fa7f73","Type":"ContainerStarted","Data":"7e9bc12f90cc6a37b11e496a52c7be74ae88cbc41b583cfc8564a8d84beb7068"} Mar 20 08:44:18 crc kubenswrapper[5136]: I0320 08:44:18.196771 5136 generic.go:334] "Generic (PLEG): container finished" 
podID="0749652f-3995-4e34-ba17-55eac4c3530c" containerID="943e6011fb2bb8f85aa7e1232523d7da6d707090421691ca85ab0e7998c29b98" exitCode=0 Mar 20 08:44:18 crc kubenswrapper[5136]: I0320 08:44:18.196803 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-blnd4" event={"ID":"0749652f-3995-4e34-ba17-55eac4c3530c","Type":"ContainerDied","Data":"943e6011fb2bb8f85aa7e1232523d7da6d707090421691ca85ab0e7998c29b98"} Mar 20 08:44:18 crc kubenswrapper[5136]: I0320 08:44:18.196841 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-blnd4" event={"ID":"0749652f-3995-4e34-ba17-55eac4c3530c","Type":"ContainerStarted","Data":"2ed2716e588c929d3d5d3b916211215f75c49b755000a58dfb712d9c8fa0d264"} Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.555800 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-blnd4" Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.566158 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3614-account-create-update-dp5t6" Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.606607 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fb13f3a-3785-4650-8381-e4d5e6fa7f73-operator-scripts\") pod \"0fb13f3a-3785-4650-8381-e4d5e6fa7f73\" (UID: \"0fb13f3a-3785-4650-8381-e4d5e6fa7f73\") " Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.606727 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-456dr\" (UniqueName: \"kubernetes.io/projected/0fb13f3a-3785-4650-8381-e4d5e6fa7f73-kube-api-access-456dr\") pod \"0fb13f3a-3785-4650-8381-e4d5e6fa7f73\" (UID: \"0fb13f3a-3785-4650-8381-e4d5e6fa7f73\") " Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.606758 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pwxt\" (UniqueName: \"kubernetes.io/projected/0749652f-3995-4e34-ba17-55eac4c3530c-kube-api-access-4pwxt\") pod \"0749652f-3995-4e34-ba17-55eac4c3530c\" (UID: \"0749652f-3995-4e34-ba17-55eac4c3530c\") " Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.607005 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0749652f-3995-4e34-ba17-55eac4c3530c-operator-scripts\") pod \"0749652f-3995-4e34-ba17-55eac4c3530c\" (UID: \"0749652f-3995-4e34-ba17-55eac4c3530c\") " Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.607587 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fb13f3a-3785-4650-8381-e4d5e6fa7f73-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0fb13f3a-3785-4650-8381-e4d5e6fa7f73" (UID: "0fb13f3a-3785-4650-8381-e4d5e6fa7f73"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.607786 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0749652f-3995-4e34-ba17-55eac4c3530c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0749652f-3995-4e34-ba17-55eac4c3530c" (UID: "0749652f-3995-4e34-ba17-55eac4c3530c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.613302 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0749652f-3995-4e34-ba17-55eac4c3530c-kube-api-access-4pwxt" (OuterVolumeSpecName: "kube-api-access-4pwxt") pod "0749652f-3995-4e34-ba17-55eac4c3530c" (UID: "0749652f-3995-4e34-ba17-55eac4c3530c"). InnerVolumeSpecName "kube-api-access-4pwxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.613369 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fb13f3a-3785-4650-8381-e4d5e6fa7f73-kube-api-access-456dr" (OuterVolumeSpecName: "kube-api-access-456dr") pod "0fb13f3a-3785-4650-8381-e4d5e6fa7f73" (UID: "0fb13f3a-3785-4650-8381-e4d5e6fa7f73"). InnerVolumeSpecName "kube-api-access-456dr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.709413 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-456dr\" (UniqueName: \"kubernetes.io/projected/0fb13f3a-3785-4650-8381-e4d5e6fa7f73-kube-api-access-456dr\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.709452 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pwxt\" (UniqueName: \"kubernetes.io/projected/0749652f-3995-4e34-ba17-55eac4c3530c-kube-api-access-4pwxt\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.709466 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0749652f-3995-4e34-ba17-55eac4c3530c-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.709493 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fb13f3a-3785-4650-8381-e4d5e6fa7f73-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:20 crc kubenswrapper[5136]: I0320 08:44:20.228199 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-blnd4" Mar 20 08:44:20 crc kubenswrapper[5136]: I0320 08:44:20.228200 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-blnd4" event={"ID":"0749652f-3995-4e34-ba17-55eac4c3530c","Type":"ContainerDied","Data":"2ed2716e588c929d3d5d3b916211215f75c49b755000a58dfb712d9c8fa0d264"} Mar 20 08:44:20 crc kubenswrapper[5136]: I0320 08:44:20.228329 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ed2716e588c929d3d5d3b916211215f75c49b755000a58dfb712d9c8fa0d264" Mar 20 08:44:20 crc kubenswrapper[5136]: I0320 08:44:20.230410 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3614-account-create-update-dp5t6" event={"ID":"0fb13f3a-3785-4650-8381-e4d5e6fa7f73","Type":"ContainerDied","Data":"7e9bc12f90cc6a37b11e496a52c7be74ae88cbc41b583cfc8564a8d84beb7068"} Mar 20 08:44:20 crc kubenswrapper[5136]: I0320 08:44:20.230448 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3614-account-create-update-dp5t6" Mar 20 08:44:20 crc kubenswrapper[5136]: I0320 08:44:20.230449 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e9bc12f90cc6a37b11e496a52c7be74ae88cbc41b583cfc8564a8d84beb7068" Mar 20 08:44:20 crc kubenswrapper[5136]: I0320 08:44:20.397582 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:44:20 crc kubenswrapper[5136]: E0320 08:44:20.398324 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:44:21 crc kubenswrapper[5136]: I0320 08:44:21.983572 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-62shw"] Mar 20 08:44:21 crc kubenswrapper[5136]: E0320 08:44:21.983975 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fb13f3a-3785-4650-8381-e4d5e6fa7f73" containerName="mariadb-account-create-update" Mar 20 08:44:21 crc kubenswrapper[5136]: I0320 08:44:21.983994 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fb13f3a-3785-4650-8381-e4d5e6fa7f73" containerName="mariadb-account-create-update" Mar 20 08:44:21 crc kubenswrapper[5136]: E0320 08:44:21.984014 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0749652f-3995-4e34-ba17-55eac4c3530c" containerName="mariadb-database-create" Mar 20 08:44:21 crc kubenswrapper[5136]: I0320 08:44:21.984021 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0749652f-3995-4e34-ba17-55eac4c3530c" containerName="mariadb-database-create" Mar 20 08:44:21 crc 
kubenswrapper[5136]: I0320 08:44:21.984175 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0749652f-3995-4e34-ba17-55eac4c3530c" containerName="mariadb-database-create" Mar 20 08:44:21 crc kubenswrapper[5136]: I0320 08:44:21.984188 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fb13f3a-3785-4650-8381-e4d5e6fa7f73" containerName="mariadb-account-create-update" Mar 20 08:44:21 crc kubenswrapper[5136]: I0320 08:44:21.984691 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-62shw" Mar 20 08:44:21 crc kubenswrapper[5136]: I0320 08:44:21.986505 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bcnhn" Mar 20 08:44:21 crc kubenswrapper[5136]: I0320 08:44:21.988467 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 20 08:44:21 crc kubenswrapper[5136]: I0320 08:44:21.989893 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 20 08:44:21 crc kubenswrapper[5136]: I0320 08:44:21.990202 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 20 08:44:21 crc kubenswrapper[5136]: I0320 08:44:21.999433 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-62shw"] Mar 20 08:44:22 crc kubenswrapper[5136]: I0320 08:44:22.062748 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e9b60d-f307-406d-9085-fbd9d8b67cf5-config-data\") pod \"keystone-db-sync-62shw\" (UID: \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\") " pod="openstack/keystone-db-sync-62shw" Mar 20 08:44:22 crc kubenswrapper[5136]: I0320 08:44:22.062913 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/21e9b60d-f307-406d-9085-fbd9d8b67cf5-combined-ca-bundle\") pod \"keystone-db-sync-62shw\" (UID: \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\") " pod="openstack/keystone-db-sync-62shw" Mar 20 08:44:22 crc kubenswrapper[5136]: I0320 08:44:22.062955 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwsfv\" (UniqueName: \"kubernetes.io/projected/21e9b60d-f307-406d-9085-fbd9d8b67cf5-kube-api-access-fwsfv\") pod \"keystone-db-sync-62shw\" (UID: \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\") " pod="openstack/keystone-db-sync-62shw" Mar 20 08:44:22 crc kubenswrapper[5136]: I0320 08:44:22.165670 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwsfv\" (UniqueName: \"kubernetes.io/projected/21e9b60d-f307-406d-9085-fbd9d8b67cf5-kube-api-access-fwsfv\") pod \"keystone-db-sync-62shw\" (UID: \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\") " pod="openstack/keystone-db-sync-62shw" Mar 20 08:44:22 crc kubenswrapper[5136]: I0320 08:44:22.165781 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e9b60d-f307-406d-9085-fbd9d8b67cf5-config-data\") pod \"keystone-db-sync-62shw\" (UID: \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\") " pod="openstack/keystone-db-sync-62shw" Mar 20 08:44:22 crc kubenswrapper[5136]: I0320 08:44:22.165889 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e9b60d-f307-406d-9085-fbd9d8b67cf5-combined-ca-bundle\") pod \"keystone-db-sync-62shw\" (UID: \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\") " pod="openstack/keystone-db-sync-62shw" Mar 20 08:44:22 crc kubenswrapper[5136]: I0320 08:44:22.176730 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e9b60d-f307-406d-9085-fbd9d8b67cf5-config-data\") pod 
\"keystone-db-sync-62shw\" (UID: \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\") " pod="openstack/keystone-db-sync-62shw" Mar 20 08:44:22 crc kubenswrapper[5136]: I0320 08:44:22.181421 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e9b60d-f307-406d-9085-fbd9d8b67cf5-combined-ca-bundle\") pod \"keystone-db-sync-62shw\" (UID: \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\") " pod="openstack/keystone-db-sync-62shw" Mar 20 08:44:22 crc kubenswrapper[5136]: I0320 08:44:22.208457 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwsfv\" (UniqueName: \"kubernetes.io/projected/21e9b60d-f307-406d-9085-fbd9d8b67cf5-kube-api-access-fwsfv\") pod \"keystone-db-sync-62shw\" (UID: \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\") " pod="openstack/keystone-db-sync-62shw" Mar 20 08:44:22 crc kubenswrapper[5136]: I0320 08:44:22.368797 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-62shw" Mar 20 08:44:22 crc kubenswrapper[5136]: I0320 08:44:22.792384 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-62shw"] Mar 20 08:44:23 crc kubenswrapper[5136]: I0320 08:44:23.253363 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-62shw" event={"ID":"21e9b60d-f307-406d-9085-fbd9d8b67cf5","Type":"ContainerStarted","Data":"549f9ebb1b138869c8af30c58ac84b76e50c7d4cdb473ff81b9c92aa5b441e01"} Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.289257 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-62shw" event={"ID":"21e9b60d-f307-406d-9085-fbd9d8b67cf5","Type":"ContainerStarted","Data":"ea20f727b60adf5691fc1981831b3690eb78cdb64d09999efea953786d4a4eb5"} Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.314334 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-db-sync-62shw" podStartSLOduration=2.467345602 podStartE2EDuration="7.31430709s" podCreationTimestamp="2026-03-20 08:44:21 +0000 UTC" firstStartedPulling="2026-03-20 08:44:22.7992089 +0000 UTC m=+6895.058520081" lastFinishedPulling="2026-03-20 08:44:27.646170408 +0000 UTC m=+6899.905481569" observedRunningTime="2026-03-20 08:44:28.312242496 +0000 UTC m=+6900.571553657" watchObservedRunningTime="2026-03-20 08:44:28.31430709 +0000 UTC m=+6900.573618271" Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.677802 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2kvt9"] Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.681454 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2kvt9" Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.691416 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2kvt9"] Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.777482 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37fd264e-9020-4030-9f75-946d4f31cab0-utilities\") pod \"redhat-operators-2kvt9\" (UID: \"37fd264e-9020-4030-9f75-946d4f31cab0\") " pod="openshift-marketplace/redhat-operators-2kvt9" Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.777550 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps95x\" (UniqueName: \"kubernetes.io/projected/37fd264e-9020-4030-9f75-946d4f31cab0-kube-api-access-ps95x\") pod \"redhat-operators-2kvt9\" (UID: \"37fd264e-9020-4030-9f75-946d4f31cab0\") " pod="openshift-marketplace/redhat-operators-2kvt9" Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.777594 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37fd264e-9020-4030-9f75-946d4f31cab0-catalog-content\") pod \"redhat-operators-2kvt9\" (UID: \"37fd264e-9020-4030-9f75-946d4f31cab0\") " pod="openshift-marketplace/redhat-operators-2kvt9" Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.878584 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37fd264e-9020-4030-9f75-946d4f31cab0-utilities\") pod \"redhat-operators-2kvt9\" (UID: \"37fd264e-9020-4030-9f75-946d4f31cab0\") " pod="openshift-marketplace/redhat-operators-2kvt9" Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.878670 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps95x\" (UniqueName: \"kubernetes.io/projected/37fd264e-9020-4030-9f75-946d4f31cab0-kube-api-access-ps95x\") pod \"redhat-operators-2kvt9\" (UID: \"37fd264e-9020-4030-9f75-946d4f31cab0\") " pod="openshift-marketplace/redhat-operators-2kvt9" Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.878731 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37fd264e-9020-4030-9f75-946d4f31cab0-catalog-content\") pod \"redhat-operators-2kvt9\" (UID: \"37fd264e-9020-4030-9f75-946d4f31cab0\") " pod="openshift-marketplace/redhat-operators-2kvt9" Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.879402 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37fd264e-9020-4030-9f75-946d4f31cab0-catalog-content\") pod \"redhat-operators-2kvt9\" (UID: \"37fd264e-9020-4030-9f75-946d4f31cab0\") " pod="openshift-marketplace/redhat-operators-2kvt9" Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.880075 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/37fd264e-9020-4030-9f75-946d4f31cab0-utilities\") pod \"redhat-operators-2kvt9\" (UID: \"37fd264e-9020-4030-9f75-946d4f31cab0\") " pod="openshift-marketplace/redhat-operators-2kvt9" Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.912108 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps95x\" (UniqueName: \"kubernetes.io/projected/37fd264e-9020-4030-9f75-946d4f31cab0-kube-api-access-ps95x\") pod \"redhat-operators-2kvt9\" (UID: \"37fd264e-9020-4030-9f75-946d4f31cab0\") " pod="openshift-marketplace/redhat-operators-2kvt9" Mar 20 08:44:29 crc kubenswrapper[5136]: I0320 08:44:29.005305 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2kvt9" Mar 20 08:44:29 crc kubenswrapper[5136]: I0320 08:44:29.278292 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 20 08:44:29 crc kubenswrapper[5136]: W0320 08:44:29.451379 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37fd264e_9020_4030_9f75_946d4f31cab0.slice/crio-3fe0fe28b244286c6b840a87150082973c5021674f51e6c65a29f0d0dfa6ccf7 WatchSource:0}: Error finding container 3fe0fe28b244286c6b840a87150082973c5021674f51e6c65a29f0d0dfa6ccf7: Status 404 returned error can't find the container with id 3fe0fe28b244286c6b840a87150082973c5021674f51e6c65a29f0d0dfa6ccf7 Mar 20 08:44:29 crc kubenswrapper[5136]: I0320 08:44:29.463674 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2kvt9"] Mar 20 08:44:30 crc kubenswrapper[5136]: I0320 08:44:30.305462 5136 generic.go:334] "Generic (PLEG): container finished" podID="37fd264e-9020-4030-9f75-946d4f31cab0" containerID="d4b5aab2d242a757862e146aff41cc85a5d4d33a3dd085bf00cfa5846381a42d" exitCode=0 Mar 20 08:44:30 crc kubenswrapper[5136]: I0320 08:44:30.305550 5136 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kvt9" event={"ID":"37fd264e-9020-4030-9f75-946d4f31cab0","Type":"ContainerDied","Data":"d4b5aab2d242a757862e146aff41cc85a5d4d33a3dd085bf00cfa5846381a42d"} Mar 20 08:44:30 crc kubenswrapper[5136]: I0320 08:44:30.305598 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kvt9" event={"ID":"37fd264e-9020-4030-9f75-946d4f31cab0","Type":"ContainerStarted","Data":"3fe0fe28b244286c6b840a87150082973c5021674f51e6c65a29f0d0dfa6ccf7"} Mar 20 08:44:30 crc kubenswrapper[5136]: I0320 08:44:30.306910 5136 generic.go:334] "Generic (PLEG): container finished" podID="21e9b60d-f307-406d-9085-fbd9d8b67cf5" containerID="ea20f727b60adf5691fc1981831b3690eb78cdb64d09999efea953786d4a4eb5" exitCode=0 Mar 20 08:44:30 crc kubenswrapper[5136]: I0320 08:44:30.306946 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-62shw" event={"ID":"21e9b60d-f307-406d-9085-fbd9d8b67cf5","Type":"ContainerDied","Data":"ea20f727b60adf5691fc1981831b3690eb78cdb64d09999efea953786d4a4eb5"} Mar 20 08:44:31 crc kubenswrapper[5136]: I0320 08:44:31.316294 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kvt9" event={"ID":"37fd264e-9020-4030-9f75-946d4f31cab0","Type":"ContainerStarted","Data":"02469795a64a79faedad55246771d8971c722d27d35c2a9e11e5225023dc40f4"} Mar 20 08:44:31 crc kubenswrapper[5136]: I0320 08:44:31.645332 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-62shw" Mar 20 08:44:31 crc kubenswrapper[5136]: I0320 08:44:31.721076 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwsfv\" (UniqueName: \"kubernetes.io/projected/21e9b60d-f307-406d-9085-fbd9d8b67cf5-kube-api-access-fwsfv\") pod \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\" (UID: \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\") " Mar 20 08:44:31 crc kubenswrapper[5136]: I0320 08:44:31.721168 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e9b60d-f307-406d-9085-fbd9d8b67cf5-combined-ca-bundle\") pod \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\" (UID: \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\") " Mar 20 08:44:31 crc kubenswrapper[5136]: I0320 08:44:31.721317 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e9b60d-f307-406d-9085-fbd9d8b67cf5-config-data\") pod \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\" (UID: \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\") " Mar 20 08:44:31 crc kubenswrapper[5136]: I0320 08:44:31.725921 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21e9b60d-f307-406d-9085-fbd9d8b67cf5-kube-api-access-fwsfv" (OuterVolumeSpecName: "kube-api-access-fwsfv") pod "21e9b60d-f307-406d-9085-fbd9d8b67cf5" (UID: "21e9b60d-f307-406d-9085-fbd9d8b67cf5"). InnerVolumeSpecName "kube-api-access-fwsfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:44:31 crc kubenswrapper[5136]: I0320 08:44:31.746891 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e9b60d-f307-406d-9085-fbd9d8b67cf5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21e9b60d-f307-406d-9085-fbd9d8b67cf5" (UID: "21e9b60d-f307-406d-9085-fbd9d8b67cf5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:44:31 crc kubenswrapper[5136]: I0320 08:44:31.777279 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e9b60d-f307-406d-9085-fbd9d8b67cf5-config-data" (OuterVolumeSpecName: "config-data") pod "21e9b60d-f307-406d-9085-fbd9d8b67cf5" (UID: "21e9b60d-f307-406d-9085-fbd9d8b67cf5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:44:31 crc kubenswrapper[5136]: I0320 08:44:31.823352 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwsfv\" (UniqueName: \"kubernetes.io/projected/21e9b60d-f307-406d-9085-fbd9d8b67cf5-kube-api-access-fwsfv\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:31 crc kubenswrapper[5136]: I0320 08:44:31.823390 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e9b60d-f307-406d-9085-fbd9d8b67cf5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:31 crc kubenswrapper[5136]: I0320 08:44:31.823399 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e9b60d-f307-406d-9085-fbd9d8b67cf5-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.331654 5136 generic.go:334] "Generic (PLEG): container finished" podID="37fd264e-9020-4030-9f75-946d4f31cab0" containerID="02469795a64a79faedad55246771d8971c722d27d35c2a9e11e5225023dc40f4" exitCode=0 Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.331760 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kvt9" event={"ID":"37fd264e-9020-4030-9f75-946d4f31cab0","Type":"ContainerDied","Data":"02469795a64a79faedad55246771d8971c722d27d35c2a9e11e5225023dc40f4"} Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.339271 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-db-sync-62shw" event={"ID":"21e9b60d-f307-406d-9085-fbd9d8b67cf5","Type":"ContainerDied","Data":"549f9ebb1b138869c8af30c58ac84b76e50c7d4cdb473ff81b9c92aa5b441e01"} Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.339313 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="549f9ebb1b138869c8af30c58ac84b76e50c7d4cdb473ff81b9c92aa5b441e01" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.339356 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-62shw" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.579318 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-777959d579-j5npb"] Mar 20 08:44:32 crc kubenswrapper[5136]: E0320 08:44:32.579756 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e9b60d-f307-406d-9085-fbd9d8b67cf5" containerName="keystone-db-sync" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.579781 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e9b60d-f307-406d-9085-fbd9d8b67cf5" containerName="keystone-db-sync" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.579976 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e9b60d-f307-406d-9085-fbd9d8b67cf5" containerName="keystone-db-sync" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.580997 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-777959d579-j5npb" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.600594 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-777959d579-j5npb"] Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.636255 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-ovsdbserver-sb\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.636327 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-ovsdbserver-nb\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.636359 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzdjb\" (UniqueName: \"kubernetes.io/projected/bd5e6126-8bb0-497c-9a3a-856e96128e83-kube-api-access-hzdjb\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.636457 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-config\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.636493 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-dns-svc\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.642299 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-l4bpw"] Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.643346 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l4bpw" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.647474 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.647529 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.647711 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.647804 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.652291 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bcnhn" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.662191 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l4bpw"] Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.738098 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-ovsdbserver-sb\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb" 
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.738158 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-config-data\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.738181 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-fernet-keys\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.738217 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-ovsdbserver-nb\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.738243 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzdjb\" (UniqueName: \"kubernetes.io/projected/bd5e6126-8bb0-497c-9a3a-856e96128e83-kube-api-access-hzdjb\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.738274 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-credential-keys\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw" Mar 20 08:44:32 crc 
kubenswrapper[5136]: I0320 08:44:32.738321 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-scripts\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.738356 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-combined-ca-bundle\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.738378 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx5jm\" (UniqueName: \"kubernetes.io/projected/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-kube-api-access-zx5jm\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.738416 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-config\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.738448 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-dns-svc\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.739459 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-dns-svc\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.740191 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-ovsdbserver-sb\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.740792 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-ovsdbserver-nb\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.741565 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-config\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.775988 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzdjb\" (UniqueName: \"kubernetes.io/projected/bd5e6126-8bb0-497c-9a3a-856e96128e83-kube-api-access-hzdjb\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.839903 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-config-data\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.839941 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-fernet-keys\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.839983 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-credential-keys\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.840022 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-scripts\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.840049 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-combined-ca-bundle\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.840064 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx5jm\" (UniqueName: \"kubernetes.io/projected/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-kube-api-access-zx5jm\") pod 
\"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.844018 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-scripts\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.844182 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-fernet-keys\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.844925 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-credential-keys\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.845370 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-config-data\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.854356 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-combined-ca-bundle\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw" Mar 20 08:44:32 crc kubenswrapper[5136]: 
I0320 08:44:32.856050 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx5jm\" (UniqueName: \"kubernetes.io/projected/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-kube-api-access-zx5jm\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.899278 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-777959d579-j5npb" Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.969055 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l4bpw" Mar 20 08:44:33 crc kubenswrapper[5136]: I0320 08:44:33.346698 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-777959d579-j5npb"] Mar 20 08:44:33 crc kubenswrapper[5136]: I0320 08:44:33.348572 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kvt9" event={"ID":"37fd264e-9020-4030-9f75-946d4f31cab0","Type":"ContainerStarted","Data":"1bad74a1748dcb96163246cfacb414b90dca204d11fe29b6414f5067f21c54b7"} Mar 20 08:44:33 crc kubenswrapper[5136]: W0320 08:44:33.354131 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd5e6126_8bb0_497c_9a3a_856e96128e83.slice/crio-148add14ed98710953a69caa68c00199c9d76d0d24f031860e3e3fd6ef37946b WatchSource:0}: Error finding container 148add14ed98710953a69caa68c00199c9d76d0d24f031860e3e3fd6ef37946b: Status 404 returned error can't find the container with id 148add14ed98710953a69caa68c00199c9d76d0d24f031860e3e3fd6ef37946b Mar 20 08:44:33 crc kubenswrapper[5136]: I0320 08:44:33.369469 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2kvt9" podStartSLOduration=2.884124474 podStartE2EDuration="5.369445717s" 
podCreationTimestamp="2026-03-20 08:44:28 +0000 UTC" firstStartedPulling="2026-03-20 08:44:30.307831154 +0000 UTC m=+6902.567142305" lastFinishedPulling="2026-03-20 08:44:32.793152397 +0000 UTC m=+6905.052463548" observedRunningTime="2026-03-20 08:44:33.365102683 +0000 UTC m=+6905.624413834" watchObservedRunningTime="2026-03-20 08:44:33.369445717 +0000 UTC m=+6905.628756868" Mar 20 08:44:33 crc kubenswrapper[5136]: I0320 08:44:33.485107 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l4bpw"] Mar 20 08:44:34 crc kubenswrapper[5136]: I0320 08:44:34.366264 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l4bpw" event={"ID":"bf9fd65f-edc7-45c1-9503-1eb4386d5f38","Type":"ContainerStarted","Data":"c0135a379aa43c0ef1ad29a602be6db0384857bc32c56cdc2b2d0040cfa4649a"} Mar 20 08:44:34 crc kubenswrapper[5136]: I0320 08:44:34.366588 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l4bpw" event={"ID":"bf9fd65f-edc7-45c1-9503-1eb4386d5f38","Type":"ContainerStarted","Data":"3239bd249ccd137b449e7ffafd6142d9ad57319034a579ce14aaaed982e8bcdb"} Mar 20 08:44:34 crc kubenswrapper[5136]: I0320 08:44:34.373014 5136 generic.go:334] "Generic (PLEG): container finished" podID="bd5e6126-8bb0-497c-9a3a-856e96128e83" containerID="721920444b40b4f18631c1504b8c2ada795b095948db82ee30360288e3661217" exitCode=0 Mar 20 08:44:34 crc kubenswrapper[5136]: I0320 08:44:34.373071 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-777959d579-j5npb" event={"ID":"bd5e6126-8bb0-497c-9a3a-856e96128e83","Type":"ContainerDied","Data":"721920444b40b4f18631c1504b8c2ada795b095948db82ee30360288e3661217"} Mar 20 08:44:34 crc kubenswrapper[5136]: I0320 08:44:34.373147 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-777959d579-j5npb" 
event={"ID":"bd5e6126-8bb0-497c-9a3a-856e96128e83","Type":"ContainerStarted","Data":"148add14ed98710953a69caa68c00199c9d76d0d24f031860e3e3fd6ef37946b"} Mar 20 08:44:34 crc kubenswrapper[5136]: I0320 08:44:34.385096 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-l4bpw" podStartSLOduration=2.385052279 podStartE2EDuration="2.385052279s" podCreationTimestamp="2026-03-20 08:44:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:44:34.378984532 +0000 UTC m=+6906.638295683" watchObservedRunningTime="2026-03-20 08:44:34.385052279 +0000 UTC m=+6906.644363440" Mar 20 08:44:35 crc kubenswrapper[5136]: I0320 08:44:35.385899 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-777959d579-j5npb" event={"ID":"bd5e6126-8bb0-497c-9a3a-856e96128e83","Type":"ContainerStarted","Data":"f9236230b9fa7da8ff26e29b3fe972125ee08ee7e480a9780d799c8339c58288"} Mar 20 08:44:35 crc kubenswrapper[5136]: I0320 08:44:35.396564 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:44:35 crc kubenswrapper[5136]: E0320 08:44:35.396776 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:44:35 crc kubenswrapper[5136]: I0320 08:44:35.423643 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-777959d579-j5npb" podStartSLOduration=3.423625973 podStartE2EDuration="3.423625973s" podCreationTimestamp="2026-03-20 08:44:32 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:44:35.420344682 +0000 UTC m=+6907.679655833" watchObservedRunningTime="2026-03-20 08:44:35.423625973 +0000 UTC m=+6907.682937124" Mar 20 08:44:36 crc kubenswrapper[5136]: I0320 08:44:36.391642 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-777959d579-j5npb" Mar 20 08:44:38 crc kubenswrapper[5136]: I0320 08:44:38.411623 5136 generic.go:334] "Generic (PLEG): container finished" podID="bf9fd65f-edc7-45c1-9503-1eb4386d5f38" containerID="c0135a379aa43c0ef1ad29a602be6db0384857bc32c56cdc2b2d0040cfa4649a" exitCode=0 Mar 20 08:44:38 crc kubenswrapper[5136]: I0320 08:44:38.418631 5136 scope.go:117] "RemoveContainer" containerID="e29edc0f4375ac391060cb753f50bdb9915298f531076d0c17e85a24815a777f" Mar 20 08:44:38 crc kubenswrapper[5136]: I0320 08:44:38.419573 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l4bpw" event={"ID":"bf9fd65f-edc7-45c1-9503-1eb4386d5f38","Type":"ContainerDied","Data":"c0135a379aa43c0ef1ad29a602be6db0384857bc32c56cdc2b2d0040cfa4649a"} Mar 20 08:44:38 crc kubenswrapper[5136]: I0320 08:44:38.478364 5136 scope.go:117] "RemoveContainer" containerID="8c3b72a05088d18b83e8fd4c523a6250996da641765be00333d607a1dfe71673" Mar 20 08:44:38 crc kubenswrapper[5136]: I0320 08:44:38.493405 5136 scope.go:117] "RemoveContainer" containerID="02dd795cb150362efe906bc099f470a71a335d9458efc922a23eb6c04569901e" Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.006361 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2kvt9" Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.006649 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2kvt9" Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.753599 5136 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l4bpw" Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.783604 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zx5jm\" (UniqueName: \"kubernetes.io/projected/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-kube-api-access-zx5jm\") pod \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.783713 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-scripts\") pod \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.783764 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-config-data\") pod \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.783846 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-fernet-keys\") pod \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.783887 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-combined-ca-bundle\") pod \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.784035 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-credential-keys\") pod \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.789576 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-scripts" (OuterVolumeSpecName: "scripts") pod "bf9fd65f-edc7-45c1-9503-1eb4386d5f38" (UID: "bf9fd65f-edc7-45c1-9503-1eb4386d5f38"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.789946 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "bf9fd65f-edc7-45c1-9503-1eb4386d5f38" (UID: "bf9fd65f-edc7-45c1-9503-1eb4386d5f38"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.790621 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-kube-api-access-zx5jm" (OuterVolumeSpecName: "kube-api-access-zx5jm") pod "bf9fd65f-edc7-45c1-9503-1eb4386d5f38" (UID: "bf9fd65f-edc7-45c1-9503-1eb4386d5f38"). InnerVolumeSpecName "kube-api-access-zx5jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.791297 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bf9fd65f-edc7-45c1-9503-1eb4386d5f38" (UID: "bf9fd65f-edc7-45c1-9503-1eb4386d5f38"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.807032 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf9fd65f-edc7-45c1-9503-1eb4386d5f38" (UID: "bf9fd65f-edc7-45c1-9503-1eb4386d5f38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.810734 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-config-data" (OuterVolumeSpecName: "config-data") pod "bf9fd65f-edc7-45c1-9503-1eb4386d5f38" (UID: "bf9fd65f-edc7-45c1-9503-1eb4386d5f38"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.886287 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zx5jm\" (UniqueName: \"kubernetes.io/projected/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-kube-api-access-zx5jm\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.886323 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.886333 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.886343 5136 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:39 crc 
kubenswrapper[5136]: I0320 08:44:39.886352 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.886359 5136 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.052385 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2kvt9" podUID="37fd264e-9020-4030-9f75-946d4f31cab0" containerName="registry-server" probeResult="failure" output=< Mar 20 08:44:40 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 08:44:40 crc kubenswrapper[5136]: > Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.450024 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l4bpw" event={"ID":"bf9fd65f-edc7-45c1-9503-1eb4386d5f38","Type":"ContainerDied","Data":"3239bd249ccd137b449e7ffafd6142d9ad57319034a579ce14aaaed982e8bcdb"} Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.450067 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3239bd249ccd137b449e7ffafd6142d9ad57319034a579ce14aaaed982e8bcdb" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.450252 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-l4bpw" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.506170 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-l4bpw"] Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.513280 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-l4bpw"] Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.603978 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-645md"] Mar 20 08:44:40 crc kubenswrapper[5136]: E0320 08:44:40.604425 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf9fd65f-edc7-45c1-9503-1eb4386d5f38" containerName="keystone-bootstrap" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.604452 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf9fd65f-edc7-45c1-9503-1eb4386d5f38" containerName="keystone-bootstrap" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.604686 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf9fd65f-edc7-45c1-9503-1eb4386d5f38" containerName="keystone-bootstrap" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.605349 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.610370 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.610599 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bcnhn" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.610667 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.610697 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.610914 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.630853 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-645md"] Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.698700 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-fernet-keys\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.699062 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-credential-keys\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.699131 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9xh85\" (UniqueName: \"kubernetes.io/projected/e023c878-7ddf-478a-9069-85d32b1d5bf9-kube-api-access-9xh85\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.699196 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-scripts\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.699287 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-config-data\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.699306 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-combined-ca-bundle\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.800582 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-fernet-keys\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.800628 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-credential-keys\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.800657 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xh85\" (UniqueName: \"kubernetes.io/projected/e023c878-7ddf-478a-9069-85d32b1d5bf9-kube-api-access-9xh85\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.800702 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-scripts\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.800742 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-config-data\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.800761 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-combined-ca-bundle\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.807480 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-fernet-keys\") pod \"keystone-bootstrap-645md\" (UID: 
\"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.808088 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-config-data\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.809139 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-combined-ca-bundle\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.809222 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-scripts\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.812312 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-credential-keys\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.830641 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xh85\" (UniqueName: \"kubernetes.io/projected/e023c878-7ddf-478a-9069-85d32b1d5bf9-kube-api-access-9xh85\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 
08:44:40.938282 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:41 crc kubenswrapper[5136]: I0320 08:44:41.406477 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-645md"] Mar 20 08:44:41 crc kubenswrapper[5136]: W0320 08:44:41.410876 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode023c878_7ddf_478a_9069_85d32b1d5bf9.slice/crio-cf4a6ec30ea0591f9919e3d8394d5bb2bf3df4413bb6658ef450b2853063fe87 WatchSource:0}: Error finding container cf4a6ec30ea0591f9919e3d8394d5bb2bf3df4413bb6658ef450b2853063fe87: Status 404 returned error can't find the container with id cf4a6ec30ea0591f9919e3d8394d5bb2bf3df4413bb6658ef450b2853063fe87 Mar 20 08:44:41 crc kubenswrapper[5136]: I0320 08:44:41.464258 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-645md" event={"ID":"e023c878-7ddf-478a-9069-85d32b1d5bf9","Type":"ContainerStarted","Data":"cf4a6ec30ea0591f9919e3d8394d5bb2bf3df4413bb6658ef450b2853063fe87"} Mar 20 08:44:42 crc kubenswrapper[5136]: I0320 08:44:42.411027 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf9fd65f-edc7-45c1-9503-1eb4386d5f38" path="/var/lib/kubelet/pods/bf9fd65f-edc7-45c1-9503-1eb4386d5f38/volumes" Mar 20 08:44:42 crc kubenswrapper[5136]: I0320 08:44:42.472326 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-645md" event={"ID":"e023c878-7ddf-478a-9069-85d32b1d5bf9","Type":"ContainerStarted","Data":"7a4fb348e084d0c108a5953823b245c56381a86254bd1ec8ccef0cb8f458e61f"} Mar 20 08:44:42 crc kubenswrapper[5136]: I0320 08:44:42.493259 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-645md" podStartSLOduration=2.493238616 podStartE2EDuration="2.493238616s" podCreationTimestamp="2026-03-20 08:44:40 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:44:42.486210268 +0000 UTC m=+6914.745521429" watchObservedRunningTime="2026-03-20 08:44:42.493238616 +0000 UTC m=+6914.752549767" Mar 20 08:44:42 crc kubenswrapper[5136]: I0320 08:44:42.901139 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-777959d579-j5npb" Mar 20 08:44:42 crc kubenswrapper[5136]: I0320 08:44:42.962111 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d68955ff-c2m5x"] Mar 20 08:44:42 crc kubenswrapper[5136]: I0320 08:44:42.962345 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" podUID="80ba4d56-2bee-4ab9-9acd-c7588d675a4b" containerName="dnsmasq-dns" containerID="cri-o://fa96e658106264d429d004b45b4bacbef4dfbc220f9941975579c5341fa77132" gracePeriod=10 Mar 20 08:44:43 crc kubenswrapper[5136]: I0320 08:44:43.482596 5136 generic.go:334] "Generic (PLEG): container finished" podID="80ba4d56-2bee-4ab9-9acd-c7588d675a4b" containerID="fa96e658106264d429d004b45b4bacbef4dfbc220f9941975579c5341fa77132" exitCode=0 Mar 20 08:44:43 crc kubenswrapper[5136]: I0320 08:44:43.482746 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" event={"ID":"80ba4d56-2bee-4ab9-9acd-c7588d675a4b","Type":"ContainerDied","Data":"fa96e658106264d429d004b45b4bacbef4dfbc220f9941975579c5341fa77132"} Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.176283 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.370106 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-config\") pod \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.370168 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-dns-svc\") pod \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.370227 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-ovsdbserver-nb\") pod \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.370871 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-ovsdbserver-sb\") pod \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.370920 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snk2j\" (UniqueName: \"kubernetes.io/projected/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-kube-api-access-snk2j\") pod \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.385064 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-kube-api-access-snk2j" (OuterVolumeSpecName: "kube-api-access-snk2j") pod "80ba4d56-2bee-4ab9-9acd-c7588d675a4b" (UID: "80ba4d56-2bee-4ab9-9acd-c7588d675a4b"). InnerVolumeSpecName "kube-api-access-snk2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.473580 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "80ba4d56-2bee-4ab9-9acd-c7588d675a4b" (UID: "80ba4d56-2bee-4ab9-9acd-c7588d675a4b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.473663 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snk2j\" (UniqueName: \"kubernetes.io/projected/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-kube-api-access-snk2j\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.477186 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "80ba4d56-2bee-4ab9-9acd-c7588d675a4b" (UID: "80ba4d56-2bee-4ab9-9acd-c7588d675a4b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.486314 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "80ba4d56-2bee-4ab9-9acd-c7588d675a4b" (UID: "80ba4d56-2bee-4ab9-9acd-c7588d675a4b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.497437 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.502220 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-config" (OuterVolumeSpecName: "config") pod "80ba4d56-2bee-4ab9-9acd-c7588d675a4b" (UID: "80ba4d56-2bee-4ab9-9acd-c7588d675a4b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.566966 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" event={"ID":"80ba4d56-2bee-4ab9-9acd-c7588d675a4b","Type":"ContainerDied","Data":"167230fdb5149a52edd1822e02c35390c6ca18f21c97fbb6e377a5b71350b091"} Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.567031 5136 scope.go:117] "RemoveContainer" containerID="fa96e658106264d429d004b45b4bacbef4dfbc220f9941975579c5341fa77132" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.574920 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.574961 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.574973 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.574983 5136 
reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.638418 5136 scope.go:117] "RemoveContainer" containerID="e3d5db568a0a051b325af6f8c22b6c105820123adce2d9ee29ab549861506fd4" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.830161 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d68955ff-c2m5x"] Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.837316 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d68955ff-c2m5x"] Mar 20 08:44:45 crc kubenswrapper[5136]: I0320 08:44:45.508856 5136 generic.go:334] "Generic (PLEG): container finished" podID="e023c878-7ddf-478a-9069-85d32b1d5bf9" containerID="7a4fb348e084d0c108a5953823b245c56381a86254bd1ec8ccef0cb8f458e61f" exitCode=0 Mar 20 08:44:45 crc kubenswrapper[5136]: I0320 08:44:45.508923 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-645md" event={"ID":"e023c878-7ddf-478a-9069-85d32b1d5bf9","Type":"ContainerDied","Data":"7a4fb348e084d0c108a5953823b245c56381a86254bd1ec8ccef0cb8f458e61f"} Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.407379 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80ba4d56-2bee-4ab9-9acd-c7588d675a4b" path="/var/lib/kubelet/pods/80ba4d56-2bee-4ab9-9acd-c7588d675a4b/volumes" Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.803190 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.809378 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-credential-keys\") pod \"e023c878-7ddf-478a-9069-85d32b1d5bf9\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.809485 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xh85\" (UniqueName: \"kubernetes.io/projected/e023c878-7ddf-478a-9069-85d32b1d5bf9-kube-api-access-9xh85\") pod \"e023c878-7ddf-478a-9069-85d32b1d5bf9\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.809546 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-config-data\") pod \"e023c878-7ddf-478a-9069-85d32b1d5bf9\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.809624 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-fernet-keys\") pod \"e023c878-7ddf-478a-9069-85d32b1d5bf9\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.815004 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e023c878-7ddf-478a-9069-85d32b1d5bf9" (UID: "e023c878-7ddf-478a-9069-85d32b1d5bf9"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.820341 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e023c878-7ddf-478a-9069-85d32b1d5bf9" (UID: "e023c878-7ddf-478a-9069-85d32b1d5bf9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.825127 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e023c878-7ddf-478a-9069-85d32b1d5bf9-kube-api-access-9xh85" (OuterVolumeSpecName: "kube-api-access-9xh85") pod "e023c878-7ddf-478a-9069-85d32b1d5bf9" (UID: "e023c878-7ddf-478a-9069-85d32b1d5bf9"). InnerVolumeSpecName "kube-api-access-9xh85". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.837404 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-config-data" (OuterVolumeSpecName: "config-data") pod "e023c878-7ddf-478a-9069-85d32b1d5bf9" (UID: "e023c878-7ddf-478a-9069-85d32b1d5bf9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.911884 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-combined-ca-bundle\") pod \"e023c878-7ddf-478a-9069-85d32b1d5bf9\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.911965 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-scripts\") pod \"e023c878-7ddf-478a-9069-85d32b1d5bf9\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.912495 5136 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.912518 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xh85\" (UniqueName: \"kubernetes.io/projected/e023c878-7ddf-478a-9069-85d32b1d5bf9-kube-api-access-9xh85\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.912528 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.912537 5136 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.918987 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-scripts" (OuterVolumeSpecName: "scripts") pod "e023c878-7ddf-478a-9069-85d32b1d5bf9" (UID: "e023c878-7ddf-478a-9069-85d32b1d5bf9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.934468 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e023c878-7ddf-478a-9069-85d32b1d5bf9" (UID: "e023c878-7ddf-478a-9069-85d32b1d5bf9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.014291 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.014335 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.396506 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:44:47 crc kubenswrapper[5136]: E0320 08:44:47.396747 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.527482 5136 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-645md" event={"ID":"e023c878-7ddf-478a-9069-85d32b1d5bf9","Type":"ContainerDied","Data":"cf4a6ec30ea0591f9919e3d8394d5bb2bf3df4413bb6658ef450b2853063fe87"} Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.527528 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf4a6ec30ea0591f9919e3d8394d5bb2bf3df4413bb6658ef450b2853063fe87" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.527593 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.633651 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-69dd969bf5-bw8cr"] Mar 20 08:44:47 crc kubenswrapper[5136]: E0320 08:44:47.634055 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e023c878-7ddf-478a-9069-85d32b1d5bf9" containerName="keystone-bootstrap" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.634077 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e023c878-7ddf-478a-9069-85d32b1d5bf9" containerName="keystone-bootstrap" Mar 20 08:44:47 crc kubenswrapper[5136]: E0320 08:44:47.634104 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80ba4d56-2bee-4ab9-9acd-c7588d675a4b" containerName="dnsmasq-dns" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.634112 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="80ba4d56-2bee-4ab9-9acd-c7588d675a4b" containerName="dnsmasq-dns" Mar 20 08:44:47 crc kubenswrapper[5136]: E0320 08:44:47.634131 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80ba4d56-2bee-4ab9-9acd-c7588d675a4b" containerName="init" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.634140 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="80ba4d56-2bee-4ab9-9acd-c7588d675a4b" containerName="init" Mar 20 08:44:47 crc 
kubenswrapper[5136]: I0320 08:44:47.634347 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e023c878-7ddf-478a-9069-85d32b1d5bf9" containerName="keystone-bootstrap" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.634377 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="80ba4d56-2bee-4ab9-9acd-c7588d675a4b" containerName="dnsmasq-dns" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.635086 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.640490 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.640514 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.640527 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.640656 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.640674 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.640960 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bcnhn" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.653403 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-69dd969bf5-bw8cr"] Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.723957 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-credential-keys\") 
pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.724022 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt2qd\" (UniqueName: \"kubernetes.io/projected/6492170d-c425-4bc1-8f26-b002ade2a30a-kube-api-access-jt2qd\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.724060 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-internal-tls-certs\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.724238 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-config-data\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.724272 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-combined-ca-bundle\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.724304 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-public-tls-certs\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.724344 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-fernet-keys\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.724377 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-scripts\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.825492 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-scripts\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.826551 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-credential-keys\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.827181 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt2qd\" (UniqueName: 
\"kubernetes.io/projected/6492170d-c425-4bc1-8f26-b002ade2a30a-kube-api-access-jt2qd\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.827310 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-internal-tls-certs\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.827582 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-config-data\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.827690 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-combined-ca-bundle\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.827939 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-public-tls-certs\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.828150 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-fernet-keys\") pod 
\"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.828516 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-scripts\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.838507 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-credential-keys\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.845538 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-config-data\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.848850 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-combined-ca-bundle\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.849341 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-public-tls-certs\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc 
kubenswrapper[5136]: I0320 08:44:47.853220 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-internal-tls-certs\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.865444 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-fernet-keys\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.872832 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt2qd\" (UniqueName: \"kubernetes.io/projected/6492170d-c425-4bc1-8f26-b002ade2a30a-kube-api-access-jt2qd\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.992023 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:48 crc kubenswrapper[5136]: I0320 08:44:48.410002 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-69dd969bf5-bw8cr"] Mar 20 08:44:48 crc kubenswrapper[5136]: W0320 08:44:48.421577 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6492170d_c425_4bc1_8f26_b002ade2a30a.slice/crio-8f7c1605d17f9d449c5a1d9a15decff286543015c063129a09c0f97fced38720 WatchSource:0}: Error finding container 8f7c1605d17f9d449c5a1d9a15decff286543015c063129a09c0f97fced38720: Status 404 returned error can't find the container with id 8f7c1605d17f9d449c5a1d9a15decff286543015c063129a09c0f97fced38720 Mar 20 08:44:48 crc kubenswrapper[5136]: I0320 08:44:48.540682 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-69dd969bf5-bw8cr" event={"ID":"6492170d-c425-4bc1-8f26-b002ade2a30a","Type":"ContainerStarted","Data":"8f7c1605d17f9d449c5a1d9a15decff286543015c063129a09c0f97fced38720"} Mar 20 08:44:49 crc kubenswrapper[5136]: I0320 08:44:49.054165 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2kvt9" Mar 20 08:44:49 crc kubenswrapper[5136]: I0320 08:44:49.099439 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2kvt9" Mar 20 08:44:49 crc kubenswrapper[5136]: I0320 08:44:49.297195 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2kvt9"] Mar 20 08:44:49 crc kubenswrapper[5136]: I0320 08:44:49.548161 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-69dd969bf5-bw8cr" event={"ID":"6492170d-c425-4bc1-8f26-b002ade2a30a","Type":"ContainerStarted","Data":"0f88ab7221b1cadfaad64dc8257c4ff6e40bf8acf20bf20f68729db269cd8c5b"} Mar 20 08:44:49 crc kubenswrapper[5136]: I0320 
08:44:49.566670 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-69dd969bf5-bw8cr" podStartSLOduration=2.5666499959999998 podStartE2EDuration="2.566649996s" podCreationTimestamp="2026-03-20 08:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:44:49.564027284 +0000 UTC m=+6921.823338445" watchObservedRunningTime="2026-03-20 08:44:49.566649996 +0000 UTC m=+6921.825961157" Mar 20 08:44:50 crc kubenswrapper[5136]: I0320 08:44:50.554148 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2kvt9" podUID="37fd264e-9020-4030-9f75-946d4f31cab0" containerName="registry-server" containerID="cri-o://1bad74a1748dcb96163246cfacb414b90dca204d11fe29b6414f5067f21c54b7" gracePeriod=2 Mar 20 08:44:50 crc kubenswrapper[5136]: I0320 08:44:50.557955 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.030525 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2kvt9" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.180788 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37fd264e-9020-4030-9f75-946d4f31cab0-catalog-content\") pod \"37fd264e-9020-4030-9f75-946d4f31cab0\" (UID: \"37fd264e-9020-4030-9f75-946d4f31cab0\") " Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.181113 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37fd264e-9020-4030-9f75-946d4f31cab0-utilities\") pod \"37fd264e-9020-4030-9f75-946d4f31cab0\" (UID: \"37fd264e-9020-4030-9f75-946d4f31cab0\") " Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.181248 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps95x\" (UniqueName: \"kubernetes.io/projected/37fd264e-9020-4030-9f75-946d4f31cab0-kube-api-access-ps95x\") pod \"37fd264e-9020-4030-9f75-946d4f31cab0\" (UID: \"37fd264e-9020-4030-9f75-946d4f31cab0\") " Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.181953 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37fd264e-9020-4030-9f75-946d4f31cab0-utilities" (OuterVolumeSpecName: "utilities") pod "37fd264e-9020-4030-9f75-946d4f31cab0" (UID: "37fd264e-9020-4030-9f75-946d4f31cab0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.186216 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37fd264e-9020-4030-9f75-946d4f31cab0-kube-api-access-ps95x" (OuterVolumeSpecName: "kube-api-access-ps95x") pod "37fd264e-9020-4030-9f75-946d4f31cab0" (UID: "37fd264e-9020-4030-9f75-946d4f31cab0"). InnerVolumeSpecName "kube-api-access-ps95x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.284007 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37fd264e-9020-4030-9f75-946d4f31cab0-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.284048 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps95x\" (UniqueName: \"kubernetes.io/projected/37fd264e-9020-4030-9f75-946d4f31cab0-kube-api-access-ps95x\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.309365 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37fd264e-9020-4030-9f75-946d4f31cab0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37fd264e-9020-4030-9f75-946d4f31cab0" (UID: "37fd264e-9020-4030-9f75-946d4f31cab0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.385323 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37fd264e-9020-4030-9f75-946d4f31cab0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.566068 5136 generic.go:334] "Generic (PLEG): container finished" podID="37fd264e-9020-4030-9f75-946d4f31cab0" containerID="1bad74a1748dcb96163246cfacb414b90dca204d11fe29b6414f5067f21c54b7" exitCode=0 Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.566114 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kvt9" event={"ID":"37fd264e-9020-4030-9f75-946d4f31cab0","Type":"ContainerDied","Data":"1bad74a1748dcb96163246cfacb414b90dca204d11fe29b6414f5067f21c54b7"} Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.566156 5136 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2kvt9" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.566175 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kvt9" event={"ID":"37fd264e-9020-4030-9f75-946d4f31cab0","Type":"ContainerDied","Data":"3fe0fe28b244286c6b840a87150082973c5021674f51e6c65a29f0d0dfa6ccf7"} Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.566199 5136 scope.go:117] "RemoveContainer" containerID="1bad74a1748dcb96163246cfacb414b90dca204d11fe29b6414f5067f21c54b7" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.584392 5136 scope.go:117] "RemoveContainer" containerID="02469795a64a79faedad55246771d8971c722d27d35c2a9e11e5225023dc40f4" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.617978 5136 scope.go:117] "RemoveContainer" containerID="d4b5aab2d242a757862e146aff41cc85a5d4d33a3dd085bf00cfa5846381a42d" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.631166 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2kvt9"] Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.639265 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2kvt9"] Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.673001 5136 scope.go:117] "RemoveContainer" containerID="1bad74a1748dcb96163246cfacb414b90dca204d11fe29b6414f5067f21c54b7" Mar 20 08:44:51 crc kubenswrapper[5136]: E0320 08:44:51.682971 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bad74a1748dcb96163246cfacb414b90dca204d11fe29b6414f5067f21c54b7\": container with ID starting with 1bad74a1748dcb96163246cfacb414b90dca204d11fe29b6414f5067f21c54b7 not found: ID does not exist" containerID="1bad74a1748dcb96163246cfacb414b90dca204d11fe29b6414f5067f21c54b7" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.683019 5136 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bad74a1748dcb96163246cfacb414b90dca204d11fe29b6414f5067f21c54b7"} err="failed to get container status \"1bad74a1748dcb96163246cfacb414b90dca204d11fe29b6414f5067f21c54b7\": rpc error: code = NotFound desc = could not find container \"1bad74a1748dcb96163246cfacb414b90dca204d11fe29b6414f5067f21c54b7\": container with ID starting with 1bad74a1748dcb96163246cfacb414b90dca204d11fe29b6414f5067f21c54b7 not found: ID does not exist" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.683044 5136 scope.go:117] "RemoveContainer" containerID="02469795a64a79faedad55246771d8971c722d27d35c2a9e11e5225023dc40f4" Mar 20 08:44:51 crc kubenswrapper[5136]: E0320 08:44:51.686912 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02469795a64a79faedad55246771d8971c722d27d35c2a9e11e5225023dc40f4\": container with ID starting with 02469795a64a79faedad55246771d8971c722d27d35c2a9e11e5225023dc40f4 not found: ID does not exist" containerID="02469795a64a79faedad55246771d8971c722d27d35c2a9e11e5225023dc40f4" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.686941 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02469795a64a79faedad55246771d8971c722d27d35c2a9e11e5225023dc40f4"} err="failed to get container status \"02469795a64a79faedad55246771d8971c722d27d35c2a9e11e5225023dc40f4\": rpc error: code = NotFound desc = could not find container \"02469795a64a79faedad55246771d8971c722d27d35c2a9e11e5225023dc40f4\": container with ID starting with 02469795a64a79faedad55246771d8971c722d27d35c2a9e11e5225023dc40f4 not found: ID does not exist" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.686961 5136 scope.go:117] "RemoveContainer" containerID="d4b5aab2d242a757862e146aff41cc85a5d4d33a3dd085bf00cfa5846381a42d" Mar 20 08:44:51 crc kubenswrapper[5136]: E0320 
08:44:51.689014 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4b5aab2d242a757862e146aff41cc85a5d4d33a3dd085bf00cfa5846381a42d\": container with ID starting with d4b5aab2d242a757862e146aff41cc85a5d4d33a3dd085bf00cfa5846381a42d not found: ID does not exist" containerID="d4b5aab2d242a757862e146aff41cc85a5d4d33a3dd085bf00cfa5846381a42d" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.689067 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4b5aab2d242a757862e146aff41cc85a5d4d33a3dd085bf00cfa5846381a42d"} err="failed to get container status \"d4b5aab2d242a757862e146aff41cc85a5d4d33a3dd085bf00cfa5846381a42d\": rpc error: code = NotFound desc = could not find container \"d4b5aab2d242a757862e146aff41cc85a5d4d33a3dd085bf00cfa5846381a42d\": container with ID starting with d4b5aab2d242a757862e146aff41cc85a5d4d33a3dd085bf00cfa5846381a42d not found: ID does not exist" Mar 20 08:44:52 crc kubenswrapper[5136]: I0320 08:44:52.411542 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37fd264e-9020-4030-9f75-946d4f31cab0" path="/var/lib/kubelet/pods/37fd264e-9020-4030-9f75-946d4f31cab0/volumes" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.134023 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g"] Mar 20 08:45:00 crc kubenswrapper[5136]: E0320 08:45:00.134631 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37fd264e-9020-4030-9f75-946d4f31cab0" containerName="registry-server" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.134645 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="37fd264e-9020-4030-9f75-946d4f31cab0" containerName="registry-server" Mar 20 08:45:00 crc kubenswrapper[5136]: E0320 08:45:00.134675 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="37fd264e-9020-4030-9f75-946d4f31cab0" containerName="extract-utilities" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.134682 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="37fd264e-9020-4030-9f75-946d4f31cab0" containerName="extract-utilities" Mar 20 08:45:00 crc kubenswrapper[5136]: E0320 08:45:00.134704 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37fd264e-9020-4030-9f75-946d4f31cab0" containerName="extract-content" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.134713 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="37fd264e-9020-4030-9f75-946d4f31cab0" containerName="extract-content" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.134940 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="37fd264e-9020-4030-9f75-946d4f31cab0" containerName="registry-server" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.135603 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.137687 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.141839 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.145319 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g"] Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.280037 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-config-volume\") pod \"collect-profiles-29566605-wpt4g\" 
(UID: \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.280253 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-secret-volume\") pod \"collect-profiles-29566605-wpt4g\" (UID: \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.280418 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9bpb\" (UniqueName: \"kubernetes.io/projected/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-kube-api-access-t9bpb\") pod \"collect-profiles-29566605-wpt4g\" (UID: \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.381693 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-secret-volume\") pod \"collect-profiles-29566605-wpt4g\" (UID: \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.381776 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9bpb\" (UniqueName: \"kubernetes.io/projected/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-kube-api-access-t9bpb\") pod \"collect-profiles-29566605-wpt4g\" (UID: \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.381911 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-config-volume\") pod \"collect-profiles-29566605-wpt4g\" (UID: \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.383019 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-config-volume\") pod \"collect-profiles-29566605-wpt4g\" (UID: \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.388619 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-secret-volume\") pod \"collect-profiles-29566605-wpt4g\" (UID: \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.397128 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:45:00 crc kubenswrapper[5136]: E0320 08:45:00.397670 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.402877 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9bpb\" (UniqueName: 
\"kubernetes.io/projected/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-kube-api-access-t9bpb\") pod \"collect-profiles-29566605-wpt4g\" (UID: \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.496343 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.998225 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g"] Mar 20 08:45:01 crc kubenswrapper[5136]: I0320 08:45:01.662370 5136 generic.go:334] "Generic (PLEG): container finished" podID="3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc" containerID="d06383ead678a648b0b20646d0d0c9fe0235389efbb6ca7e052c4c75cf3a52ff" exitCode=0 Mar 20 08:45:01 crc kubenswrapper[5136]: I0320 08:45:01.662563 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" event={"ID":"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc","Type":"ContainerDied","Data":"d06383ead678a648b0b20646d0d0c9fe0235389efbb6ca7e052c4c75cf3a52ff"} Mar 20 08:45:01 crc kubenswrapper[5136]: I0320 08:45:01.662780 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" event={"ID":"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc","Type":"ContainerStarted","Data":"56ef9583bd6d5b02088561c903571ef36752c229c798b0b23fe1cf15c6181eb2"} Mar 20 08:45:03 crc kubenswrapper[5136]: I0320 08:45:03.013642 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" Mar 20 08:45:03 crc kubenswrapper[5136]: I0320 08:45:03.125609 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9bpb\" (UniqueName: \"kubernetes.io/projected/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-kube-api-access-t9bpb\") pod \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\" (UID: \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\") " Mar 20 08:45:03 crc kubenswrapper[5136]: I0320 08:45:03.126155 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-config-volume\") pod \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\" (UID: \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\") " Mar 20 08:45:03 crc kubenswrapper[5136]: I0320 08:45:03.126352 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-secret-volume\") pod \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\" (UID: \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\") " Mar 20 08:45:03 crc kubenswrapper[5136]: I0320 08:45:03.126756 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-config-volume" (OuterVolumeSpecName: "config-volume") pod "3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc" (UID: "3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:45:03 crc kubenswrapper[5136]: I0320 08:45:03.130971 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc" (UID: "3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:45:03 crc kubenswrapper[5136]: I0320 08:45:03.131347 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-kube-api-access-t9bpb" (OuterVolumeSpecName: "kube-api-access-t9bpb") pod "3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc" (UID: "3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc"). InnerVolumeSpecName "kube-api-access-t9bpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:45:03 crc kubenswrapper[5136]: I0320 08:45:03.227970 5136 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 08:45:03 crc kubenswrapper[5136]: I0320 08:45:03.228008 5136 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 08:45:03 crc kubenswrapper[5136]: I0320 08:45:03.228020 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9bpb\" (UniqueName: \"kubernetes.io/projected/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-kube-api-access-t9bpb\") on node \"crc\" DevicePath \"\"" Mar 20 08:45:03 crc kubenswrapper[5136]: I0320 08:45:03.678558 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" event={"ID":"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc","Type":"ContainerDied","Data":"56ef9583bd6d5b02088561c903571ef36752c229c798b0b23fe1cf15c6181eb2"} Mar 20 08:45:03 crc kubenswrapper[5136]: I0320 08:45:03.678603 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56ef9583bd6d5b02088561c903571ef36752c229c798b0b23fe1cf15c6181eb2" Mar 20 08:45:03 crc kubenswrapper[5136]: I0320 08:45:03.678671 5136 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" Mar 20 08:45:04 crc kubenswrapper[5136]: I0320 08:45:04.083926 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl"] Mar 20 08:45:04 crc kubenswrapper[5136]: I0320 08:45:04.097796 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl"] Mar 20 08:45:04 crc kubenswrapper[5136]: I0320 08:45:04.416672 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90ad33e9-cb6b-450c-9703-8d6e379f3075" path="/var/lib/kubelet/pods/90ad33e9-cb6b-450c-9703-8d6e379f3075/volumes" Mar 20 08:45:14 crc kubenswrapper[5136]: I0320 08:45:14.396750 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:45:14 crc kubenswrapper[5136]: E0320 08:45:14.397522 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:45:19 crc kubenswrapper[5136]: I0320 08:45:19.555747 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.435055 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 20 08:45:21 crc kubenswrapper[5136]: E0320 08:45:21.435485 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc" containerName="collect-profiles" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 
08:45:21.435498 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc" containerName="collect-profiles" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.435664 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc" containerName="collect-profiles" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.436250 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.438928 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-g94cv" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.439215 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.439729 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.461325 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.564480 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f874f73-4453-44c8-b1d9-52559489bead-openstack-config\") pod \"openstackclient\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " pod="openstack/openstackclient" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.564522 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8ff7\" (UniqueName: \"kubernetes.io/projected/8f874f73-4453-44c8-b1d9-52559489bead-kube-api-access-r8ff7\") pod \"openstackclient\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " pod="openstack/openstackclient" Mar 20 08:45:21 crc 
kubenswrapper[5136]: I0320 08:45:21.564542 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f874f73-4453-44c8-b1d9-52559489bead-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " pod="openstack/openstackclient" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.564681 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f874f73-4453-44c8-b1d9-52559489bead-openstack-config-secret\") pod \"openstackclient\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " pod="openstack/openstackclient" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.667766 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f874f73-4453-44c8-b1d9-52559489bead-openstack-config\") pod \"openstackclient\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " pod="openstack/openstackclient" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.667829 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8ff7\" (UniqueName: \"kubernetes.io/projected/8f874f73-4453-44c8-b1d9-52559489bead-kube-api-access-r8ff7\") pod \"openstackclient\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " pod="openstack/openstackclient" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.667855 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f874f73-4453-44c8-b1d9-52559489bead-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " pod="openstack/openstackclient" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.667893 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f874f73-4453-44c8-b1d9-52559489bead-openstack-config-secret\") pod \"openstackclient\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " pod="openstack/openstackclient" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.669070 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f874f73-4453-44c8-b1d9-52559489bead-openstack-config\") pod \"openstackclient\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " pod="openstack/openstackclient" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.675249 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f874f73-4453-44c8-b1d9-52559489bead-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " pod="openstack/openstackclient" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.675441 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f874f73-4453-44c8-b1d9-52559489bead-openstack-config-secret\") pod \"openstackclient\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " pod="openstack/openstackclient" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.686693 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8ff7\" (UniqueName: \"kubernetes.io/projected/8f874f73-4453-44c8-b1d9-52559489bead-kube-api-access-r8ff7\") pod \"openstackclient\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " pod="openstack/openstackclient" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.761522 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 20 08:45:22 crc kubenswrapper[5136]: I0320 08:45:22.192982 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 20 08:45:22 crc kubenswrapper[5136]: I0320 08:45:22.838289 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8f874f73-4453-44c8-b1d9-52559489bead","Type":"ContainerStarted","Data":"f5c09e60aafc3bfb4497c7d4524c1440cbf4ea1f7cf2061ba6a49655e1671665"} Mar 20 08:45:26 crc kubenswrapper[5136]: I0320 08:45:26.397021 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:45:26 crc kubenswrapper[5136]: E0320 08:45:26.397618 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:45:33 crc kubenswrapper[5136]: I0320 08:45:33.932987 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8f874f73-4453-44c8-b1d9-52559489bead","Type":"ContainerStarted","Data":"9bbc0f5018299d8801809b126e8536554b83592717e71efd04c53bc88080264e"} Mar 20 08:45:33 crc kubenswrapper[5136]: I0320 08:45:33.959204 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.943973584 podStartE2EDuration="12.959186258s" podCreationTimestamp="2026-03-20 08:45:21 +0000 UTC" firstStartedPulling="2026-03-20 08:45:22.202676542 +0000 UTC m=+6954.461987693" lastFinishedPulling="2026-03-20 08:45:33.217889216 +0000 UTC m=+6965.477200367" observedRunningTime="2026-03-20 08:45:33.95025351 +0000 UTC 
m=+6966.209564661" watchObservedRunningTime="2026-03-20 08:45:33.959186258 +0000 UTC m=+6966.218497409" Mar 20 08:45:38 crc kubenswrapper[5136]: I0320 08:45:38.402621 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:45:38 crc kubenswrapper[5136]: E0320 08:45:38.403641 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:45:38 crc kubenswrapper[5136]: I0320 08:45:38.558624 5136 scope.go:117] "RemoveContainer" containerID="a9399ede282cd1d4b161abddeaa1193070be8003a67d2c8907749c2c5dadab78" Mar 20 08:45:53 crc kubenswrapper[5136]: I0320 08:45:53.397442 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:45:53 crc kubenswrapper[5136]: E0320 08:45:53.398258 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:46:00 crc kubenswrapper[5136]: I0320 08:46:00.151637 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566606-rdf48"] Mar 20 08:46:00 crc kubenswrapper[5136]: I0320 08:46:00.154369 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566606-rdf48" Mar 20 08:46:00 crc kubenswrapper[5136]: I0320 08:46:00.157070 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:46:00 crc kubenswrapper[5136]: I0320 08:46:00.157147 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:46:00 crc kubenswrapper[5136]: I0320 08:46:00.157234 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:46:00 crc kubenswrapper[5136]: I0320 08:46:00.164355 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566606-rdf48"] Mar 20 08:46:00 crc kubenswrapper[5136]: I0320 08:46:00.198043 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wgwk\" (UniqueName: \"kubernetes.io/projected/895f2400-9932-4967-831f-f047de8c0f63-kube-api-access-7wgwk\") pod \"auto-csr-approver-29566606-rdf48\" (UID: \"895f2400-9932-4967-831f-f047de8c0f63\") " pod="openshift-infra/auto-csr-approver-29566606-rdf48" Mar 20 08:46:00 crc kubenswrapper[5136]: I0320 08:46:00.300282 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wgwk\" (UniqueName: \"kubernetes.io/projected/895f2400-9932-4967-831f-f047de8c0f63-kube-api-access-7wgwk\") pod \"auto-csr-approver-29566606-rdf48\" (UID: \"895f2400-9932-4967-831f-f047de8c0f63\") " pod="openshift-infra/auto-csr-approver-29566606-rdf48" Mar 20 08:46:00 crc kubenswrapper[5136]: I0320 08:46:00.318737 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wgwk\" (UniqueName: \"kubernetes.io/projected/895f2400-9932-4967-831f-f047de8c0f63-kube-api-access-7wgwk\") pod \"auto-csr-approver-29566606-rdf48\" (UID: \"895f2400-9932-4967-831f-f047de8c0f63\") " 
pod="openshift-infra/auto-csr-approver-29566606-rdf48" Mar 20 08:46:00 crc kubenswrapper[5136]: I0320 08:46:00.481807 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566606-rdf48" Mar 20 08:46:00 crc kubenswrapper[5136]: I0320 08:46:00.938661 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566606-rdf48"] Mar 20 08:46:01 crc kubenswrapper[5136]: I0320 08:46:01.195840 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566606-rdf48" event={"ID":"895f2400-9932-4967-831f-f047de8c0f63","Type":"ContainerStarted","Data":"25ccbedf7f52d511dbfa63676fd266d9b896c3045f6514a33e16f8bec1197edd"} Mar 20 08:46:03 crc kubenswrapper[5136]: I0320 08:46:03.212923 5136 generic.go:334] "Generic (PLEG): container finished" podID="895f2400-9932-4967-831f-f047de8c0f63" containerID="ee7fc0aa7d70c450967fddf706c56fe4af54a2ede94af9ae1aa1f75f2c772efc" exitCode=0 Mar 20 08:46:03 crc kubenswrapper[5136]: I0320 08:46:03.213020 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566606-rdf48" event={"ID":"895f2400-9932-4967-831f-f047de8c0f63","Type":"ContainerDied","Data":"ee7fc0aa7d70c450967fddf706c56fe4af54a2ede94af9ae1aa1f75f2c772efc"} Mar 20 08:46:04 crc kubenswrapper[5136]: I0320 08:46:04.530474 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566606-rdf48" Mar 20 08:46:04 crc kubenswrapper[5136]: I0320 08:46:04.675785 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wgwk\" (UniqueName: \"kubernetes.io/projected/895f2400-9932-4967-831f-f047de8c0f63-kube-api-access-7wgwk\") pod \"895f2400-9932-4967-831f-f047de8c0f63\" (UID: \"895f2400-9932-4967-831f-f047de8c0f63\") " Mar 20 08:46:04 crc kubenswrapper[5136]: I0320 08:46:04.686831 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/895f2400-9932-4967-831f-f047de8c0f63-kube-api-access-7wgwk" (OuterVolumeSpecName: "kube-api-access-7wgwk") pod "895f2400-9932-4967-831f-f047de8c0f63" (UID: "895f2400-9932-4967-831f-f047de8c0f63"). InnerVolumeSpecName "kube-api-access-7wgwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:46:04 crc kubenswrapper[5136]: I0320 08:46:04.777654 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wgwk\" (UniqueName: \"kubernetes.io/projected/895f2400-9932-4967-831f-f047de8c0f63-kube-api-access-7wgwk\") on node \"crc\" DevicePath \"\"" Mar 20 08:46:05 crc kubenswrapper[5136]: I0320 08:46:05.226910 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566606-rdf48" event={"ID":"895f2400-9932-4967-831f-f047de8c0f63","Type":"ContainerDied","Data":"25ccbedf7f52d511dbfa63676fd266d9b896c3045f6514a33e16f8bec1197edd"} Mar 20 08:46:05 crc kubenswrapper[5136]: I0320 08:46:05.226947 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25ccbedf7f52d511dbfa63676fd266d9b896c3045f6514a33e16f8bec1197edd" Mar 20 08:46:05 crc kubenswrapper[5136]: I0320 08:46:05.226990 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566606-rdf48" Mar 20 08:46:05 crc kubenswrapper[5136]: I0320 08:46:05.606533 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566600-2m6nn"] Mar 20 08:46:05 crc kubenswrapper[5136]: I0320 08:46:05.613413 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566600-2m6nn"] Mar 20 08:46:06 crc kubenswrapper[5136]: I0320 08:46:06.398086 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:46:06 crc kubenswrapper[5136]: E0320 08:46:06.399364 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:46:06 crc kubenswrapper[5136]: I0320 08:46:06.408585 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3480cf66-9f91-4ce8-924c-0f730044c0de" path="/var/lib/kubelet/pods/3480cf66-9f91-4ce8-924c-0f730044c0de/volumes" Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.379879 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nkp9h"] Mar 20 08:46:07 crc kubenswrapper[5136]: E0320 08:46:07.380293 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="895f2400-9932-4967-831f-f047de8c0f63" containerName="oc" Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.380309 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="895f2400-9932-4967-831f-f047de8c0f63" containerName="oc" Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.380553 5136 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="895f2400-9932-4967-831f-f047de8c0f63" containerName="oc" Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.381988 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkp9h" Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.385546 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkp9h"] Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.530724 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwwnd\" (UniqueName: \"kubernetes.io/projected/53d1a23a-d4a8-45ec-969b-627514c8be8f-kube-api-access-vwwnd\") pod \"redhat-marketplace-nkp9h\" (UID: \"53d1a23a-d4a8-45ec-969b-627514c8be8f\") " pod="openshift-marketplace/redhat-marketplace-nkp9h" Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.530800 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53d1a23a-d4a8-45ec-969b-627514c8be8f-catalog-content\") pod \"redhat-marketplace-nkp9h\" (UID: \"53d1a23a-d4a8-45ec-969b-627514c8be8f\") " pod="openshift-marketplace/redhat-marketplace-nkp9h" Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.531020 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53d1a23a-d4a8-45ec-969b-627514c8be8f-utilities\") pod \"redhat-marketplace-nkp9h\" (UID: \"53d1a23a-d4a8-45ec-969b-627514c8be8f\") " pod="openshift-marketplace/redhat-marketplace-nkp9h" Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.632252 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53d1a23a-d4a8-45ec-969b-627514c8be8f-catalog-content\") pod \"redhat-marketplace-nkp9h\" (UID: 
\"53d1a23a-d4a8-45ec-969b-627514c8be8f\") " pod="openshift-marketplace/redhat-marketplace-nkp9h" Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.632314 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53d1a23a-d4a8-45ec-969b-627514c8be8f-utilities\") pod \"redhat-marketplace-nkp9h\" (UID: \"53d1a23a-d4a8-45ec-969b-627514c8be8f\") " pod="openshift-marketplace/redhat-marketplace-nkp9h" Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.632427 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwwnd\" (UniqueName: \"kubernetes.io/projected/53d1a23a-d4a8-45ec-969b-627514c8be8f-kube-api-access-vwwnd\") pod \"redhat-marketplace-nkp9h\" (UID: \"53d1a23a-d4a8-45ec-969b-627514c8be8f\") " pod="openshift-marketplace/redhat-marketplace-nkp9h" Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.632958 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53d1a23a-d4a8-45ec-969b-627514c8be8f-catalog-content\") pod \"redhat-marketplace-nkp9h\" (UID: \"53d1a23a-d4a8-45ec-969b-627514c8be8f\") " pod="openshift-marketplace/redhat-marketplace-nkp9h" Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.633114 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53d1a23a-d4a8-45ec-969b-627514c8be8f-utilities\") pod \"redhat-marketplace-nkp9h\" (UID: \"53d1a23a-d4a8-45ec-969b-627514c8be8f\") " pod="openshift-marketplace/redhat-marketplace-nkp9h" Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.656331 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwwnd\" (UniqueName: \"kubernetes.io/projected/53d1a23a-d4a8-45ec-969b-627514c8be8f-kube-api-access-vwwnd\") pod \"redhat-marketplace-nkp9h\" (UID: \"53d1a23a-d4a8-45ec-969b-627514c8be8f\") " 
pod="openshift-marketplace/redhat-marketplace-nkp9h" Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.702847 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkp9h" Mar 20 08:46:08 crc kubenswrapper[5136]: I0320 08:46:08.164880 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkp9h"] Mar 20 08:46:08 crc kubenswrapper[5136]: I0320 08:46:08.253706 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkp9h" event={"ID":"53d1a23a-d4a8-45ec-969b-627514c8be8f","Type":"ContainerStarted","Data":"439cdaa884cb19e124eb2cc76843dcdec0c452cad325408ad2001b7577772cf2"} Mar 20 08:46:09 crc kubenswrapper[5136]: I0320 08:46:09.263208 5136 generic.go:334] "Generic (PLEG): container finished" podID="53d1a23a-d4a8-45ec-969b-627514c8be8f" containerID="c976afd718ab067c192cc57025ff84e5c588dd836c15055f7af944ac48b81c88" exitCode=0 Mar 20 08:46:09 crc kubenswrapper[5136]: I0320 08:46:09.263327 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkp9h" event={"ID":"53d1a23a-d4a8-45ec-969b-627514c8be8f","Type":"ContainerDied","Data":"c976afd718ab067c192cc57025ff84e5c588dd836c15055f7af944ac48b81c88"} Mar 20 08:46:10 crc kubenswrapper[5136]: I0320 08:46:10.273357 5136 generic.go:334] "Generic (PLEG): container finished" podID="53d1a23a-d4a8-45ec-969b-627514c8be8f" containerID="e3a0ad9bfbd150e683c1f7ae42e2f702dd0a9e0ce6998bf95ef9811a20000d82" exitCode=0 Mar 20 08:46:10 crc kubenswrapper[5136]: I0320 08:46:10.273471 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkp9h" event={"ID":"53d1a23a-d4a8-45ec-969b-627514c8be8f","Type":"ContainerDied","Data":"e3a0ad9bfbd150e683c1f7ae42e2f702dd0a9e0ce6998bf95ef9811a20000d82"} Mar 20 08:46:11 crc kubenswrapper[5136]: I0320 08:46:11.283671 5136 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-nkp9h" event={"ID":"53d1a23a-d4a8-45ec-969b-627514c8be8f","Type":"ContainerStarted","Data":"b7f8bf19888cff630fee1f8b202dbafde67c867d254508da77659ad55cf1fd50"} Mar 20 08:46:11 crc kubenswrapper[5136]: I0320 08:46:11.305848 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nkp9h" podStartSLOduration=2.87280569 podStartE2EDuration="4.305827174s" podCreationTimestamp="2026-03-20 08:46:07 +0000 UTC" firstStartedPulling="2026-03-20 08:46:09.26495414 +0000 UTC m=+7001.524265291" lastFinishedPulling="2026-03-20 08:46:10.697975624 +0000 UTC m=+7002.957286775" observedRunningTime="2026-03-20 08:46:11.30440473 +0000 UTC m=+7003.563715901" watchObservedRunningTime="2026-03-20 08:46:11.305827174 +0000 UTC m=+7003.565138325" Mar 20 08:46:17 crc kubenswrapper[5136]: I0320 08:46:17.703767 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nkp9h" Mar 20 08:46:17 crc kubenswrapper[5136]: I0320 08:46:17.704334 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nkp9h" Mar 20 08:46:17 crc kubenswrapper[5136]: I0320 08:46:17.747733 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nkp9h" Mar 20 08:46:18 crc kubenswrapper[5136]: I0320 08:46:18.381809 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nkp9h" Mar 20 08:46:18 crc kubenswrapper[5136]: I0320 08:46:18.426190 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkp9h"] Mar 20 08:46:19 crc kubenswrapper[5136]: I0320 08:46:19.396868 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:46:19 crc kubenswrapper[5136]: 
E0320 08:46:19.397454 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:46:20 crc kubenswrapper[5136]: I0320 08:46:20.349692 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nkp9h" podUID="53d1a23a-d4a8-45ec-969b-627514c8be8f" containerName="registry-server" containerID="cri-o://b7f8bf19888cff630fee1f8b202dbafde67c867d254508da77659ad55cf1fd50" gracePeriod=2 Mar 20 08:46:20 crc kubenswrapper[5136]: I0320 08:46:20.826510 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkp9h" Mar 20 08:46:20 crc kubenswrapper[5136]: I0320 08:46:20.978600 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwwnd\" (UniqueName: \"kubernetes.io/projected/53d1a23a-d4a8-45ec-969b-627514c8be8f-kube-api-access-vwwnd\") pod \"53d1a23a-d4a8-45ec-969b-627514c8be8f\" (UID: \"53d1a23a-d4a8-45ec-969b-627514c8be8f\") " Mar 20 08:46:20 crc kubenswrapper[5136]: I0320 08:46:20.978784 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53d1a23a-d4a8-45ec-969b-627514c8be8f-utilities\") pod \"53d1a23a-d4a8-45ec-969b-627514c8be8f\" (UID: \"53d1a23a-d4a8-45ec-969b-627514c8be8f\") " Mar 20 08:46:20 crc kubenswrapper[5136]: I0320 08:46:20.978877 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53d1a23a-d4a8-45ec-969b-627514c8be8f-catalog-content\") pod 
\"53d1a23a-d4a8-45ec-969b-627514c8be8f\" (UID: \"53d1a23a-d4a8-45ec-969b-627514c8be8f\") " Mar 20 08:46:20 crc kubenswrapper[5136]: I0320 08:46:20.979963 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53d1a23a-d4a8-45ec-969b-627514c8be8f-utilities" (OuterVolumeSpecName: "utilities") pod "53d1a23a-d4a8-45ec-969b-627514c8be8f" (UID: "53d1a23a-d4a8-45ec-969b-627514c8be8f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:46:20 crc kubenswrapper[5136]: I0320 08:46:20.984916 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53d1a23a-d4a8-45ec-969b-627514c8be8f-kube-api-access-vwwnd" (OuterVolumeSpecName: "kube-api-access-vwwnd") pod "53d1a23a-d4a8-45ec-969b-627514c8be8f" (UID: "53d1a23a-d4a8-45ec-969b-627514c8be8f"). InnerVolumeSpecName "kube-api-access-vwwnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.007462 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53d1a23a-d4a8-45ec-969b-627514c8be8f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53d1a23a-d4a8-45ec-969b-627514c8be8f" (UID: "53d1a23a-d4a8-45ec-969b-627514c8be8f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.081183 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53d1a23a-d4a8-45ec-969b-627514c8be8f-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.081216 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53d1a23a-d4a8-45ec-969b-627514c8be8f-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.081228 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwwnd\" (UniqueName: \"kubernetes.io/projected/53d1a23a-d4a8-45ec-969b-627514c8be8f-kube-api-access-vwwnd\") on node \"crc\" DevicePath \"\""
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.357924 5136 generic.go:334] "Generic (PLEG): container finished" podID="53d1a23a-d4a8-45ec-969b-627514c8be8f" containerID="b7f8bf19888cff630fee1f8b202dbafde67c867d254508da77659ad55cf1fd50" exitCode=0
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.357974 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkp9h" event={"ID":"53d1a23a-d4a8-45ec-969b-627514c8be8f","Type":"ContainerDied","Data":"b7f8bf19888cff630fee1f8b202dbafde67c867d254508da77659ad55cf1fd50"}
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.358003 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkp9h" event={"ID":"53d1a23a-d4a8-45ec-969b-627514c8be8f","Type":"ContainerDied","Data":"439cdaa884cb19e124eb2cc76843dcdec0c452cad325408ad2001b7577772cf2"}
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.358048 5136 scope.go:117] "RemoveContainer" containerID="b7f8bf19888cff630fee1f8b202dbafde67c867d254508da77659ad55cf1fd50"
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.358196 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkp9h"
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.377201 5136 scope.go:117] "RemoveContainer" containerID="e3a0ad9bfbd150e683c1f7ae42e2f702dd0a9e0ce6998bf95ef9811a20000d82"
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.397846 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkp9h"]
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.406983 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkp9h"]
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.421213 5136 scope.go:117] "RemoveContainer" containerID="c976afd718ab067c192cc57025ff84e5c588dd836c15055f7af944ac48b81c88"
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.442329 5136 scope.go:117] "RemoveContainer" containerID="b7f8bf19888cff630fee1f8b202dbafde67c867d254508da77659ad55cf1fd50"
Mar 20 08:46:21 crc kubenswrapper[5136]: E0320 08:46:21.442775 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7f8bf19888cff630fee1f8b202dbafde67c867d254508da77659ad55cf1fd50\": container with ID starting with b7f8bf19888cff630fee1f8b202dbafde67c867d254508da77659ad55cf1fd50 not found: ID does not exist" containerID="b7f8bf19888cff630fee1f8b202dbafde67c867d254508da77659ad55cf1fd50"
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.442830 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7f8bf19888cff630fee1f8b202dbafde67c867d254508da77659ad55cf1fd50"} err="failed to get container status \"b7f8bf19888cff630fee1f8b202dbafde67c867d254508da77659ad55cf1fd50\": rpc error: code = NotFound desc = could not find container \"b7f8bf19888cff630fee1f8b202dbafde67c867d254508da77659ad55cf1fd50\": container with ID starting with b7f8bf19888cff630fee1f8b202dbafde67c867d254508da77659ad55cf1fd50 not found: ID does not exist"
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.442857 5136 scope.go:117] "RemoveContainer" containerID="e3a0ad9bfbd150e683c1f7ae42e2f702dd0a9e0ce6998bf95ef9811a20000d82"
Mar 20 08:46:21 crc kubenswrapper[5136]: E0320 08:46:21.443199 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3a0ad9bfbd150e683c1f7ae42e2f702dd0a9e0ce6998bf95ef9811a20000d82\": container with ID starting with e3a0ad9bfbd150e683c1f7ae42e2f702dd0a9e0ce6998bf95ef9811a20000d82 not found: ID does not exist" containerID="e3a0ad9bfbd150e683c1f7ae42e2f702dd0a9e0ce6998bf95ef9811a20000d82"
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.443233 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3a0ad9bfbd150e683c1f7ae42e2f702dd0a9e0ce6998bf95ef9811a20000d82"} err="failed to get container status \"e3a0ad9bfbd150e683c1f7ae42e2f702dd0a9e0ce6998bf95ef9811a20000d82\": rpc error: code = NotFound desc = could not find container \"e3a0ad9bfbd150e683c1f7ae42e2f702dd0a9e0ce6998bf95ef9811a20000d82\": container with ID starting with e3a0ad9bfbd150e683c1f7ae42e2f702dd0a9e0ce6998bf95ef9811a20000d82 not found: ID does not exist"
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.443247 5136 scope.go:117] "RemoveContainer" containerID="c976afd718ab067c192cc57025ff84e5c588dd836c15055f7af944ac48b81c88"
Mar 20 08:46:21 crc kubenswrapper[5136]: E0320 08:46:21.443503 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c976afd718ab067c192cc57025ff84e5c588dd836c15055f7af944ac48b81c88\": container with ID starting with c976afd718ab067c192cc57025ff84e5c588dd836c15055f7af944ac48b81c88 not found: ID does not exist" containerID="c976afd718ab067c192cc57025ff84e5c588dd836c15055f7af944ac48b81c88"
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.443525 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c976afd718ab067c192cc57025ff84e5c588dd836c15055f7af944ac48b81c88"} err="failed to get container status \"c976afd718ab067c192cc57025ff84e5c588dd836c15055f7af944ac48b81c88\": rpc error: code = NotFound desc = could not find container \"c976afd718ab067c192cc57025ff84e5c588dd836c15055f7af944ac48b81c88\": container with ID starting with c976afd718ab067c192cc57025ff84e5c588dd836c15055f7af944ac48b81c88 not found: ID does not exist"
Mar 20 08:46:22 crc kubenswrapper[5136]: I0320 08:46:22.405863 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53d1a23a-d4a8-45ec-969b-627514c8be8f" path="/var/lib/kubelet/pods/53d1a23a-d4a8-45ec-969b-627514c8be8f/volumes"
Mar 20 08:46:33 crc kubenswrapper[5136]: I0320 08:46:33.398252 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a"
Mar 20 08:46:33 crc kubenswrapper[5136]: E0320 08:46:33.399558 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:46:38 crc kubenswrapper[5136]: I0320 08:46:38.628217 5136 scope.go:117] "RemoveContainer" containerID="ada9ce7b7b306f2b5dbbf312318f5ac5adc2a593ce372df15119878b742a8edb"
Mar 20 08:46:47 crc kubenswrapper[5136]: I0320 08:46:47.396991 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a"
Mar 20 08:46:47 crc kubenswrapper[5136]: E0320 08:46:47.397805 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.340302 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-gqtht"]
Mar 20 08:46:58 crc kubenswrapper[5136]: E0320 08:46:58.341111 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d1a23a-d4a8-45ec-969b-627514c8be8f" containerName="extract-content"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.341124 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d1a23a-d4a8-45ec-969b-627514c8be8f" containerName="extract-content"
Mar 20 08:46:58 crc kubenswrapper[5136]: E0320 08:46:58.341145 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d1a23a-d4a8-45ec-969b-627514c8be8f" containerName="extract-utilities"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.341151 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d1a23a-d4a8-45ec-969b-627514c8be8f" containerName="extract-utilities"
Mar 20 08:46:58 crc kubenswrapper[5136]: E0320 08:46:58.341163 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d1a23a-d4a8-45ec-969b-627514c8be8f" containerName="registry-server"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.341169 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d1a23a-d4a8-45ec-969b-627514c8be8f" containerName="registry-server"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.341308 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="53d1a23a-d4a8-45ec-969b-627514c8be8f" containerName="registry-server"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.341877 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-gqtht"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.351110 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-gqtht"]
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.385918 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4aab638-4f7d-46a0-bc82-10fe569b56db-operator-scripts\") pod \"neutron-db-create-gqtht\" (UID: \"a4aab638-4f7d-46a0-bc82-10fe569b56db\") " pod="openstack/neutron-db-create-gqtht"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.386030 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk79r\" (UniqueName: \"kubernetes.io/projected/a4aab638-4f7d-46a0-bc82-10fe569b56db-kube-api-access-lk79r\") pod \"neutron-db-create-gqtht\" (UID: \"a4aab638-4f7d-46a0-bc82-10fe569b56db\") " pod="openstack/neutron-db-create-gqtht"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.447448 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-24c6-account-create-update-625nw"]
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.448906 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-24c6-account-create-update-625nw"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.451489 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.459015 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-24c6-account-create-update-625nw"]
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.487091 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4aab638-4f7d-46a0-bc82-10fe569b56db-operator-scripts\") pod \"neutron-db-create-gqtht\" (UID: \"a4aab638-4f7d-46a0-bc82-10fe569b56db\") " pod="openstack/neutron-db-create-gqtht"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.487151 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06c8bf45-d717-45f4-9679-7f6b69835f8a-operator-scripts\") pod \"neutron-24c6-account-create-update-625nw\" (UID: \"06c8bf45-d717-45f4-9679-7f6b69835f8a\") " pod="openstack/neutron-24c6-account-create-update-625nw"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.487180 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj2lc\" (UniqueName: \"kubernetes.io/projected/06c8bf45-d717-45f4-9679-7f6b69835f8a-kube-api-access-lj2lc\") pod \"neutron-24c6-account-create-update-625nw\" (UID: \"06c8bf45-d717-45f4-9679-7f6b69835f8a\") " pod="openstack/neutron-24c6-account-create-update-625nw"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.487285 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk79r\" (UniqueName: \"kubernetes.io/projected/a4aab638-4f7d-46a0-bc82-10fe569b56db-kube-api-access-lk79r\") pod \"neutron-db-create-gqtht\" (UID: \"a4aab638-4f7d-46a0-bc82-10fe569b56db\") " pod="openstack/neutron-db-create-gqtht"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.488002 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4aab638-4f7d-46a0-bc82-10fe569b56db-operator-scripts\") pod \"neutron-db-create-gqtht\" (UID: \"a4aab638-4f7d-46a0-bc82-10fe569b56db\") " pod="openstack/neutron-db-create-gqtht"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.505604 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk79r\" (UniqueName: \"kubernetes.io/projected/a4aab638-4f7d-46a0-bc82-10fe569b56db-kube-api-access-lk79r\") pod \"neutron-db-create-gqtht\" (UID: \"a4aab638-4f7d-46a0-bc82-10fe569b56db\") " pod="openstack/neutron-db-create-gqtht"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.588999 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06c8bf45-d717-45f4-9679-7f6b69835f8a-operator-scripts\") pod \"neutron-24c6-account-create-update-625nw\" (UID: \"06c8bf45-d717-45f4-9679-7f6b69835f8a\") " pod="openstack/neutron-24c6-account-create-update-625nw"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.589076 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj2lc\" (UniqueName: \"kubernetes.io/projected/06c8bf45-d717-45f4-9679-7f6b69835f8a-kube-api-access-lj2lc\") pod \"neutron-24c6-account-create-update-625nw\" (UID: \"06c8bf45-d717-45f4-9679-7f6b69835f8a\") " pod="openstack/neutron-24c6-account-create-update-625nw"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.590044 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06c8bf45-d717-45f4-9679-7f6b69835f8a-operator-scripts\") pod \"neutron-24c6-account-create-update-625nw\" (UID: \"06c8bf45-d717-45f4-9679-7f6b69835f8a\") " pod="openstack/neutron-24c6-account-create-update-625nw"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.605440 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj2lc\" (UniqueName: \"kubernetes.io/projected/06c8bf45-d717-45f4-9679-7f6b69835f8a-kube-api-access-lj2lc\") pod \"neutron-24c6-account-create-update-625nw\" (UID: \"06c8bf45-d717-45f4-9679-7f6b69835f8a\") " pod="openstack/neutron-24c6-account-create-update-625nw"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.673622 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-gqtht"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.768392 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-24c6-account-create-update-625nw"
Mar 20 08:46:59 crc kubenswrapper[5136]: I0320 08:46:59.087563 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-gqtht"]
Mar 20 08:46:59 crc kubenswrapper[5136]: I0320 08:46:59.212522 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-24c6-account-create-update-625nw"]
Mar 20 08:46:59 crc kubenswrapper[5136]: W0320 08:46:59.215699 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06c8bf45_d717_45f4_9679_7f6b69835f8a.slice/crio-171ea8235c48d1c686bc8225ddfa1307d597d34ec63ce078d04f4c3df707badd WatchSource:0}: Error finding container 171ea8235c48d1c686bc8225ddfa1307d597d34ec63ce078d04f4c3df707badd: Status 404 returned error can't find the container with id 171ea8235c48d1c686bc8225ddfa1307d597d34ec63ce078d04f4c3df707badd
Mar 20 08:46:59 crc kubenswrapper[5136]: I0320 08:46:59.396172 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a"
Mar 20 08:46:59 crc kubenswrapper[5136]: E0320 08:46:59.396623 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:46:59 crc kubenswrapper[5136]: I0320 08:46:59.653295 5136 generic.go:334] "Generic (PLEG): container finished" podID="a4aab638-4f7d-46a0-bc82-10fe569b56db" containerID="87d4064c210f2c8ecf2546f67dd8fe9ef436d4f291209d0fa6a7f5ba97b6e5e4" exitCode=0
Mar 20 08:46:59 crc kubenswrapper[5136]: I0320 08:46:59.653347 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gqtht" event={"ID":"a4aab638-4f7d-46a0-bc82-10fe569b56db","Type":"ContainerDied","Data":"87d4064c210f2c8ecf2546f67dd8fe9ef436d4f291209d0fa6a7f5ba97b6e5e4"}
Mar 20 08:46:59 crc kubenswrapper[5136]: I0320 08:46:59.653396 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gqtht" event={"ID":"a4aab638-4f7d-46a0-bc82-10fe569b56db","Type":"ContainerStarted","Data":"ec07ab9166ac551f71d621d181e619f44a7113103b0c42e99304c37347fd9055"}
Mar 20 08:46:59 crc kubenswrapper[5136]: I0320 08:46:59.654978 5136 generic.go:334] "Generic (PLEG): container finished" podID="06c8bf45-d717-45f4-9679-7f6b69835f8a" containerID="4b1f554c7f496a2460aeebf430477f38851975d2eafc2fb7735f082f6ef9d928" exitCode=0
Mar 20 08:46:59 crc kubenswrapper[5136]: I0320 08:46:59.655022 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-24c6-account-create-update-625nw" event={"ID":"06c8bf45-d717-45f4-9679-7f6b69835f8a","Type":"ContainerDied","Data":"4b1f554c7f496a2460aeebf430477f38851975d2eafc2fb7735f082f6ef9d928"}
Mar 20 08:46:59 crc kubenswrapper[5136]: I0320 08:46:59.655061 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-24c6-account-create-update-625nw" event={"ID":"06c8bf45-d717-45f4-9679-7f6b69835f8a","Type":"ContainerStarted","Data":"171ea8235c48d1c686bc8225ddfa1307d597d34ec63ce078d04f4c3df707badd"}
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.118645 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-24c6-account-create-update-625nw"
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.126239 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-gqtht"
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.235746 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06c8bf45-d717-45f4-9679-7f6b69835f8a-operator-scripts\") pod \"06c8bf45-d717-45f4-9679-7f6b69835f8a\" (UID: \"06c8bf45-d717-45f4-9679-7f6b69835f8a\") "
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.235843 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4aab638-4f7d-46a0-bc82-10fe569b56db-operator-scripts\") pod \"a4aab638-4f7d-46a0-bc82-10fe569b56db\" (UID: \"a4aab638-4f7d-46a0-bc82-10fe569b56db\") "
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.235876 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj2lc\" (UniqueName: \"kubernetes.io/projected/06c8bf45-d717-45f4-9679-7f6b69835f8a-kube-api-access-lj2lc\") pod \"06c8bf45-d717-45f4-9679-7f6b69835f8a\" (UID: \"06c8bf45-d717-45f4-9679-7f6b69835f8a\") "
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.236433 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4aab638-4f7d-46a0-bc82-10fe569b56db-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a4aab638-4f7d-46a0-bc82-10fe569b56db" (UID: "a4aab638-4f7d-46a0-bc82-10fe569b56db"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.236479 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06c8bf45-d717-45f4-9679-7f6b69835f8a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06c8bf45-d717-45f4-9679-7f6b69835f8a" (UID: "06c8bf45-d717-45f4-9679-7f6b69835f8a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.236768 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk79r\" (UniqueName: \"kubernetes.io/projected/a4aab638-4f7d-46a0-bc82-10fe569b56db-kube-api-access-lk79r\") pod \"a4aab638-4f7d-46a0-bc82-10fe569b56db\" (UID: \"a4aab638-4f7d-46a0-bc82-10fe569b56db\") "
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.237381 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06c8bf45-d717-45f4-9679-7f6b69835f8a-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.237416 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4aab638-4f7d-46a0-bc82-10fe569b56db-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.241303 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06c8bf45-d717-45f4-9679-7f6b69835f8a-kube-api-access-lj2lc" (OuterVolumeSpecName: "kube-api-access-lj2lc") pod "06c8bf45-d717-45f4-9679-7f6b69835f8a" (UID: "06c8bf45-d717-45f4-9679-7f6b69835f8a"). InnerVolumeSpecName "kube-api-access-lj2lc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.241494 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4aab638-4f7d-46a0-bc82-10fe569b56db-kube-api-access-lk79r" (OuterVolumeSpecName: "kube-api-access-lk79r") pod "a4aab638-4f7d-46a0-bc82-10fe569b56db" (UID: "a4aab638-4f7d-46a0-bc82-10fe569b56db"). InnerVolumeSpecName "kube-api-access-lk79r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.339278 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj2lc\" (UniqueName: \"kubernetes.io/projected/06c8bf45-d717-45f4-9679-7f6b69835f8a-kube-api-access-lj2lc\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.339634 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk79r\" (UniqueName: \"kubernetes.io/projected/a4aab638-4f7d-46a0-bc82-10fe569b56db-kube-api-access-lk79r\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.678787 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gqtht" event={"ID":"a4aab638-4f7d-46a0-bc82-10fe569b56db","Type":"ContainerDied","Data":"ec07ab9166ac551f71d621d181e619f44a7113103b0c42e99304c37347fd9055"}
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.679330 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec07ab9166ac551f71d621d181e619f44a7113103b0c42e99304c37347fd9055"
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.678867 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-gqtht"
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.682034 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-24c6-account-create-update-625nw" event={"ID":"06c8bf45-d717-45f4-9679-7f6b69835f8a","Type":"ContainerDied","Data":"171ea8235c48d1c686bc8225ddfa1307d597d34ec63ce078d04f4c3df707badd"}
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.682079 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="171ea8235c48d1c686bc8225ddfa1307d597d34ec63ce078d04f4c3df707badd"
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.682143 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-24c6-account-create-update-625nw"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.751857 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-grfwk"]
Mar 20 08:47:03 crc kubenswrapper[5136]: E0320 08:47:03.752539 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06c8bf45-d717-45f4-9679-7f6b69835f8a" containerName="mariadb-account-create-update"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.752555 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c8bf45-d717-45f4-9679-7f6b69835f8a" containerName="mariadb-account-create-update"
Mar 20 08:47:03 crc kubenswrapper[5136]: E0320 08:47:03.752577 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4aab638-4f7d-46a0-bc82-10fe569b56db" containerName="mariadb-database-create"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.752587 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4aab638-4f7d-46a0-bc82-10fe569b56db" containerName="mariadb-database-create"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.752855 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4aab638-4f7d-46a0-bc82-10fe569b56db" containerName="mariadb-database-create"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.752882 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="06c8bf45-d717-45f4-9679-7f6b69835f8a" containerName="mariadb-account-create-update"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.753547 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-grfwk"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.756291 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-v6fw7"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.756541 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.756863 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.762352 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-grfwk"]
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.891381 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c6db9e6-4059-4911-b008-680848fffdbe-combined-ca-bundle\") pod \"neutron-db-sync-grfwk\" (UID: \"4c6db9e6-4059-4911-b008-680848fffdbe\") " pod="openstack/neutron-db-sync-grfwk"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.891537 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drgz2\" (UniqueName: \"kubernetes.io/projected/4c6db9e6-4059-4911-b008-680848fffdbe-kube-api-access-drgz2\") pod \"neutron-db-sync-grfwk\" (UID: \"4c6db9e6-4059-4911-b008-680848fffdbe\") " pod="openstack/neutron-db-sync-grfwk"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.891589 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4c6db9e6-4059-4911-b008-680848fffdbe-config\") pod \"neutron-db-sync-grfwk\" (UID: \"4c6db9e6-4059-4911-b008-680848fffdbe\") " pod="openstack/neutron-db-sync-grfwk"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.992291 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c6db9e6-4059-4911-b008-680848fffdbe-combined-ca-bundle\") pod \"neutron-db-sync-grfwk\" (UID: \"4c6db9e6-4059-4911-b008-680848fffdbe\") " pod="openstack/neutron-db-sync-grfwk"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.992549 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drgz2\" (UniqueName: \"kubernetes.io/projected/4c6db9e6-4059-4911-b008-680848fffdbe-kube-api-access-drgz2\") pod \"neutron-db-sync-grfwk\" (UID: \"4c6db9e6-4059-4911-b008-680848fffdbe\") " pod="openstack/neutron-db-sync-grfwk"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.992666 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4c6db9e6-4059-4911-b008-680848fffdbe-config\") pod \"neutron-db-sync-grfwk\" (UID: \"4c6db9e6-4059-4911-b008-680848fffdbe\") " pod="openstack/neutron-db-sync-grfwk"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.999119 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c6db9e6-4059-4911-b008-680848fffdbe-combined-ca-bundle\") pod \"neutron-db-sync-grfwk\" (UID: \"4c6db9e6-4059-4911-b008-680848fffdbe\") " pod="openstack/neutron-db-sync-grfwk"
Mar 20 08:47:04 crc kubenswrapper[5136]: I0320 08:47:04.001360 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4c6db9e6-4059-4911-b008-680848fffdbe-config\") pod \"neutron-db-sync-grfwk\" (UID: \"4c6db9e6-4059-4911-b008-680848fffdbe\") " pod="openstack/neutron-db-sync-grfwk"
Mar 20 08:47:04 crc kubenswrapper[5136]: I0320 08:47:04.017762 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drgz2\" (UniqueName: \"kubernetes.io/projected/4c6db9e6-4059-4911-b008-680848fffdbe-kube-api-access-drgz2\") pod \"neutron-db-sync-grfwk\" (UID: \"4c6db9e6-4059-4911-b008-680848fffdbe\") " pod="openstack/neutron-db-sync-grfwk"
Mar 20 08:47:04 crc kubenswrapper[5136]: I0320 08:47:04.081646 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-grfwk"
Mar 20 08:47:04 crc kubenswrapper[5136]: I0320 08:47:04.528678 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-grfwk"]
Mar 20 08:47:04 crc kubenswrapper[5136]: I0320 08:47:04.707342 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-grfwk" event={"ID":"4c6db9e6-4059-4911-b008-680848fffdbe","Type":"ContainerStarted","Data":"340051113d29f7efbc695a906b0061f19d9714ea49c2643f84b952194ea4de4c"}
Mar 20 08:47:04 crc kubenswrapper[5136]: I0320 08:47:04.707662 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-grfwk" event={"ID":"4c6db9e6-4059-4911-b008-680848fffdbe","Type":"ContainerStarted","Data":"71a2b577b4f8974b06ad9600916dc0bf420b74f2ea8b24f702012664dbd73634"}
Mar 20 08:47:04 crc kubenswrapper[5136]: I0320 08:47:04.725204 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-grfwk" podStartSLOduration=1.725184265 podStartE2EDuration="1.725184265s" podCreationTimestamp="2026-03-20 08:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:47:04.721621694 +0000 UTC m=+7056.980932845" watchObservedRunningTime="2026-03-20 08:47:04.725184265 +0000 UTC m=+7056.984495416"
Mar 20 08:47:09 crc kubenswrapper[5136]: I0320 08:47:09.751427 5136 generic.go:334] "Generic (PLEG): container finished" podID="4c6db9e6-4059-4911-b008-680848fffdbe" containerID="340051113d29f7efbc695a906b0061f19d9714ea49c2643f84b952194ea4de4c" exitCode=0
Mar 20 08:47:09 crc kubenswrapper[5136]: I0320 08:47:09.751536 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-grfwk" event={"ID":"4c6db9e6-4059-4911-b008-680848fffdbe","Type":"ContainerDied","Data":"340051113d29f7efbc695a906b0061f19d9714ea49c2643f84b952194ea4de4c"}
Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.111419 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-grfwk"
Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.116108 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drgz2\" (UniqueName: \"kubernetes.io/projected/4c6db9e6-4059-4911-b008-680848fffdbe-kube-api-access-drgz2\") pod \"4c6db9e6-4059-4911-b008-680848fffdbe\" (UID: \"4c6db9e6-4059-4911-b008-680848fffdbe\") "
Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.116232 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4c6db9e6-4059-4911-b008-680848fffdbe-config\") pod \"4c6db9e6-4059-4911-b008-680848fffdbe\" (UID: \"4c6db9e6-4059-4911-b008-680848fffdbe\") "
Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.116363 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c6db9e6-4059-4911-b008-680848fffdbe-combined-ca-bundle\") pod \"4c6db9e6-4059-4911-b008-680848fffdbe\" (UID: \"4c6db9e6-4059-4911-b008-680848fffdbe\") "
Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.121743 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c6db9e6-4059-4911-b008-680848fffdbe-kube-api-access-drgz2" (OuterVolumeSpecName: "kube-api-access-drgz2") pod "4c6db9e6-4059-4911-b008-680848fffdbe" (UID: "4c6db9e6-4059-4911-b008-680848fffdbe"). InnerVolumeSpecName "kube-api-access-drgz2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.147495 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c6db9e6-4059-4911-b008-680848fffdbe-config" (OuterVolumeSpecName: "config") pod "4c6db9e6-4059-4911-b008-680848fffdbe" (UID: "4c6db9e6-4059-4911-b008-680848fffdbe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.147786 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c6db9e6-4059-4911-b008-680848fffdbe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c6db9e6-4059-4911-b008-680848fffdbe" (UID: "4c6db9e6-4059-4911-b008-680848fffdbe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.217373 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c6db9e6-4059-4911-b008-680848fffdbe-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.217405 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drgz2\" (UniqueName: \"kubernetes.io/projected/4c6db9e6-4059-4911-b008-680848fffdbe-kube-api-access-drgz2\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.217417 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4c6db9e6-4059-4911-b008-680848fffdbe-config\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.773255 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-grfwk" event={"ID":"4c6db9e6-4059-4911-b008-680848fffdbe","Type":"ContainerDied","Data":"71a2b577b4f8974b06ad9600916dc0bf420b74f2ea8b24f702012664dbd73634"}
Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.773308 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71a2b577b4f8974b06ad9600916dc0bf420b74f2ea8b24f702012664dbd73634"
Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.773332 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-grfwk"
Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.937055 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6968b46cdc-n6kjz"]
Mar 20 08:47:11 crc kubenswrapper[5136]: E0320 08:47:11.937588 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c6db9e6-4059-4911-b008-680848fffdbe" containerName="neutron-db-sync"
Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.937603 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c6db9e6-4059-4911-b008-680848fffdbe" containerName="neutron-db-sync"
Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.937828 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c6db9e6-4059-4911-b008-680848fffdbe" containerName="neutron-db-sync"
Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.938662 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz"
Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.951438 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6968b46cdc-n6kjz"]
Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.022227 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-86b9496f44-69p9k"]
Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.023620 5136 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.027807 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.027956 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.028147 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-v6fw7" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.028363 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.032551 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-combined-ca-bundle\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.032607 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dxjq\" (UniqueName: \"kubernetes.io/projected/6470043c-e2e0-4acd-8c90-5f38ffca2924-kube-api-access-8dxjq\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.032650 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj86x\" (UniqueName: \"kubernetes.io/projected/4a740a83-3e08-402b-9e5b-6c8a62a80435-kube-api-access-nj86x\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc 
kubenswrapper[5136]: I0320 08:47:12.032683 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-ovndb-tls-certs\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.032710 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-httpd-config\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.032785 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-ovsdbserver-nb\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.032824 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-ovsdbserver-sb\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.032882 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-dns-svc\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 
crc kubenswrapper[5136]: I0320 08:47:12.032940 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-config\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.032991 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-config\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.043173 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86b9496f44-69p9k"] Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.133614 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-ovndb-tls-certs\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.133698 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-httpd-config\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.133769 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-ovsdbserver-nb\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: 
\"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.133795 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-ovsdbserver-sb\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.133854 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-dns-svc\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.133907 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-config\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.133953 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-config\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.134005 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-combined-ca-bundle\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc 
kubenswrapper[5136]: I0320 08:47:12.134028 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dxjq\" (UniqueName: \"kubernetes.io/projected/6470043c-e2e0-4acd-8c90-5f38ffca2924-kube-api-access-8dxjq\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.134186 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj86x\" (UniqueName: \"kubernetes.io/projected/4a740a83-3e08-402b-9e5b-6c8a62a80435-kube-api-access-nj86x\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.134714 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-ovsdbserver-nb\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.135114 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-ovsdbserver-sb\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.135249 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-config\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.135355 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-dns-svc\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.143303 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-config\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.143503 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-combined-ca-bundle\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.144388 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-httpd-config\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.148679 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-ovndb-tls-certs\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.151489 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj86x\" (UniqueName: 
\"kubernetes.io/projected/4a740a83-3e08-402b-9e5b-6c8a62a80435-kube-api-access-nj86x\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.156691 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dxjq\" (UniqueName: \"kubernetes.io/projected/6470043c-e2e0-4acd-8c90-5f38ffca2924-kube-api-access-8dxjq\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.249765 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qhpzm"] Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.251423 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qhpzm" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.254417 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.267426 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qhpzm"] Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.337540 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.338657 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05900948-fec4-4c61-846c-648b8e5cf6b2-catalog-content\") pod \"community-operators-qhpzm\" (UID: \"05900948-fec4-4c61-846c-648b8e5cf6b2\") " pod="openshift-marketplace/community-operators-qhpzm" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.338734 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scxxp\" (UniqueName: \"kubernetes.io/projected/05900948-fec4-4c61-846c-648b8e5cf6b2-kube-api-access-scxxp\") pod \"community-operators-qhpzm\" (UID: \"05900948-fec4-4c61-846c-648b8e5cf6b2\") " pod="openshift-marketplace/community-operators-qhpzm" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.338779 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05900948-fec4-4c61-846c-648b8e5cf6b2-utilities\") pod \"community-operators-qhpzm\" (UID: \"05900948-fec4-4c61-846c-648b8e5cf6b2\") " pod="openshift-marketplace/community-operators-qhpzm" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.439979 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05900948-fec4-4c61-846c-648b8e5cf6b2-utilities\") pod \"community-operators-qhpzm\" (UID: \"05900948-fec4-4c61-846c-648b8e5cf6b2\") " pod="openshift-marketplace/community-operators-qhpzm" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.440366 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05900948-fec4-4c61-846c-648b8e5cf6b2-catalog-content\") pod \"community-operators-qhpzm\" (UID: 
\"05900948-fec4-4c61-846c-648b8e5cf6b2\") " pod="openshift-marketplace/community-operators-qhpzm" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.440411 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scxxp\" (UniqueName: \"kubernetes.io/projected/05900948-fec4-4c61-846c-648b8e5cf6b2-kube-api-access-scxxp\") pod \"community-operators-qhpzm\" (UID: \"05900948-fec4-4c61-846c-648b8e5cf6b2\") " pod="openshift-marketplace/community-operators-qhpzm" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.441173 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05900948-fec4-4c61-846c-648b8e5cf6b2-utilities\") pod \"community-operators-qhpzm\" (UID: \"05900948-fec4-4c61-846c-648b8e5cf6b2\") " pod="openshift-marketplace/community-operators-qhpzm" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.441424 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05900948-fec4-4c61-846c-648b8e5cf6b2-catalog-content\") pod \"community-operators-qhpzm\" (UID: \"05900948-fec4-4c61-846c-648b8e5cf6b2\") " pod="openshift-marketplace/community-operators-qhpzm" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.459479 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scxxp\" (UniqueName: \"kubernetes.io/projected/05900948-fec4-4c61-846c-648b8e5cf6b2-kube-api-access-scxxp\") pod \"community-operators-qhpzm\" (UID: \"05900948-fec4-4c61-846c-648b8e5cf6b2\") " pod="openshift-marketplace/community-operators-qhpzm" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.577270 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qhpzm" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.752678 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6968b46cdc-n6kjz"] Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.792011 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" event={"ID":"4a740a83-3e08-402b-9e5b-6c8a62a80435","Type":"ContainerStarted","Data":"676c8c5cdb1dfc8268622e52dd2300c796e32f8a976b6c157621d92d03db62f0"} Mar 20 08:47:13 crc kubenswrapper[5136]: I0320 08:47:13.042518 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86b9496f44-69p9k"] Mar 20 08:47:13 crc kubenswrapper[5136]: I0320 08:47:13.070143 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qhpzm"] Mar 20 08:47:13 crc kubenswrapper[5136]: W0320 08:47:13.134930 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05900948_fec4_4c61_846c_648b8e5cf6b2.slice/crio-4213271971af3275b36a135c5b91b5e7835dfce4ad7bd980eb7f80a6cc2afa83 WatchSource:0}: Error finding container 4213271971af3275b36a135c5b91b5e7835dfce4ad7bd980eb7f80a6cc2afa83: Status 404 returned error can't find the container with id 4213271971af3275b36a135c5b91b5e7835dfce4ad7bd980eb7f80a6cc2afa83 Mar 20 08:47:13 crc kubenswrapper[5136]: I0320 08:47:13.801571 5136 generic.go:334] "Generic (PLEG): container finished" podID="4a740a83-3e08-402b-9e5b-6c8a62a80435" containerID="8f73d6f969a6dcfad143a7dea5aee18ef87be55ad10ac352de6c2af3efe4415d" exitCode=0 Mar 20 08:47:13 crc kubenswrapper[5136]: I0320 08:47:13.801834 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" event={"ID":"4a740a83-3e08-402b-9e5b-6c8a62a80435","Type":"ContainerDied","Data":"8f73d6f969a6dcfad143a7dea5aee18ef87be55ad10ac352de6c2af3efe4415d"} 
Mar 20 08:47:13 crc kubenswrapper[5136]: I0320 08:47:13.804071 5136 generic.go:334] "Generic (PLEG): container finished" podID="05900948-fec4-4c61-846c-648b8e5cf6b2" containerID="41b87269bcda0978ecb207cefa0a9c1baa54bb58a7066454e13df7e7f59a8501" exitCode=0 Mar 20 08:47:13 crc kubenswrapper[5136]: I0320 08:47:13.804644 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qhpzm" event={"ID":"05900948-fec4-4c61-846c-648b8e5cf6b2","Type":"ContainerDied","Data":"41b87269bcda0978ecb207cefa0a9c1baa54bb58a7066454e13df7e7f59a8501"} Mar 20 08:47:13 crc kubenswrapper[5136]: I0320 08:47:13.804681 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qhpzm" event={"ID":"05900948-fec4-4c61-846c-648b8e5cf6b2","Type":"ContainerStarted","Data":"4213271971af3275b36a135c5b91b5e7835dfce4ad7bd980eb7f80a6cc2afa83"} Mar 20 08:47:13 crc kubenswrapper[5136]: I0320 08:47:13.806360 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86b9496f44-69p9k" event={"ID":"6470043c-e2e0-4acd-8c90-5f38ffca2924","Type":"ContainerStarted","Data":"6085d3cfa9be236fd5d06b5dab1066999d8fc73ea01064dc1e7a9abb8cfd0947"} Mar 20 08:47:13 crc kubenswrapper[5136]: I0320 08:47:13.806390 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86b9496f44-69p9k" event={"ID":"6470043c-e2e0-4acd-8c90-5f38ffca2924","Type":"ContainerStarted","Data":"be07aa939b9765dd76bed7c8c3352a387e68aa345f63f54f1be9a1351c044795"} Mar 20 08:47:13 crc kubenswrapper[5136]: I0320 08:47:13.806404 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86b9496f44-69p9k" event={"ID":"6470043c-e2e0-4acd-8c90-5f38ffca2924","Type":"ContainerStarted","Data":"33d00c289e95766dad520f73944beacae4014f8a82a254a5b586260bbef5a1d4"} Mar 20 08:47:13 crc kubenswrapper[5136]: I0320 08:47:13.806531 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:13 crc kubenswrapper[5136]: I0320 08:47:13.869461 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-86b9496f44-69p9k" podStartSLOduration=1.869446019 podStartE2EDuration="1.869446019s" podCreationTimestamp="2026-03-20 08:47:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:47:13.852381119 +0000 UTC m=+7066.111692270" watchObservedRunningTime="2026-03-20 08:47:13.869446019 +0000 UTC m=+7066.128757170" Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.308545 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7b494fbb57-cd7nw"] Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.310314 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b494fbb57-cd7nw" Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.312232 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.319177 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b494fbb57-cd7nw"] Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.325206 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.396924 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:47:14 crc kubenswrapper[5136]: E0320 08:47:14.397237 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.472307 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn4qs\" (UniqueName: \"kubernetes.io/projected/305f3f22-2f38-44c5-8e63-1f028edce331-kube-api-access-rn4qs\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw" Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.472346 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-combined-ca-bundle\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw" Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.472372 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-internal-tls-certs\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw" Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.472423 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-httpd-config\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw" Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.473114 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-public-tls-certs\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw" Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.473967 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-config\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw" Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.474080 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-ovndb-tls-certs\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw" Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.575434 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-ovndb-tls-certs\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw" Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.575509 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn4qs\" (UniqueName: \"kubernetes.io/projected/305f3f22-2f38-44c5-8e63-1f028edce331-kube-api-access-rn4qs\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw" Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.575528 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-combined-ca-bundle\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.575546 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-internal-tls-certs\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.575673 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-httpd-config\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.575848 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-public-tls-certs\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.575914 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-config\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.581036 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-httpd-config\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.581325 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-public-tls-certs\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.589532 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-internal-tls-certs\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.590564 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-ovndb-tls-certs\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.591906 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-combined-ca-bundle\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.593124 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-config\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.598775 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn4qs\" (UniqueName: \"kubernetes.io/projected/305f3f22-2f38-44c5-8e63-1f028edce331-kube-api-access-rn4qs\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.630062 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.820087 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" event={"ID":"4a740a83-3e08-402b-9e5b-6c8a62a80435","Type":"ContainerStarted","Data":"85a4536c0d633a089bcb60d3330e0a39a977db11277f10ac374ebe343ed461c2"}
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.820414 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.822308 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qhpzm" event={"ID":"05900948-fec4-4c61-846c-648b8e5cf6b2","Type":"ContainerStarted","Data":"4db76246b1a215a792533d6b3bf71116de9e3852ac22d6c08255a9f7943e303f"}
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.845222 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" podStartSLOduration=3.8452012030000002 podStartE2EDuration="3.845201203s" podCreationTimestamp="2026-03-20 08:47:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:47:14.838425794 +0000 UTC m=+7067.097736945" watchObservedRunningTime="2026-03-20 08:47:14.845201203 +0000 UTC m=+7067.104512354"
Mar 20 08:47:15 crc kubenswrapper[5136]: I0320 08:47:15.166383 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b494fbb57-cd7nw"]
Mar 20 08:47:15 crc kubenswrapper[5136]: W0320 08:47:15.167063 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod305f3f22_2f38_44c5_8e63_1f028edce331.slice/crio-98850495a9f03337c375a82dd9c14c9acf7e4cb2584c596cc8791bfade8a3bf0 WatchSource:0}: Error finding container 98850495a9f03337c375a82dd9c14c9acf7e4cb2584c596cc8791bfade8a3bf0: Status 404 returned error can't find the container with id 98850495a9f03337c375a82dd9c14c9acf7e4cb2584c596cc8791bfade8a3bf0
Mar 20 08:47:15 crc kubenswrapper[5136]: I0320 08:47:15.850034 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b494fbb57-cd7nw" event={"ID":"305f3f22-2f38-44c5-8e63-1f028edce331","Type":"ContainerStarted","Data":"b54ae1c896c24440630a7756d526255f3def96dbed5cb096fc4d77997e706367"}
Mar 20 08:47:15 crc kubenswrapper[5136]: I0320 08:47:15.850296 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b494fbb57-cd7nw" event={"ID":"305f3f22-2f38-44c5-8e63-1f028edce331","Type":"ContainerStarted","Data":"293a5f06d0837fa0b5aa6b166b5c1bc91790dda04631163e44e07b267901142a"}
Mar 20 08:47:15 crc kubenswrapper[5136]: I0320 08:47:15.850311 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:15 crc kubenswrapper[5136]: I0320 08:47:15.850320 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b494fbb57-cd7nw" event={"ID":"305f3f22-2f38-44c5-8e63-1f028edce331","Type":"ContainerStarted","Data":"98850495a9f03337c375a82dd9c14c9acf7e4cb2584c596cc8791bfade8a3bf0"}
Mar 20 08:47:15 crc kubenswrapper[5136]: I0320 08:47:15.853180 5136 generic.go:334] "Generic (PLEG): container finished" podID="05900948-fec4-4c61-846c-648b8e5cf6b2" containerID="4db76246b1a215a792533d6b3bf71116de9e3852ac22d6c08255a9f7943e303f" exitCode=0
Mar 20 08:47:15 crc kubenswrapper[5136]: I0320 08:47:15.853314 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qhpzm" event={"ID":"05900948-fec4-4c61-846c-648b8e5cf6b2","Type":"ContainerDied","Data":"4db76246b1a215a792533d6b3bf71116de9e3852ac22d6c08255a9f7943e303f"}
Mar 20 08:47:15 crc kubenswrapper[5136]: I0320 08:47:15.867354 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7b494fbb57-cd7nw" podStartSLOduration=1.867338088 podStartE2EDuration="1.867338088s" podCreationTimestamp="2026-03-20 08:47:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:47:15.865244183 +0000 UTC m=+7068.124555334" watchObservedRunningTime="2026-03-20 08:47:15.867338088 +0000 UTC m=+7068.126649239"
Mar 20 08:47:16 crc kubenswrapper[5136]: I0320 08:47:16.862081 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qhpzm" event={"ID":"05900948-fec4-4c61-846c-648b8e5cf6b2","Type":"ContainerStarted","Data":"4f940521dd4f457e42d4916cc916dfba2eaa15f9d93925d75ccb9dbcefdb2305"}
Mar 20 08:47:16 crc kubenswrapper[5136]: I0320 08:47:16.885973 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qhpzm" podStartSLOduration=2.33199341 podStartE2EDuration="4.885956703s" podCreationTimestamp="2026-03-20 08:47:12 +0000 UTC" firstStartedPulling="2026-03-20 08:47:13.805647019 +0000 UTC m=+7066.064958170" lastFinishedPulling="2026-03-20 08:47:16.359610302 +0000 UTC m=+7068.618921463" observedRunningTime="2026-03-20 08:47:16.877944734 +0000 UTC m=+7069.137255885" watchObservedRunningTime="2026-03-20 08:47:16.885956703 +0000 UTC m=+7069.145267854"
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.255962 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz"
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.337306 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-777959d579-j5npb"]
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.337561 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-777959d579-j5npb" podUID="bd5e6126-8bb0-497c-9a3a-856e96128e83" containerName="dnsmasq-dns" containerID="cri-o://f9236230b9fa7da8ff26e29b3fe972125ee08ee7e480a9780d799c8339c58288" gracePeriod=10
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.578452 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qhpzm"
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.578857 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qhpzm"
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.648504 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qhpzm"
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.824436 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-777959d579-j5npb"
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.918224 5136 generic.go:334] "Generic (PLEG): container finished" podID="bd5e6126-8bb0-497c-9a3a-856e96128e83" containerID="f9236230b9fa7da8ff26e29b3fe972125ee08ee7e480a9780d799c8339c58288" exitCode=0
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.918281 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-777959d579-j5npb"
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.918336 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-777959d579-j5npb" event={"ID":"bd5e6126-8bb0-497c-9a3a-856e96128e83","Type":"ContainerDied","Data":"f9236230b9fa7da8ff26e29b3fe972125ee08ee7e480a9780d799c8339c58288"}
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.918365 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-777959d579-j5npb" event={"ID":"bd5e6126-8bb0-497c-9a3a-856e96128e83","Type":"ContainerDied","Data":"148add14ed98710953a69caa68c00199c9d76d0d24f031860e3e3fd6ef37946b"}
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.918387 5136 scope.go:117] "RemoveContainer" containerID="f9236230b9fa7da8ff26e29b3fe972125ee08ee7e480a9780d799c8339c58288"
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.928136 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-ovsdbserver-nb\") pod \"bd5e6126-8bb0-497c-9a3a-856e96128e83\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") "
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.928397 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-ovsdbserver-sb\") pod \"bd5e6126-8bb0-497c-9a3a-856e96128e83\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") "
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.928451 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzdjb\" (UniqueName: \"kubernetes.io/projected/bd5e6126-8bb0-497c-9a3a-856e96128e83-kube-api-access-hzdjb\") pod \"bd5e6126-8bb0-497c-9a3a-856e96128e83\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") "
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.928483 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-config\") pod \"bd5e6126-8bb0-497c-9a3a-856e96128e83\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") "
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.928590 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-dns-svc\") pod \"bd5e6126-8bb0-497c-9a3a-856e96128e83\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") "
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.938229 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd5e6126-8bb0-497c-9a3a-856e96128e83-kube-api-access-hzdjb" (OuterVolumeSpecName: "kube-api-access-hzdjb") pod "bd5e6126-8bb0-497c-9a3a-856e96128e83" (UID: "bd5e6126-8bb0-497c-9a3a-856e96128e83"). InnerVolumeSpecName "kube-api-access-hzdjb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.940868 5136 scope.go:117] "RemoveContainer" containerID="721920444b40b4f18631c1504b8c2ada795b095948db82ee30360288e3661217"
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.964372 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qhpzm"
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.976894 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-config" (OuterVolumeSpecName: "config") pod "bd5e6126-8bb0-497c-9a3a-856e96128e83" (UID: "bd5e6126-8bb0-497c-9a3a-856e96128e83"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.987781 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bd5e6126-8bb0-497c-9a3a-856e96128e83" (UID: "bd5e6126-8bb0-497c-9a3a-856e96128e83"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.990259 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bd5e6126-8bb0-497c-9a3a-856e96128e83" (UID: "bd5e6126-8bb0-497c-9a3a-856e96128e83"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.992079 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bd5e6126-8bb0-497c-9a3a-856e96128e83" (UID: "bd5e6126-8bb0-497c-9a3a-856e96128e83"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:47:23 crc kubenswrapper[5136]: I0320 08:47:23.009955 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qhpzm"]
Mar 20 08:47:23 crc kubenswrapper[5136]: I0320 08:47:23.030709 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:23 crc kubenswrapper[5136]: I0320 08:47:23.030735 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:23 crc kubenswrapper[5136]: I0320 08:47:23.030766 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:23 crc kubenswrapper[5136]: I0320 08:47:23.030777 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzdjb\" (UniqueName: \"kubernetes.io/projected/bd5e6126-8bb0-497c-9a3a-856e96128e83-kube-api-access-hzdjb\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:23 crc kubenswrapper[5136]: I0320 08:47:23.030786 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-config\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:23 crc kubenswrapper[5136]: I0320 08:47:23.040401 5136 scope.go:117] "RemoveContainer" containerID="f9236230b9fa7da8ff26e29b3fe972125ee08ee7e480a9780d799c8339c58288"
Mar 20 08:47:23 crc kubenswrapper[5136]: E0320 08:47:23.040916 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9236230b9fa7da8ff26e29b3fe972125ee08ee7e480a9780d799c8339c58288\": container with ID starting with f9236230b9fa7da8ff26e29b3fe972125ee08ee7e480a9780d799c8339c58288 not found: ID does not exist" containerID="f9236230b9fa7da8ff26e29b3fe972125ee08ee7e480a9780d799c8339c58288"
Mar 20 08:47:23 crc kubenswrapper[5136]: I0320 08:47:23.040948 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9236230b9fa7da8ff26e29b3fe972125ee08ee7e480a9780d799c8339c58288"} err="failed to get container status \"f9236230b9fa7da8ff26e29b3fe972125ee08ee7e480a9780d799c8339c58288\": rpc error: code = NotFound desc = could not find container \"f9236230b9fa7da8ff26e29b3fe972125ee08ee7e480a9780d799c8339c58288\": container with ID starting with f9236230b9fa7da8ff26e29b3fe972125ee08ee7e480a9780d799c8339c58288 not found: ID does not exist"
Mar 20 08:47:23 crc kubenswrapper[5136]: I0320 08:47:23.040967 5136 scope.go:117] "RemoveContainer" containerID="721920444b40b4f18631c1504b8c2ada795b095948db82ee30360288e3661217"
Mar 20 08:47:23 crc kubenswrapper[5136]: E0320 08:47:23.042515 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"721920444b40b4f18631c1504b8c2ada795b095948db82ee30360288e3661217\": container with ID starting with 721920444b40b4f18631c1504b8c2ada795b095948db82ee30360288e3661217 not found: ID does not exist" containerID="721920444b40b4f18631c1504b8c2ada795b095948db82ee30360288e3661217"
Mar 20 08:47:23 crc kubenswrapper[5136]: I0320 08:47:23.042559 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"721920444b40b4f18631c1504b8c2ada795b095948db82ee30360288e3661217"} err="failed to get container status \"721920444b40b4f18631c1504b8c2ada795b095948db82ee30360288e3661217\": rpc error: code = NotFound desc = could not find container \"721920444b40b4f18631c1504b8c2ada795b095948db82ee30360288e3661217\": container with ID starting with 721920444b40b4f18631c1504b8c2ada795b095948db82ee30360288e3661217 not found: ID does not exist"
Mar 20 08:47:23 crc kubenswrapper[5136]: I0320 08:47:23.247475 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-777959d579-j5npb"]
Mar 20 08:47:23 crc kubenswrapper[5136]: I0320 08:47:23.255193 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-777959d579-j5npb"]
Mar 20 08:47:24 crc kubenswrapper[5136]: I0320 08:47:24.407372 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd5e6126-8bb0-497c-9a3a-856e96128e83" path="/var/lib/kubelet/pods/bd5e6126-8bb0-497c-9a3a-856e96128e83/volumes"
Mar 20 08:47:24 crc kubenswrapper[5136]: I0320 08:47:24.934315 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qhpzm" podUID="05900948-fec4-4c61-846c-648b8e5cf6b2" containerName="registry-server" containerID="cri-o://4f940521dd4f457e42d4916cc916dfba2eaa15f9d93925d75ccb9dbcefdb2305" gracePeriod=2
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.379733 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qhpzm"
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.396994 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a"
Mar 20 08:47:25 crc kubenswrapper[5136]: E0320 08:47:25.397394 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.470163 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05900948-fec4-4c61-846c-648b8e5cf6b2-utilities\") pod \"05900948-fec4-4c61-846c-648b8e5cf6b2\" (UID: \"05900948-fec4-4c61-846c-648b8e5cf6b2\") "
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.470241 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05900948-fec4-4c61-846c-648b8e5cf6b2-catalog-content\") pod \"05900948-fec4-4c61-846c-648b8e5cf6b2\" (UID: \"05900948-fec4-4c61-846c-648b8e5cf6b2\") "
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.470342 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scxxp\" (UniqueName: \"kubernetes.io/projected/05900948-fec4-4c61-846c-648b8e5cf6b2-kube-api-access-scxxp\") pod \"05900948-fec4-4c61-846c-648b8e5cf6b2\" (UID: \"05900948-fec4-4c61-846c-648b8e5cf6b2\") "
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.471953 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05900948-fec4-4c61-846c-648b8e5cf6b2-utilities" (OuterVolumeSpecName: "utilities") pod "05900948-fec4-4c61-846c-648b8e5cf6b2" (UID: "05900948-fec4-4c61-846c-648b8e5cf6b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.475043 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05900948-fec4-4c61-846c-648b8e5cf6b2-kube-api-access-scxxp" (OuterVolumeSpecName: "kube-api-access-scxxp") pod "05900948-fec4-4c61-846c-648b8e5cf6b2" (UID: "05900948-fec4-4c61-846c-648b8e5cf6b2"). InnerVolumeSpecName "kube-api-access-scxxp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.572651 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05900948-fec4-4c61-846c-648b8e5cf6b2-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.572682 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scxxp\" (UniqueName: \"kubernetes.io/projected/05900948-fec4-4c61-846c-648b8e5cf6b2-kube-api-access-scxxp\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.601308 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05900948-fec4-4c61-846c-648b8e5cf6b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05900948-fec4-4c61-846c-648b8e5cf6b2" (UID: "05900948-fec4-4c61-846c-648b8e5cf6b2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.674347 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05900948-fec4-4c61-846c-648b8e5cf6b2-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.944414 5136 generic.go:334] "Generic (PLEG): container finished" podID="05900948-fec4-4c61-846c-648b8e5cf6b2" containerID="4f940521dd4f457e42d4916cc916dfba2eaa15f9d93925d75ccb9dbcefdb2305" exitCode=0
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.944500 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qhpzm" event={"ID":"05900948-fec4-4c61-846c-648b8e5cf6b2","Type":"ContainerDied","Data":"4f940521dd4f457e42d4916cc916dfba2eaa15f9d93925d75ccb9dbcefdb2305"}
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.944557 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qhpzm" event={"ID":"05900948-fec4-4c61-846c-648b8e5cf6b2","Type":"ContainerDied","Data":"4213271971af3275b36a135c5b91b5e7835dfce4ad7bd980eb7f80a6cc2afa83"}
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.944588 5136 scope.go:117] "RemoveContainer" containerID="4f940521dd4f457e42d4916cc916dfba2eaa15f9d93925d75ccb9dbcefdb2305"
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.944582 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qhpzm"
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.963192 5136 scope.go:117] "RemoveContainer" containerID="4db76246b1a215a792533d6b3bf71116de9e3852ac22d6c08255a9f7943e303f"
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.979715 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qhpzm"]
Mar 20 08:47:26 crc kubenswrapper[5136]: I0320 08:47:26.026519 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qhpzm"]
Mar 20 08:47:26 crc kubenswrapper[5136]: I0320 08:47:26.027280 5136 scope.go:117] "RemoveContainer" containerID="41b87269bcda0978ecb207cefa0a9c1baa54bb58a7066454e13df7e7f59a8501"
Mar 20 08:47:26 crc kubenswrapper[5136]: I0320 08:47:26.066185 5136 scope.go:117] "RemoveContainer" containerID="4f940521dd4f457e42d4916cc916dfba2eaa15f9d93925d75ccb9dbcefdb2305"
Mar 20 08:47:26 crc kubenswrapper[5136]: E0320 08:47:26.066594 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f940521dd4f457e42d4916cc916dfba2eaa15f9d93925d75ccb9dbcefdb2305\": container with ID starting with 4f940521dd4f457e42d4916cc916dfba2eaa15f9d93925d75ccb9dbcefdb2305 not found: ID does not exist" containerID="4f940521dd4f457e42d4916cc916dfba2eaa15f9d93925d75ccb9dbcefdb2305"
Mar 20 08:47:26 crc kubenswrapper[5136]: I0320 08:47:26.066638 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f940521dd4f457e42d4916cc916dfba2eaa15f9d93925d75ccb9dbcefdb2305"} err="failed to get container status \"4f940521dd4f457e42d4916cc916dfba2eaa15f9d93925d75ccb9dbcefdb2305\": rpc error: code = NotFound desc = could not find container \"4f940521dd4f457e42d4916cc916dfba2eaa15f9d93925d75ccb9dbcefdb2305\": container with ID starting with 4f940521dd4f457e42d4916cc916dfba2eaa15f9d93925d75ccb9dbcefdb2305 not found: ID does not exist"
Mar 20 08:47:26 crc kubenswrapper[5136]: I0320 08:47:26.066664 5136 scope.go:117] "RemoveContainer" containerID="4db76246b1a215a792533d6b3bf71116de9e3852ac22d6c08255a9f7943e303f"
Mar 20 08:47:26 crc kubenswrapper[5136]: E0320 08:47:26.067103 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4db76246b1a215a792533d6b3bf71116de9e3852ac22d6c08255a9f7943e303f\": container with ID starting with 4db76246b1a215a792533d6b3bf71116de9e3852ac22d6c08255a9f7943e303f not found: ID does not exist" containerID="4db76246b1a215a792533d6b3bf71116de9e3852ac22d6c08255a9f7943e303f"
Mar 20 08:47:26 crc kubenswrapper[5136]: I0320 08:47:26.067130 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4db76246b1a215a792533d6b3bf71116de9e3852ac22d6c08255a9f7943e303f"} err="failed to get container status \"4db76246b1a215a792533d6b3bf71116de9e3852ac22d6c08255a9f7943e303f\": rpc error: code = NotFound desc = could not find container \"4db76246b1a215a792533d6b3bf71116de9e3852ac22d6c08255a9f7943e303f\": container with ID starting with 4db76246b1a215a792533d6b3bf71116de9e3852ac22d6c08255a9f7943e303f not found: ID does not exist"
Mar 20 08:47:26 crc kubenswrapper[5136]: I0320 08:47:26.067146 5136 scope.go:117] "RemoveContainer" containerID="41b87269bcda0978ecb207cefa0a9c1baa54bb58a7066454e13df7e7f59a8501"
Mar 20 08:47:26 crc kubenswrapper[5136]: E0320 08:47:26.067535 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41b87269bcda0978ecb207cefa0a9c1baa54bb58a7066454e13df7e7f59a8501\": container with ID starting with 41b87269bcda0978ecb207cefa0a9c1baa54bb58a7066454e13df7e7f59a8501 not found: ID does not exist" containerID="41b87269bcda0978ecb207cefa0a9c1baa54bb58a7066454e13df7e7f59a8501"
Mar 20 08:47:26 crc kubenswrapper[5136]: I0320 08:47:26.067557 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41b87269bcda0978ecb207cefa0a9c1baa54bb58a7066454e13df7e7f59a8501"} err="failed to get container status \"41b87269bcda0978ecb207cefa0a9c1baa54bb58a7066454e13df7e7f59a8501\": rpc error: code = NotFound desc = could not find container \"41b87269bcda0978ecb207cefa0a9c1baa54bb58a7066454e13df7e7f59a8501\": container with ID starting with 41b87269bcda0978ecb207cefa0a9c1baa54bb58a7066454e13df7e7f59a8501 not found: ID does not exist"
Mar 20 08:47:26 crc kubenswrapper[5136]: I0320 08:47:26.406734 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05900948-fec4-4c61-846c-648b8e5cf6b2" path="/var/lib/kubelet/pods/05900948-fec4-4c61-846c-648b8e5cf6b2/volumes"
Mar 20 08:47:39 crc kubenswrapper[5136]: I0320 08:47:39.396515 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a"
Mar 20 08:47:39 crc kubenswrapper[5136]: E0320 08:47:39.397329 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:47:42 crc kubenswrapper[5136]: I0320 08:47:42.345667 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-86b9496f44-69p9k"
Mar 20 08:47:44 crc kubenswrapper[5136]: I0320 08:47:44.640626 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:44 crc kubenswrapper[5136]: I0320 08:47:44.710397 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-86b9496f44-69p9k"]
Mar 20 08:47:44 crc kubenswrapper[5136]: I0320 08:47:44.710647 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-86b9496f44-69p9k" podUID="6470043c-e2e0-4acd-8c90-5f38ffca2924" containerName="neutron-api" containerID="cri-o://be07aa939b9765dd76bed7c8c3352a387e68aa345f63f54f1be9a1351c044795" gracePeriod=30
Mar 20 08:47:44 crc kubenswrapper[5136]: I0320 08:47:44.710803 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-86b9496f44-69p9k" podUID="6470043c-e2e0-4acd-8c90-5f38ffca2924" containerName="neutron-httpd" containerID="cri-o://6085d3cfa9be236fd5d06b5dab1066999d8fc73ea01064dc1e7a9abb8cfd0947" gracePeriod=30
Mar 20 08:47:45 crc kubenswrapper[5136]: I0320 08:47:45.122330 5136 generic.go:334] "Generic (PLEG): container finished" podID="6470043c-e2e0-4acd-8c90-5f38ffca2924" containerID="6085d3cfa9be236fd5d06b5dab1066999d8fc73ea01064dc1e7a9abb8cfd0947" exitCode=0
Mar 20 08:47:45 crc kubenswrapper[5136]: I0320 08:47:45.122428 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86b9496f44-69p9k" event={"ID":"6470043c-e2e0-4acd-8c90-5f38ffca2924","Type":"ContainerDied","Data":"6085d3cfa9be236fd5d06b5dab1066999d8fc73ea01064dc1e7a9abb8cfd0947"}
Mar 20 08:47:48 crc kubenswrapper[5136]: I0320 08:47:48.850958 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86b9496f44-69p9k"
Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.049606 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-combined-ca-bundle\") pod \"6470043c-e2e0-4acd-8c90-5f38ffca2924\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") "
Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.049660 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-httpd-config\") pod \"6470043c-e2e0-4acd-8c90-5f38ffca2924\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") "
Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.049743 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-config\") pod \"6470043c-e2e0-4acd-8c90-5f38ffca2924\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") "
Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.049879 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-ovndb-tls-certs\") pod \"6470043c-e2e0-4acd-8c90-5f38ffca2924\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") "
Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.049953 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dxjq\" (UniqueName: \"kubernetes.io/projected/6470043c-e2e0-4acd-8c90-5f38ffca2924-kube-api-access-8dxjq\") pod \"6470043c-e2e0-4acd-8c90-5f38ffca2924\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") "
Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.067770 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "6470043c-e2e0-4acd-8c90-5f38ffca2924" (UID: "6470043c-e2e0-4acd-8c90-5f38ffca2924"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.067792 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6470043c-e2e0-4acd-8c90-5f38ffca2924-kube-api-access-8dxjq" (OuterVolumeSpecName: "kube-api-access-8dxjq") pod "6470043c-e2e0-4acd-8c90-5f38ffca2924" (UID: "6470043c-e2e0-4acd-8c90-5f38ffca2924"). InnerVolumeSpecName "kube-api-access-8dxjq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.091683 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-config" (OuterVolumeSpecName: "config") pod "6470043c-e2e0-4acd-8c90-5f38ffca2924" (UID: "6470043c-e2e0-4acd-8c90-5f38ffca2924"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.103795 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6470043c-e2e0-4acd-8c90-5f38ffca2924" (UID: "6470043c-e2e0-4acd-8c90-5f38ffca2924"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.140302 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "6470043c-e2e0-4acd-8c90-5f38ffca2924" (UID: "6470043c-e2e0-4acd-8c90-5f38ffca2924"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.151664 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.151694 5136 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.151707 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dxjq\" (UniqueName: \"kubernetes.io/projected/6470043c-e2e0-4acd-8c90-5f38ffca2924-kube-api-access-8dxjq\") on node \"crc\" DevicePath \"\"" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.151718 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.151727 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.163107 5136 generic.go:334] "Generic (PLEG): container finished" podID="6470043c-e2e0-4acd-8c90-5f38ffca2924" containerID="be07aa939b9765dd76bed7c8c3352a387e68aa345f63f54f1be9a1351c044795" exitCode=0 Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.163185 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86b9496f44-69p9k" event={"ID":"6470043c-e2e0-4acd-8c90-5f38ffca2924","Type":"ContainerDied","Data":"be07aa939b9765dd76bed7c8c3352a387e68aa345f63f54f1be9a1351c044795"} Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 
08:47:49.163238 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86b9496f44-69p9k" event={"ID":"6470043c-e2e0-4acd-8c90-5f38ffca2924","Type":"ContainerDied","Data":"33d00c289e95766dad520f73944beacae4014f8a82a254a5b586260bbef5a1d4"} Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.163272 5136 scope.go:117] "RemoveContainer" containerID="6085d3cfa9be236fd5d06b5dab1066999d8fc73ea01064dc1e7a9abb8cfd0947" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.163544 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.196067 5136 scope.go:117] "RemoveContainer" containerID="be07aa939b9765dd76bed7c8c3352a387e68aa345f63f54f1be9a1351c044795" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.202226 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-86b9496f44-69p9k"] Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.209696 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-86b9496f44-69p9k"] Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.218677 5136 scope.go:117] "RemoveContainer" containerID="6085d3cfa9be236fd5d06b5dab1066999d8fc73ea01064dc1e7a9abb8cfd0947" Mar 20 08:47:49 crc kubenswrapper[5136]: E0320 08:47:49.219202 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6085d3cfa9be236fd5d06b5dab1066999d8fc73ea01064dc1e7a9abb8cfd0947\": container with ID starting with 6085d3cfa9be236fd5d06b5dab1066999d8fc73ea01064dc1e7a9abb8cfd0947 not found: ID does not exist" containerID="6085d3cfa9be236fd5d06b5dab1066999d8fc73ea01064dc1e7a9abb8cfd0947" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.219313 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6085d3cfa9be236fd5d06b5dab1066999d8fc73ea01064dc1e7a9abb8cfd0947"} 
err="failed to get container status \"6085d3cfa9be236fd5d06b5dab1066999d8fc73ea01064dc1e7a9abb8cfd0947\": rpc error: code = NotFound desc = could not find container \"6085d3cfa9be236fd5d06b5dab1066999d8fc73ea01064dc1e7a9abb8cfd0947\": container with ID starting with 6085d3cfa9be236fd5d06b5dab1066999d8fc73ea01064dc1e7a9abb8cfd0947 not found: ID does not exist" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.219445 5136 scope.go:117] "RemoveContainer" containerID="be07aa939b9765dd76bed7c8c3352a387e68aa345f63f54f1be9a1351c044795" Mar 20 08:47:49 crc kubenswrapper[5136]: E0320 08:47:49.220122 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be07aa939b9765dd76bed7c8c3352a387e68aa345f63f54f1be9a1351c044795\": container with ID starting with be07aa939b9765dd76bed7c8c3352a387e68aa345f63f54f1be9a1351c044795 not found: ID does not exist" containerID="be07aa939b9765dd76bed7c8c3352a387e68aa345f63f54f1be9a1351c044795" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.220156 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be07aa939b9765dd76bed7c8c3352a387e68aa345f63f54f1be9a1351c044795"} err="failed to get container status \"be07aa939b9765dd76bed7c8c3352a387e68aa345f63f54f1be9a1351c044795\": rpc error: code = NotFound desc = could not find container \"be07aa939b9765dd76bed7c8c3352a387e68aa345f63f54f1be9a1351c044795\": container with ID starting with be07aa939b9765dd76bed7c8c3352a387e68aa345f63f54f1be9a1351c044795 not found: ID does not exist" Mar 20 08:47:50 crc kubenswrapper[5136]: I0320 08:47:50.408268 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6470043c-e2e0-4acd-8c90-5f38ffca2924" path="/var/lib/kubelet/pods/6470043c-e2e0-4acd-8c90-5f38ffca2924/volumes" Mar 20 08:47:53 crc kubenswrapper[5136]: I0320 08:47:53.396661 5136 scope.go:117] "RemoveContainer" 
containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:47:53 crc kubenswrapper[5136]: E0320 08:47:53.397130 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.142725 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566608-tdwn4"] Mar 20 08:48:00 crc kubenswrapper[5136]: E0320 08:48:00.143417 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05900948-fec4-4c61-846c-648b8e5cf6b2" containerName="extract-utilities" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.143434 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="05900948-fec4-4c61-846c-648b8e5cf6b2" containerName="extract-utilities" Mar 20 08:48:00 crc kubenswrapper[5136]: E0320 08:48:00.143452 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5e6126-8bb0-497c-9a3a-856e96128e83" containerName="init" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.143462 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5e6126-8bb0-497c-9a3a-856e96128e83" containerName="init" Mar 20 08:48:00 crc kubenswrapper[5136]: E0320 08:48:00.143494 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6470043c-e2e0-4acd-8c90-5f38ffca2924" containerName="neutron-api" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.143502 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6470043c-e2e0-4acd-8c90-5f38ffca2924" containerName="neutron-api" Mar 20 08:48:00 crc kubenswrapper[5136]: E0320 08:48:00.143526 5136 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="6470043c-e2e0-4acd-8c90-5f38ffca2924" containerName="neutron-httpd" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.143534 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6470043c-e2e0-4acd-8c90-5f38ffca2924" containerName="neutron-httpd" Mar 20 08:48:00 crc kubenswrapper[5136]: E0320 08:48:00.143549 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05900948-fec4-4c61-846c-648b8e5cf6b2" containerName="extract-content" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.143557 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="05900948-fec4-4c61-846c-648b8e5cf6b2" containerName="extract-content" Mar 20 08:48:00 crc kubenswrapper[5136]: E0320 08:48:00.143570 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5e6126-8bb0-497c-9a3a-856e96128e83" containerName="dnsmasq-dns" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.143577 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5e6126-8bb0-497c-9a3a-856e96128e83" containerName="dnsmasq-dns" Mar 20 08:48:00 crc kubenswrapper[5136]: E0320 08:48:00.143589 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05900948-fec4-4c61-846c-648b8e5cf6b2" containerName="registry-server" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.143597 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="05900948-fec4-4c61-846c-648b8e5cf6b2" containerName="registry-server" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.143792 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6470043c-e2e0-4acd-8c90-5f38ffca2924" containerName="neutron-httpd" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.143826 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5e6126-8bb0-497c-9a3a-856e96128e83" containerName="dnsmasq-dns" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.143836 5136 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="05900948-fec4-4c61-846c-648b8e5cf6b2" containerName="registry-server" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.143853 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6470043c-e2e0-4acd-8c90-5f38ffca2924" containerName="neutron-api" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.144545 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566608-tdwn4" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.147587 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.147757 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.148089 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.150028 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566608-tdwn4"] Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.239122 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9mdc\" (UniqueName: \"kubernetes.io/projected/ef82e0a5-a043-48d9-82d6-132dbf0e9b74-kube-api-access-d9mdc\") pod \"auto-csr-approver-29566608-tdwn4\" (UID: \"ef82e0a5-a043-48d9-82d6-132dbf0e9b74\") " pod="openshift-infra/auto-csr-approver-29566608-tdwn4" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.340664 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9mdc\" (UniqueName: \"kubernetes.io/projected/ef82e0a5-a043-48d9-82d6-132dbf0e9b74-kube-api-access-d9mdc\") pod \"auto-csr-approver-29566608-tdwn4\" (UID: \"ef82e0a5-a043-48d9-82d6-132dbf0e9b74\") " 
pod="openshift-infra/auto-csr-approver-29566608-tdwn4" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.377031 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9mdc\" (UniqueName: \"kubernetes.io/projected/ef82e0a5-a043-48d9-82d6-132dbf0e9b74-kube-api-access-d9mdc\") pod \"auto-csr-approver-29566608-tdwn4\" (UID: \"ef82e0a5-a043-48d9-82d6-132dbf0e9b74\") " pod="openshift-infra/auto-csr-approver-29566608-tdwn4" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.479893 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566608-tdwn4" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.957123 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566608-tdwn4"] Mar 20 08:48:01 crc kubenswrapper[5136]: I0320 08:48:01.265126 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566608-tdwn4" event={"ID":"ef82e0a5-a043-48d9-82d6-132dbf0e9b74","Type":"ContainerStarted","Data":"46b0c2fb5f21a668a1cd3e8ba21fde944d41173fed7c941c91de2c944a6a6f73"} Mar 20 08:48:02 crc kubenswrapper[5136]: I0320 08:48:02.278026 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566608-tdwn4" event={"ID":"ef82e0a5-a043-48d9-82d6-132dbf0e9b74","Type":"ContainerStarted","Data":"fa17c24ddb45b337fff5f936348cd486ca94ec7240798c49bc135753e4d62ff4"} Mar 20 08:48:02 crc kubenswrapper[5136]: I0320 08:48:02.295619 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566608-tdwn4" podStartSLOduration=1.264273977 podStartE2EDuration="2.295602167s" podCreationTimestamp="2026-03-20 08:48:00 +0000 UTC" firstStartedPulling="2026-03-20 08:48:00.968931013 +0000 UTC m=+7113.228242164" lastFinishedPulling="2026-03-20 08:48:02.000259183 +0000 UTC m=+7114.259570354" observedRunningTime="2026-03-20 
08:48:02.292337475 +0000 UTC m=+7114.551648626" watchObservedRunningTime="2026-03-20 08:48:02.295602167 +0000 UTC m=+7114.554913318" Mar 20 08:48:03 crc kubenswrapper[5136]: I0320 08:48:03.298502 5136 generic.go:334] "Generic (PLEG): container finished" podID="ef82e0a5-a043-48d9-82d6-132dbf0e9b74" containerID="fa17c24ddb45b337fff5f936348cd486ca94ec7240798c49bc135753e4d62ff4" exitCode=0 Mar 20 08:48:03 crc kubenswrapper[5136]: I0320 08:48:03.298548 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566608-tdwn4" event={"ID":"ef82e0a5-a043-48d9-82d6-132dbf0e9b74","Type":"ContainerDied","Data":"fa17c24ddb45b337fff5f936348cd486ca94ec7240798c49bc135753e4d62ff4"} Mar 20 08:48:04 crc kubenswrapper[5136]: I0320 08:48:04.633518 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566608-tdwn4" Mar 20 08:48:04 crc kubenswrapper[5136]: I0320 08:48:04.725166 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9mdc\" (UniqueName: \"kubernetes.io/projected/ef82e0a5-a043-48d9-82d6-132dbf0e9b74-kube-api-access-d9mdc\") pod \"ef82e0a5-a043-48d9-82d6-132dbf0e9b74\" (UID: \"ef82e0a5-a043-48d9-82d6-132dbf0e9b74\") " Mar 20 08:48:04 crc kubenswrapper[5136]: I0320 08:48:04.733243 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef82e0a5-a043-48d9-82d6-132dbf0e9b74-kube-api-access-d9mdc" (OuterVolumeSpecName: "kube-api-access-d9mdc") pod "ef82e0a5-a043-48d9-82d6-132dbf0e9b74" (UID: "ef82e0a5-a043-48d9-82d6-132dbf0e9b74"). InnerVolumeSpecName "kube-api-access-d9mdc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:48:04 crc kubenswrapper[5136]: I0320 08:48:04.826381 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9mdc\" (UniqueName: \"kubernetes.io/projected/ef82e0a5-a043-48d9-82d6-132dbf0e9b74-kube-api-access-d9mdc\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:05 crc kubenswrapper[5136]: I0320 08:48:05.318588 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566608-tdwn4" event={"ID":"ef82e0a5-a043-48d9-82d6-132dbf0e9b74","Type":"ContainerDied","Data":"46b0c2fb5f21a668a1cd3e8ba21fde944d41173fed7c941c91de2c944a6a6f73"} Mar 20 08:48:05 crc kubenswrapper[5136]: I0320 08:48:05.318626 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46b0c2fb5f21a668a1cd3e8ba21fde944d41173fed7c941c91de2c944a6a6f73" Mar 20 08:48:05 crc kubenswrapper[5136]: I0320 08:48:05.318714 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566608-tdwn4" Mar 20 08:48:05 crc kubenswrapper[5136]: I0320 08:48:05.371657 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566602-fmfs9"] Mar 20 08:48:05 crc kubenswrapper[5136]: I0320 08:48:05.379129 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566602-fmfs9"] Mar 20 08:48:06 crc kubenswrapper[5136]: I0320 08:48:06.410436 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78e36980-52e2-4a59-9374-b2f1150fcb20" path="/var/lib/kubelet/pods/78e36980-52e2-4a59-9374-b2f1150fcb20/volumes" Mar 20 08:48:08 crc kubenswrapper[5136]: I0320 08:48:08.406649 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:48:08 crc kubenswrapper[5136]: E0320 08:48:08.406957 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:48:08 crc kubenswrapper[5136]: I0320 08:48:08.956963 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-lxzxf"] Mar 20 08:48:08 crc kubenswrapper[5136]: E0320 08:48:08.957270 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef82e0a5-a043-48d9-82d6-132dbf0e9b74" containerName="oc" Mar 20 08:48:08 crc kubenswrapper[5136]: I0320 08:48:08.957286 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef82e0a5-a043-48d9-82d6-132dbf0e9b74" containerName="oc" Mar 20 08:48:08 crc kubenswrapper[5136]: I0320 08:48:08.957447 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef82e0a5-a043-48d9-82d6-132dbf0e9b74" containerName="oc" Mar 20 08:48:08 crc kubenswrapper[5136]: I0320 08:48:08.958009 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:08 crc kubenswrapper[5136]: I0320 08:48:08.961303 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-crpvv" Mar 20 08:48:08 crc kubenswrapper[5136]: I0320 08:48:08.961430 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 20 08:48:08 crc kubenswrapper[5136]: I0320 08:48:08.961508 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 20 08:48:08 crc kubenswrapper[5136]: I0320 08:48:08.961778 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 20 08:48:08 crc kubenswrapper[5136]: I0320 08:48:08.961948 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.002356 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dbhf\" (UniqueName: \"kubernetes.io/projected/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-kube-api-access-4dbhf\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.002425 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-dispersionconf\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.002454 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-etc-swift\") pod 
\"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.002768 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-swiftconf\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.002855 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-combined-ca-bundle\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.002900 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-scripts\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.002964 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-ring-data-devices\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.007552 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-lxzxf"] Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.038378 5136 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-549cfd6bdc-gq4cl"] Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.039600 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.063378 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-549cfd6bdc-gq4cl"] Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.104321 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-etc-swift\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.104380 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-ovsdbserver-nb\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.104467 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-ovsdbserver-sb\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.104507 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-swiftconf\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: 
I0320 08:48:09.104530 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-combined-ca-bundle\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.104548 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-config\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.104569 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-scripts\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.104596 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-ring-data-devices\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.104635 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf7db\" (UniqueName: \"kubernetes.io/projected/cad94756-feb3-42e4-8c87-b0cfb638edba-kube-api-access-xf7db\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.104657 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dbhf\" (UniqueName: \"kubernetes.io/projected/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-kube-api-access-4dbhf\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.104678 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-dispersionconf\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.104703 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-dns-svc\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.104807 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-etc-swift\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.105295 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-ring-data-devices\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.105711 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-scripts\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.124755 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-dispersionconf\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.125587 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-swiftconf\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.126255 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-combined-ca-bundle\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.128581 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dbhf\" (UniqueName: \"kubernetes.io/projected/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-kube-api-access-4dbhf\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.207165 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-config\") pod 
\"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.207230 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf7db\" (UniqueName: \"kubernetes.io/projected/cad94756-feb3-42e4-8c87-b0cfb638edba-kube-api-access-xf7db\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.207261 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-dns-svc\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.207290 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-ovsdbserver-nb\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.207356 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-ovsdbserver-sb\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.208299 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-ovsdbserver-sb\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: 
\"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.208792 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-config\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.209528 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-dns-svc\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.210167 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-ovsdbserver-nb\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.229184 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf7db\" (UniqueName: \"kubernetes.io/projected/cad94756-feb3-42e4-8c87-b0cfb638edba-kube-api-access-xf7db\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.290037 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.364777 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.764216 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-lxzxf"] Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.854280 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-549cfd6bdc-gq4cl"] Mar 20 08:48:10 crc kubenswrapper[5136]: I0320 08:48:10.388666 5136 generic.go:334] "Generic (PLEG): container finished" podID="cad94756-feb3-42e4-8c87-b0cfb638edba" containerID="3fea7d5c06ec7715b7fc0e66f5644f5c5e237f2a08acb713c5d77dc706e25822" exitCode=0 Mar 20 08:48:10 crc kubenswrapper[5136]: I0320 08:48:10.388979 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" event={"ID":"cad94756-feb3-42e4-8c87-b0cfb638edba","Type":"ContainerDied","Data":"3fea7d5c06ec7715b7fc0e66f5644f5c5e237f2a08acb713c5d77dc706e25822"} Mar 20 08:48:10 crc kubenswrapper[5136]: I0320 08:48:10.389004 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" event={"ID":"cad94756-feb3-42e4-8c87-b0cfb638edba","Type":"ContainerStarted","Data":"8ec3eac30ebcf6dd9046afae810790acb77c285633e8d4470487949417fb3311"} Mar 20 08:48:10 crc kubenswrapper[5136]: I0320 08:48:10.422609 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-lxzxf" event={"ID":"1513f332-b5c6-40ca-9c3a-4ef7b1f78672","Type":"ContainerStarted","Data":"f85c3bbff6d70b386d31f6a53eb27b133a80a3ef6788031c00b163ca73516c9a"} Mar 20 08:48:11 crc kubenswrapper[5136]: I0320 08:48:11.437693 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" event={"ID":"cad94756-feb3-42e4-8c87-b0cfb638edba","Type":"ContainerStarted","Data":"fde2ab1fdb1e8fbccab29a1af1b697570f2345e5369a2aaab8edc51ff2fc185d"} Mar 20 08:48:11 crc kubenswrapper[5136]: I0320 08:48:11.438876 5136 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:11 crc kubenswrapper[5136]: I0320 08:48:11.468592 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" podStartSLOduration=2.468573263 podStartE2EDuration="2.468573263s" podCreationTimestamp="2026-03-20 08:48:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:48:11.460150851 +0000 UTC m=+7123.719462002" watchObservedRunningTime="2026-03-20 08:48:11.468573263 +0000 UTC m=+7123.727884414" Mar 20 08:48:11 crc kubenswrapper[5136]: I0320 08:48:11.947556 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-64657f9cbd-qbdxx"] Mar 20 08:48:11 crc kubenswrapper[5136]: I0320 08:48:11.953886 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:11 crc kubenswrapper[5136]: I0320 08:48:11.956534 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 20 08:48:11 crc kubenswrapper[5136]: I0320 08:48:11.961297 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-64657f9cbd-qbdxx"] Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.068118 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-combined-ca-bundle\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.068174 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-etc-swift\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.068271 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w85w\" (UniqueName: \"kubernetes.io/projected/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-kube-api-access-9w85w\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.068294 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-log-httpd\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.068330 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-config-data\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.068359 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-run-httpd\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.170452 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-combined-ca-bundle\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.170505 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-etc-swift\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.170605 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w85w\" (UniqueName: \"kubernetes.io/projected/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-kube-api-access-9w85w\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.170633 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-log-httpd\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.170681 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-config-data\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.170717 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-run-httpd\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.171222 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-run-httpd\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.172527 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-log-httpd\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.178075 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-config-data\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.193579 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w85w\" (UniqueName: \"kubernetes.io/projected/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-kube-api-access-9w85w\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.194528 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-etc-swift\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: 
\"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.195053 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-combined-ca-bundle\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.281646 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.279004 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-965f7d5f6-cshp2"] Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.280512 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.282132 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.282750 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.337052 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-965f7d5f6-cshp2"] Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.421480 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-config-data\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.421896 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-combined-ca-bundle\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.421944 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rjh6\" (UniqueName: \"kubernetes.io/projected/0007e89c-1f52-4ac8-beed-59d6db6e60fd-kube-api-access-5rjh6\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.421967 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0007e89c-1f52-4ac8-beed-59d6db6e60fd-run-httpd\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.422025 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-public-tls-certs\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.422059 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0007e89c-1f52-4ac8-beed-59d6db6e60fd-log-httpd\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.422131 
5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0007e89c-1f52-4ac8-beed-59d6db6e60fd-etc-swift\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.422160 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-internal-tls-certs\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.527837 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0007e89c-1f52-4ac8-beed-59d6db6e60fd-log-httpd\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.527975 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0007e89c-1f52-4ac8-beed-59d6db6e60fd-etc-swift\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.528008 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-internal-tls-certs\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.528056 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-config-data\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.528099 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-combined-ca-bundle\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.528148 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rjh6\" (UniqueName: \"kubernetes.io/projected/0007e89c-1f52-4ac8-beed-59d6db6e60fd-kube-api-access-5rjh6\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.528174 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0007e89c-1f52-4ac8-beed-59d6db6e60fd-run-httpd\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.528233 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-public-tls-certs\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.530910 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/0007e89c-1f52-4ac8-beed-59d6db6e60fd-log-httpd\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.541645 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0007e89c-1f52-4ac8-beed-59d6db6e60fd-run-httpd\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.554995 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-combined-ca-bundle\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.559108 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-internal-tls-certs\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.559944 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-public-tls-certs\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.562117 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0007e89c-1f52-4ac8-beed-59d6db6e60fd-etc-swift\") pod 
\"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.571716 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-config-data\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.585840 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rjh6\" (UniqueName: \"kubernetes.io/projected/0007e89c-1f52-4ac8-beed-59d6db6e60fd-kube-api-access-5rjh6\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.725142 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:14 crc kubenswrapper[5136]: I0320 08:48:14.022893 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-64657f9cbd-qbdxx"] Mar 20 08:48:14 crc kubenswrapper[5136]: I0320 08:48:14.376239 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-965f7d5f6-cshp2"] Mar 20 08:48:14 crc kubenswrapper[5136]: W0320 08:48:14.383589 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0007e89c_1f52_4ac8_beed_59d6db6e60fd.slice/crio-d98661f6d09e3f930d057fd7583b14591e52595630d9c487ff1f51b1c2eb81d0 WatchSource:0}: Error finding container d98661f6d09e3f930d057fd7583b14591e52595630d9c487ff1f51b1c2eb81d0: Status 404 returned error can't find the container with id d98661f6d09e3f930d057fd7583b14591e52595630d9c487ff1f51b1c2eb81d0 Mar 20 08:48:14 crc kubenswrapper[5136]: I0320 08:48:14.504793 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64657f9cbd-qbdxx" event={"ID":"5550afcf-085f-4f88-b901-dc9b4cf9fb7e","Type":"ContainerStarted","Data":"90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5"} Mar 20 08:48:14 crc kubenswrapper[5136]: I0320 08:48:14.504876 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64657f9cbd-qbdxx" event={"ID":"5550afcf-085f-4f88-b901-dc9b4cf9fb7e","Type":"ContainerStarted","Data":"d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4"} Mar 20 08:48:14 crc kubenswrapper[5136]: I0320 08:48:14.504893 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64657f9cbd-qbdxx" event={"ID":"5550afcf-085f-4f88-b901-dc9b4cf9fb7e","Type":"ContainerStarted","Data":"eb93409020e8530694a12115b4c16399fd5361531132f289b0cd4e21bb24892f"} Mar 20 08:48:14 crc kubenswrapper[5136]: I0320 08:48:14.504926 5136 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:14 crc kubenswrapper[5136]: I0320 08:48:14.504944 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:14 crc kubenswrapper[5136]: I0320 08:48:14.510701 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-lxzxf" event={"ID":"1513f332-b5c6-40ca-9c3a-4ef7b1f78672","Type":"ContainerStarted","Data":"bbdf8dabf0d2951b09ae3f63cdd6eda3f6af581fbac68d093607e16820b73e60"} Mar 20 08:48:14 crc kubenswrapper[5136]: I0320 08:48:14.514183 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-965f7d5f6-cshp2" event={"ID":"0007e89c-1f52-4ac8-beed-59d6db6e60fd","Type":"ContainerStarted","Data":"d98661f6d09e3f930d057fd7583b14591e52595630d9c487ff1f51b1c2eb81d0"} Mar 20 08:48:14 crc kubenswrapper[5136]: I0320 08:48:14.540104 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-64657f9cbd-qbdxx" podStartSLOduration=3.5400825940000002 podStartE2EDuration="3.540082594s" podCreationTimestamp="2026-03-20 08:48:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:48:14.529431743 +0000 UTC m=+7126.788742904" watchObservedRunningTime="2026-03-20 08:48:14.540082594 +0000 UTC m=+7126.799393745" Mar 20 08:48:14 crc kubenswrapper[5136]: I0320 08:48:14.553700 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-lxzxf" podStartSLOduration=3.063422883 podStartE2EDuration="6.553681446s" podCreationTimestamp="2026-03-20 08:48:08 +0000 UTC" firstStartedPulling="2026-03-20 08:48:09.769112523 +0000 UTC m=+7122.028423674" lastFinishedPulling="2026-03-20 08:48:13.259371086 +0000 UTC m=+7125.518682237" observedRunningTime="2026-03-20 08:48:14.552374815 +0000 UTC m=+7126.811685986" 
watchObservedRunningTime="2026-03-20 08:48:14.553681446 +0000 UTC m=+7126.812992597" Mar 20 08:48:15 crc kubenswrapper[5136]: I0320 08:48:15.548913 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-965f7d5f6-cshp2" event={"ID":"0007e89c-1f52-4ac8-beed-59d6db6e60fd","Type":"ContainerStarted","Data":"9a6fe348ea134460d09531b2378ad3abce82d81a5457e369dbee025701fbe318"} Mar 20 08:48:15 crc kubenswrapper[5136]: I0320 08:48:15.549347 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:15 crc kubenswrapper[5136]: I0320 08:48:15.549380 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:15 crc kubenswrapper[5136]: I0320 08:48:15.549390 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-965f7d5f6-cshp2" event={"ID":"0007e89c-1f52-4ac8-beed-59d6db6e60fd","Type":"ContainerStarted","Data":"5e5a77d7952567153e8f93b101532ad12cc95f1597c77efe34c080e974b22447"} Mar 20 08:48:15 crc kubenswrapper[5136]: I0320 08:48:15.578476 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-965f7d5f6-cshp2" podStartSLOduration=2.5784521419999997 podStartE2EDuration="2.578452142s" podCreationTimestamp="2026-03-20 08:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:48:15.576313055 +0000 UTC m=+7127.835624226" watchObservedRunningTime="2026-03-20 08:48:15.578452142 +0000 UTC m=+7127.837763293" Mar 20 08:48:17 crc kubenswrapper[5136]: I0320 08:48:17.564987 5136 generic.go:334] "Generic (PLEG): container finished" podID="1513f332-b5c6-40ca-9c3a-4ef7b1f78672" containerID="bbdf8dabf0d2951b09ae3f63cdd6eda3f6af581fbac68d093607e16820b73e60" exitCode=0 Mar 20 08:48:17 crc kubenswrapper[5136]: I0320 08:48:17.565052 5136 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/swift-ring-rebalance-lxzxf" event={"ID":"1513f332-b5c6-40ca-9c3a-4ef7b1f78672","Type":"ContainerDied","Data":"bbdf8dabf0d2951b09ae3f63cdd6eda3f6af581fbac68d093607e16820b73e60"} Mar 20 08:48:18 crc kubenswrapper[5136]: I0320 08:48:18.914118 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.018703 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-ring-data-devices\") pod \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.018763 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-combined-ca-bundle\") pod \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.018794 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-dispersionconf\") pod \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.018853 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dbhf\" (UniqueName: \"kubernetes.io/projected/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-kube-api-access-4dbhf\") pod \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.018961 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/empty-dir/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-etc-swift\") pod \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.019004 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-swiftconf\") pod \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.019051 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-scripts\") pod \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.019497 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "1513f332-b5c6-40ca-9c3a-4ef7b1f78672" (UID: "1513f332-b5c6-40ca-9c3a-4ef7b1f78672"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.020315 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "1513f332-b5c6-40ca-9c3a-4ef7b1f78672" (UID: "1513f332-b5c6-40ca-9c3a-4ef7b1f78672"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.030168 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-kube-api-access-4dbhf" (OuterVolumeSpecName: "kube-api-access-4dbhf") pod "1513f332-b5c6-40ca-9c3a-4ef7b1f78672" (UID: "1513f332-b5c6-40ca-9c3a-4ef7b1f78672"). InnerVolumeSpecName "kube-api-access-4dbhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.033571 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "1513f332-b5c6-40ca-9c3a-4ef7b1f78672" (UID: "1513f332-b5c6-40ca-9c3a-4ef7b1f78672"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.045905 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "1513f332-b5c6-40ca-9c3a-4ef7b1f78672" (UID: "1513f332-b5c6-40ca-9c3a-4ef7b1f78672"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.050189 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1513f332-b5c6-40ca-9c3a-4ef7b1f78672" (UID: "1513f332-b5c6-40ca-9c3a-4ef7b1f78672"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.053194 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-scripts" (OuterVolumeSpecName: "scripts") pod "1513f332-b5c6-40ca-9c3a-4ef7b1f78672" (UID: "1513f332-b5c6-40ca-9c3a-4ef7b1f78672"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.121154 5136 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.121197 5136 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.121211 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.121222 5136 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.121235 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.121245 5136 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.121257 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dbhf\" (UniqueName: \"kubernetes.io/projected/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-kube-api-access-4dbhf\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.367003 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.396986 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.431481 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6968b46cdc-n6kjz"] Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.431742 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" podUID="4a740a83-3e08-402b-9e5b-6c8a62a80435" containerName="dnsmasq-dns" containerID="cri-o://85a4536c0d633a089bcb60d3330e0a39a977db11277f10ac374ebe343ed461c2" gracePeriod=10 Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.585419 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-lxzxf" event={"ID":"1513f332-b5c6-40ca-9c3a-4ef7b1f78672","Type":"ContainerDied","Data":"f85c3bbff6d70b386d31f6a53eb27b133a80a3ef6788031c00b163ca73516c9a"} Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.585460 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f85c3bbff6d70b386d31f6a53eb27b133a80a3ef6788031c00b163ca73516c9a" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.585534 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.592730 5136 generic.go:334] "Generic (PLEG): container finished" podID="4a740a83-3e08-402b-9e5b-6c8a62a80435" containerID="85a4536c0d633a089bcb60d3330e0a39a977db11277f10ac374ebe343ed461c2" exitCode=0 Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.592779 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" event={"ID":"4a740a83-3e08-402b-9e5b-6c8a62a80435","Type":"ContainerDied","Data":"85a4536c0d633a089bcb60d3330e0a39a977db11277f10ac374ebe343ed461c2"} Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.013187 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.148588 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj86x\" (UniqueName: \"kubernetes.io/projected/4a740a83-3e08-402b-9e5b-6c8a62a80435-kube-api-access-nj86x\") pod \"4a740a83-3e08-402b-9e5b-6c8a62a80435\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.148978 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-dns-svc\") pod \"4a740a83-3e08-402b-9e5b-6c8a62a80435\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.149038 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-config\") pod \"4a740a83-3e08-402b-9e5b-6c8a62a80435\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.149122 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-ovsdbserver-nb\") pod \"4a740a83-3e08-402b-9e5b-6c8a62a80435\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.149188 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-ovsdbserver-sb\") pod \"4a740a83-3e08-402b-9e5b-6c8a62a80435\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.156916 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a740a83-3e08-402b-9e5b-6c8a62a80435-kube-api-access-nj86x" (OuterVolumeSpecName: "kube-api-access-nj86x") pod "4a740a83-3e08-402b-9e5b-6c8a62a80435" (UID: "4a740a83-3e08-402b-9e5b-6c8a62a80435"). InnerVolumeSpecName "kube-api-access-nj86x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.197172 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4a740a83-3e08-402b-9e5b-6c8a62a80435" (UID: "4a740a83-3e08-402b-9e5b-6c8a62a80435"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.203044 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4a740a83-3e08-402b-9e5b-6c8a62a80435" (UID: "4a740a83-3e08-402b-9e5b-6c8a62a80435"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.203999 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-config" (OuterVolumeSpecName: "config") pod "4a740a83-3e08-402b-9e5b-6c8a62a80435" (UID: "4a740a83-3e08-402b-9e5b-6c8a62a80435"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.231492 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4a740a83-3e08-402b-9e5b-6c8a62a80435" (UID: "4a740a83-3e08-402b-9e5b-6c8a62a80435"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.250542 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.250569 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.250581 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj86x\" (UniqueName: \"kubernetes.io/projected/4a740a83-3e08-402b-9e5b-6c8a62a80435-kube-api-access-nj86x\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.250590 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:20 crc 
kubenswrapper[5136]: I0320 08:48:20.250598 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.602569 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"a43b1feb308763542c53114c5f178c20bc1d59b30b0c579b39a73e99b6e66c62"} Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.605908 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" event={"ID":"4a740a83-3e08-402b-9e5b-6c8a62a80435","Type":"ContainerDied","Data":"676c8c5cdb1dfc8268622e52dd2300c796e32f8a976b6c157621d92d03db62f0"} Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.605974 5136 scope.go:117] "RemoveContainer" containerID="85a4536c0d633a089bcb60d3330e0a39a977db11277f10ac374ebe343ed461c2" Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.606004 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.630148 5136 scope.go:117] "RemoveContainer" containerID="8f73d6f969a6dcfad143a7dea5aee18ef87be55ad10ac352de6c2af3efe4415d" Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.664597 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6968b46cdc-n6kjz"] Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.671535 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6968b46cdc-n6kjz"] Mar 20 08:48:22 crc kubenswrapper[5136]: I0320 08:48:22.284717 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:22 crc kubenswrapper[5136]: I0320 08:48:22.285090 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:22 crc kubenswrapper[5136]: I0320 08:48:22.408348 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a740a83-3e08-402b-9e5b-6c8a62a80435" path="/var/lib/kubelet/pods/4a740a83-3e08-402b-9e5b-6c8a62a80435/volumes" Mar 20 08:48:23 crc kubenswrapper[5136]: I0320 08:48:23.730010 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:23 crc kubenswrapper[5136]: I0320 08:48:23.730386 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:23 crc kubenswrapper[5136]: I0320 08:48:23.813284 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-64657f9cbd-qbdxx"] Mar 20 08:48:23 crc kubenswrapper[5136]: I0320 08:48:23.815670 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-64657f9cbd-qbdxx" podUID="5550afcf-085f-4f88-b901-dc9b4cf9fb7e" containerName="proxy-httpd" 
containerID="cri-o://d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4" gracePeriod=30 Mar 20 08:48:23 crc kubenswrapper[5136]: I0320 08:48:23.816123 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-64657f9cbd-qbdxx" podUID="5550afcf-085f-4f88-b901-dc9b4cf9fb7e" containerName="proxy-server" containerID="cri-o://90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5" gracePeriod=30 Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.508667 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.628676 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-combined-ca-bundle\") pod \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.628806 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w85w\" (UniqueName: \"kubernetes.io/projected/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-kube-api-access-9w85w\") pod \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.628954 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-config-data\") pod \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.628981 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-log-httpd\") pod 
\"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.629015 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-etc-swift\") pod \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.629051 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-run-httpd\") pod \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.629862 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5550afcf-085f-4f88-b901-dc9b4cf9fb7e" (UID: "5550afcf-085f-4f88-b901-dc9b4cf9fb7e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.629902 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5550afcf-085f-4f88-b901-dc9b4cf9fb7e" (UID: "5550afcf-085f-4f88-b901-dc9b4cf9fb7e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.636691 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5550afcf-085f-4f88-b901-dc9b4cf9fb7e" (UID: "5550afcf-085f-4f88-b901-dc9b4cf9fb7e"). 
InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.652259 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-kube-api-access-9w85w" (OuterVolumeSpecName: "kube-api-access-9w85w") pod "5550afcf-085f-4f88-b901-dc9b4cf9fb7e" (UID: "5550afcf-085f-4f88-b901-dc9b4cf9fb7e"). InnerVolumeSpecName "kube-api-access-9w85w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.678914 5136 generic.go:334] "Generic (PLEG): container finished" podID="5550afcf-085f-4f88-b901-dc9b4cf9fb7e" containerID="90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5" exitCode=0 Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.678961 5136 generic.go:334] "Generic (PLEG): container finished" podID="5550afcf-085f-4f88-b901-dc9b4cf9fb7e" containerID="d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4" exitCode=0 Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.680053 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.680480 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64657f9cbd-qbdxx" event={"ID":"5550afcf-085f-4f88-b901-dc9b4cf9fb7e","Type":"ContainerDied","Data":"90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5"} Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.680511 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64657f9cbd-qbdxx" event={"ID":"5550afcf-085f-4f88-b901-dc9b4cf9fb7e","Type":"ContainerDied","Data":"d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4"} Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.680526 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64657f9cbd-qbdxx" event={"ID":"5550afcf-085f-4f88-b901-dc9b4cf9fb7e","Type":"ContainerDied","Data":"eb93409020e8530694a12115b4c16399fd5361531132f289b0cd4e21bb24892f"} Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.680542 5136 scope.go:117] "RemoveContainer" containerID="90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.683645 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-config-data" (OuterVolumeSpecName: "config-data") pod "5550afcf-085f-4f88-b901-dc9b4cf9fb7e" (UID: "5550afcf-085f-4f88-b901-dc9b4cf9fb7e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.687989 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5550afcf-085f-4f88-b901-dc9b4cf9fb7e" (UID: "5550afcf-085f-4f88-b901-dc9b4cf9fb7e"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.731727 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.731778 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.731788 5136 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.731798 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.731809 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.731835 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w85w\" (UniqueName: \"kubernetes.io/projected/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-kube-api-access-9w85w\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.760943 5136 scope.go:117] "RemoveContainer" containerID="d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.826186 5136 scope.go:117] "RemoveContainer" 
containerID="90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5" Mar 20 08:48:24 crc kubenswrapper[5136]: E0320 08:48:24.827585 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5\": container with ID starting with 90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5 not found: ID does not exist" containerID="90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.827635 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5"} err="failed to get container status \"90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5\": rpc error: code = NotFound desc = could not find container \"90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5\": container with ID starting with 90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5 not found: ID does not exist" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.827666 5136 scope.go:117] "RemoveContainer" containerID="d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4" Mar 20 08:48:24 crc kubenswrapper[5136]: E0320 08:48:24.828407 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4\": container with ID starting with d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4 not found: ID does not exist" containerID="d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.828466 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4"} err="failed to get container status \"d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4\": rpc error: code = NotFound desc = could not find container \"d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4\": container with ID starting with d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4 not found: ID does not exist" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.828509 5136 scope.go:117] "RemoveContainer" containerID="90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.832357 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5"} err="failed to get container status \"90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5\": rpc error: code = NotFound desc = could not find container \"90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5\": container with ID starting with 90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5 not found: ID does not exist" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.832440 5136 scope.go:117] "RemoveContainer" containerID="d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.832837 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4"} err="failed to get container status \"d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4\": rpc error: code = NotFound desc = could not find container \"d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4\": container with ID starting with d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4 not found: ID does not 
exist" Mar 20 08:48:25 crc kubenswrapper[5136]: I0320 08:48:25.021872 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-64657f9cbd-qbdxx"] Mar 20 08:48:25 crc kubenswrapper[5136]: I0320 08:48:25.031076 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-64657f9cbd-qbdxx"] Mar 20 08:48:26 crc kubenswrapper[5136]: I0320 08:48:26.407952 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5550afcf-085f-4f88-b901-dc9b4cf9fb7e" path="/var/lib/kubelet/pods/5550afcf-085f-4f88-b901-dc9b4cf9fb7e/volumes" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.143264 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-vgnkq"] Mar 20 08:48:30 crc kubenswrapper[5136]: E0320 08:48:30.144156 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1513f332-b5c6-40ca-9c3a-4ef7b1f78672" containerName="swift-ring-rebalance" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.144168 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="1513f332-b5c6-40ca-9c3a-4ef7b1f78672" containerName="swift-ring-rebalance" Mar 20 08:48:30 crc kubenswrapper[5136]: E0320 08:48:30.144182 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a740a83-3e08-402b-9e5b-6c8a62a80435" containerName="init" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.144189 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a740a83-3e08-402b-9e5b-6c8a62a80435" containerName="init" Mar 20 08:48:30 crc kubenswrapper[5136]: E0320 08:48:30.144199 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a740a83-3e08-402b-9e5b-6c8a62a80435" containerName="dnsmasq-dns" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.144206 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a740a83-3e08-402b-9e5b-6c8a62a80435" containerName="dnsmasq-dns" Mar 20 08:48:30 crc kubenswrapper[5136]: E0320 08:48:30.144218 5136 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5550afcf-085f-4f88-b901-dc9b4cf9fb7e" containerName="proxy-server" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.144224 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5550afcf-085f-4f88-b901-dc9b4cf9fb7e" containerName="proxy-server" Mar 20 08:48:30 crc kubenswrapper[5136]: E0320 08:48:30.144240 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5550afcf-085f-4f88-b901-dc9b4cf9fb7e" containerName="proxy-httpd" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.144246 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5550afcf-085f-4f88-b901-dc9b4cf9fb7e" containerName="proxy-httpd" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.144420 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5550afcf-085f-4f88-b901-dc9b4cf9fb7e" containerName="proxy-server" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.144432 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a740a83-3e08-402b-9e5b-6c8a62a80435" containerName="dnsmasq-dns" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.144444 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5550afcf-085f-4f88-b901-dc9b4cf9fb7e" containerName="proxy-httpd" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.144453 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="1513f332-b5c6-40ca-9c3a-4ef7b1f78672" containerName="swift-ring-rebalance" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.145986 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vgnkq" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.151395 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b8c9-account-create-update-8v2dt"] Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.153347 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b8c9-account-create-update-8v2dt" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.157288 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.170493 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-vgnkq"] Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.181272 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b8c9-account-create-update-8v2dt"] Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.225489 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63d0704b-80fd-44fe-9007-2971cc8a6cf6-operator-scripts\") pod \"cinder-db-create-vgnkq\" (UID: \"63d0704b-80fd-44fe-9007-2971cc8a6cf6\") " pod="openstack/cinder-db-create-vgnkq" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.225558 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsxz4\" (UniqueName: \"kubernetes.io/projected/63d0704b-80fd-44fe-9007-2971cc8a6cf6-kube-api-access-nsxz4\") pod \"cinder-db-create-vgnkq\" (UID: \"63d0704b-80fd-44fe-9007-2971cc8a6cf6\") " pod="openstack/cinder-db-create-vgnkq" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.225891 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79jgk\" (UniqueName: \"kubernetes.io/projected/f0863275-620b-4bea-a747-135c323ebb6f-kube-api-access-79jgk\") pod \"cinder-b8c9-account-create-update-8v2dt\" (UID: \"f0863275-620b-4bea-a747-135c323ebb6f\") " pod="openstack/cinder-b8c9-account-create-update-8v2dt" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.226056 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0863275-620b-4bea-a747-135c323ebb6f-operator-scripts\") pod \"cinder-b8c9-account-create-update-8v2dt\" (UID: \"f0863275-620b-4bea-a747-135c323ebb6f\") " pod="openstack/cinder-b8c9-account-create-update-8v2dt" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.327484 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79jgk\" (UniqueName: \"kubernetes.io/projected/f0863275-620b-4bea-a747-135c323ebb6f-kube-api-access-79jgk\") pod \"cinder-b8c9-account-create-update-8v2dt\" (UID: \"f0863275-620b-4bea-a747-135c323ebb6f\") " pod="openstack/cinder-b8c9-account-create-update-8v2dt" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.327546 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0863275-620b-4bea-a747-135c323ebb6f-operator-scripts\") pod \"cinder-b8c9-account-create-update-8v2dt\" (UID: \"f0863275-620b-4bea-a747-135c323ebb6f\") " pod="openstack/cinder-b8c9-account-create-update-8v2dt" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.327585 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63d0704b-80fd-44fe-9007-2971cc8a6cf6-operator-scripts\") pod \"cinder-db-create-vgnkq\" (UID: \"63d0704b-80fd-44fe-9007-2971cc8a6cf6\") " pod="openstack/cinder-db-create-vgnkq" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.327624 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsxz4\" (UniqueName: \"kubernetes.io/projected/63d0704b-80fd-44fe-9007-2971cc8a6cf6-kube-api-access-nsxz4\") pod \"cinder-db-create-vgnkq\" (UID: \"63d0704b-80fd-44fe-9007-2971cc8a6cf6\") " pod="openstack/cinder-db-create-vgnkq" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.328427 5136 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0863275-620b-4bea-a747-135c323ebb6f-operator-scripts\") pod \"cinder-b8c9-account-create-update-8v2dt\" (UID: \"f0863275-620b-4bea-a747-135c323ebb6f\") " pod="openstack/cinder-b8c9-account-create-update-8v2dt" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.328465 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63d0704b-80fd-44fe-9007-2971cc8a6cf6-operator-scripts\") pod \"cinder-db-create-vgnkq\" (UID: \"63d0704b-80fd-44fe-9007-2971cc8a6cf6\") " pod="openstack/cinder-db-create-vgnkq" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.346628 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79jgk\" (UniqueName: \"kubernetes.io/projected/f0863275-620b-4bea-a747-135c323ebb6f-kube-api-access-79jgk\") pod \"cinder-b8c9-account-create-update-8v2dt\" (UID: \"f0863275-620b-4bea-a747-135c323ebb6f\") " pod="openstack/cinder-b8c9-account-create-update-8v2dt" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.346922 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsxz4\" (UniqueName: \"kubernetes.io/projected/63d0704b-80fd-44fe-9007-2971cc8a6cf6-kube-api-access-nsxz4\") pod \"cinder-db-create-vgnkq\" (UID: \"63d0704b-80fd-44fe-9007-2971cc8a6cf6\") " pod="openstack/cinder-db-create-vgnkq" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.472159 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vgnkq" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.483678 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b8c9-account-create-update-8v2dt" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.927923 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-vgnkq"] Mar 20 08:48:30 crc kubenswrapper[5136]: W0320 08:48:30.936137 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63d0704b_80fd_44fe_9007_2971cc8a6cf6.slice/crio-b9f74e241023d6af0778a504810ccff18f4409a6f8b0d6e50c4f71d091e05830 WatchSource:0}: Error finding container b9f74e241023d6af0778a504810ccff18f4409a6f8b0d6e50c4f71d091e05830: Status 404 returned error can't find the container with id b9f74e241023d6af0778a504810ccff18f4409a6f8b0d6e50c4f71d091e05830 Mar 20 08:48:30 crc kubenswrapper[5136]: W0320 08:48:30.974167 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0863275_620b_4bea_a747_135c323ebb6f.slice/crio-20921273cacffba7ca344518246963a37057b993a41d894e140137b5f3a100c1 WatchSource:0}: Error finding container 20921273cacffba7ca344518246963a37057b993a41d894e140137b5f3a100c1: Status 404 returned error can't find the container with id 20921273cacffba7ca344518246963a37057b993a41d894e140137b5f3a100c1 Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.974750 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b8c9-account-create-update-8v2dt"] Mar 20 08:48:31 crc kubenswrapper[5136]: I0320 08:48:31.735918 5136 generic.go:334] "Generic (PLEG): container finished" podID="63d0704b-80fd-44fe-9007-2971cc8a6cf6" containerID="ba829091226a089834672fdb8aaa0264ffcab6218d4874fe20d15ed41e821de5" exitCode=0 Mar 20 08:48:31 crc kubenswrapper[5136]: I0320 08:48:31.735978 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vgnkq" 
event={"ID":"63d0704b-80fd-44fe-9007-2971cc8a6cf6","Type":"ContainerDied","Data":"ba829091226a089834672fdb8aaa0264ffcab6218d4874fe20d15ed41e821de5"} Mar 20 08:48:31 crc kubenswrapper[5136]: I0320 08:48:31.736003 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vgnkq" event={"ID":"63d0704b-80fd-44fe-9007-2971cc8a6cf6","Type":"ContainerStarted","Data":"b9f74e241023d6af0778a504810ccff18f4409a6f8b0d6e50c4f71d091e05830"} Mar 20 08:48:31 crc kubenswrapper[5136]: I0320 08:48:31.739054 5136 generic.go:334] "Generic (PLEG): container finished" podID="f0863275-620b-4bea-a747-135c323ebb6f" containerID="6d85db0ede2cb37b721e22824a2dda96a152a59cfb86afea2b68c0eedbe79e58" exitCode=0 Mar 20 08:48:31 crc kubenswrapper[5136]: I0320 08:48:31.739090 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b8c9-account-create-update-8v2dt" event={"ID":"f0863275-620b-4bea-a747-135c323ebb6f","Type":"ContainerDied","Data":"6d85db0ede2cb37b721e22824a2dda96a152a59cfb86afea2b68c0eedbe79e58"} Mar 20 08:48:31 crc kubenswrapper[5136]: I0320 08:48:31.739112 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b8c9-account-create-update-8v2dt" event={"ID":"f0863275-620b-4bea-a747-135c323ebb6f","Type":"ContainerStarted","Data":"20921273cacffba7ca344518246963a37057b993a41d894e140137b5f3a100c1"} Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.161629 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vgnkq" Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.167576 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b8c9-account-create-update-8v2dt" Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.292268 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0863275-620b-4bea-a747-135c323ebb6f-operator-scripts\") pod \"f0863275-620b-4bea-a747-135c323ebb6f\" (UID: \"f0863275-620b-4bea-a747-135c323ebb6f\") " Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.292459 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63d0704b-80fd-44fe-9007-2971cc8a6cf6-operator-scripts\") pod \"63d0704b-80fd-44fe-9007-2971cc8a6cf6\" (UID: \"63d0704b-80fd-44fe-9007-2971cc8a6cf6\") " Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.292518 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79jgk\" (UniqueName: \"kubernetes.io/projected/f0863275-620b-4bea-a747-135c323ebb6f-kube-api-access-79jgk\") pod \"f0863275-620b-4bea-a747-135c323ebb6f\" (UID: \"f0863275-620b-4bea-a747-135c323ebb6f\") " Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.292555 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsxz4\" (UniqueName: \"kubernetes.io/projected/63d0704b-80fd-44fe-9007-2971cc8a6cf6-kube-api-access-nsxz4\") pod \"63d0704b-80fd-44fe-9007-2971cc8a6cf6\" (UID: \"63d0704b-80fd-44fe-9007-2971cc8a6cf6\") " Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.293063 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0863275-620b-4bea-a747-135c323ebb6f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f0863275-620b-4bea-a747-135c323ebb6f" (UID: "f0863275-620b-4bea-a747-135c323ebb6f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.293171 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63d0704b-80fd-44fe-9007-2971cc8a6cf6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "63d0704b-80fd-44fe-9007-2971cc8a6cf6" (UID: "63d0704b-80fd-44fe-9007-2971cc8a6cf6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.298348 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0863275-620b-4bea-a747-135c323ebb6f-kube-api-access-79jgk" (OuterVolumeSpecName: "kube-api-access-79jgk") pod "f0863275-620b-4bea-a747-135c323ebb6f" (UID: "f0863275-620b-4bea-a747-135c323ebb6f"). InnerVolumeSpecName "kube-api-access-79jgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.298390 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63d0704b-80fd-44fe-9007-2971cc8a6cf6-kube-api-access-nsxz4" (OuterVolumeSpecName: "kube-api-access-nsxz4") pod "63d0704b-80fd-44fe-9007-2971cc8a6cf6" (UID: "63d0704b-80fd-44fe-9007-2971cc8a6cf6"). InnerVolumeSpecName "kube-api-access-nsxz4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.394864 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63d0704b-80fd-44fe-9007-2971cc8a6cf6-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.394896 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79jgk\" (UniqueName: \"kubernetes.io/projected/f0863275-620b-4bea-a747-135c323ebb6f-kube-api-access-79jgk\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.394915 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsxz4\" (UniqueName: \"kubernetes.io/projected/63d0704b-80fd-44fe-9007-2971cc8a6cf6-kube-api-access-nsxz4\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.394934 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0863275-620b-4bea-a747-135c323ebb6f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.763925 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vgnkq" event={"ID":"63d0704b-80fd-44fe-9007-2971cc8a6cf6","Type":"ContainerDied","Data":"b9f74e241023d6af0778a504810ccff18f4409a6f8b0d6e50c4f71d091e05830"} Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.763971 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9f74e241023d6af0778a504810ccff18f4409a6f8b0d6e50c4f71d091e05830" Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.764001 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-vgnkq" Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.766057 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b8c9-account-create-update-8v2dt" event={"ID":"f0863275-620b-4bea-a747-135c323ebb6f","Type":"ContainerDied","Data":"20921273cacffba7ca344518246963a37057b993a41d894e140137b5f3a100c1"} Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.766103 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20921273cacffba7ca344518246963a37057b993a41d894e140137b5f3a100c1" Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.766114 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b8c9-account-create-update-8v2dt" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.385863 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-mwt5p"] Mar 20 08:48:35 crc kubenswrapper[5136]: E0320 08:48:35.386190 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63d0704b-80fd-44fe-9007-2971cc8a6cf6" containerName="mariadb-database-create" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.386202 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="63d0704b-80fd-44fe-9007-2971cc8a6cf6" containerName="mariadb-database-create" Mar 20 08:48:35 crc kubenswrapper[5136]: E0320 08:48:35.386221 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0863275-620b-4bea-a747-135c323ebb6f" containerName="mariadb-account-create-update" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.386228 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0863275-620b-4bea-a747-135c323ebb6f" containerName="mariadb-account-create-update" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.386379 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0863275-620b-4bea-a747-135c323ebb6f" 
containerName="mariadb-account-create-update" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.386405 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="63d0704b-80fd-44fe-9007-2971cc8a6cf6" containerName="mariadb-database-create" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.386923 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.388752 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.389381 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bjz62" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.392454 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.426024 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-mwt5p"] Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.429914 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/695202be-4633-411e-9afe-fd706e1cfbe6-etc-machine-id\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.429987 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-db-sync-config-data\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.430048 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-combined-ca-bundle\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.430085 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-config-data\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.430118 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-scripts\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.430251 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88hl6\" (UniqueName: \"kubernetes.io/projected/695202be-4633-411e-9afe-fd706e1cfbe6-kube-api-access-88hl6\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.531775 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/695202be-4633-411e-9afe-fd706e1cfbe6-etc-machine-id\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.531886 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-db-sync-config-data\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.531909 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/695202be-4633-411e-9afe-fd706e1cfbe6-etc-machine-id\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.531954 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-combined-ca-bundle\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.531981 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-config-data\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.532007 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-scripts\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.532103 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88hl6\" (UniqueName: \"kubernetes.io/projected/695202be-4633-411e-9afe-fd706e1cfbe6-kube-api-access-88hl6\") pod \"cinder-db-sync-mwt5p\" 
(UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.536489 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-db-sync-config-data\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.537626 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-scripts\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.538421 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-combined-ca-bundle\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.539710 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-config-data\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.551394 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88hl6\" (UniqueName: \"kubernetes.io/projected/695202be-4633-411e-9afe-fd706e1cfbe6-kube-api-access-88hl6\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.725366 5136 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:36 crc kubenswrapper[5136]: I0320 08:48:36.162141 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-mwt5p"] Mar 20 08:48:36 crc kubenswrapper[5136]: W0320 08:48:36.163986 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod695202be_4633_411e_9afe_fd706e1cfbe6.slice/crio-2014bb0924c9661fe4fb527d0db122e215a97354ccd2169fbd6f49626faf74d0 WatchSource:0}: Error finding container 2014bb0924c9661fe4fb527d0db122e215a97354ccd2169fbd6f49626faf74d0: Status 404 returned error can't find the container with id 2014bb0924c9661fe4fb527d0db122e215a97354ccd2169fbd6f49626faf74d0 Mar 20 08:48:36 crc kubenswrapper[5136]: I0320 08:48:36.791701 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mwt5p" event={"ID":"695202be-4633-411e-9afe-fd706e1cfbe6","Type":"ContainerStarted","Data":"2014bb0924c9661fe4fb527d0db122e215a97354ccd2169fbd6f49626faf74d0"} Mar 20 08:48:38 crc kubenswrapper[5136]: I0320 08:48:38.748649 5136 scope.go:117] "RemoveContainer" containerID="4d16220fc9b1db88fdb1fbb167050afb3f65c942a2a02caf4ba1ec80a2858ccc" Mar 20 08:48:55 crc kubenswrapper[5136]: I0320 08:48:55.964962 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mwt5p" event={"ID":"695202be-4633-411e-9afe-fd706e1cfbe6","Type":"ContainerStarted","Data":"44fe55ac22ffb29c918ccdbeaa595a57a885f7bfeba75321b48d4b09c0926e19"} Mar 20 08:48:57 crc kubenswrapper[5136]: I0320 08:48:57.980580 5136 generic.go:334] "Generic (PLEG): container finished" podID="695202be-4633-411e-9afe-fd706e1cfbe6" containerID="44fe55ac22ffb29c918ccdbeaa595a57a885f7bfeba75321b48d4b09c0926e19" exitCode=0 Mar 20 08:48:57 crc kubenswrapper[5136]: I0320 08:48:57.980698 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mwt5p" 
event={"ID":"695202be-4633-411e-9afe-fd706e1cfbe6","Type":"ContainerDied","Data":"44fe55ac22ffb29c918ccdbeaa595a57a885f7bfeba75321b48d4b09c0926e19"} Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.309273 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.374085 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/695202be-4633-411e-9afe-fd706e1cfbe6-etc-machine-id\") pod \"695202be-4633-411e-9afe-fd706e1cfbe6\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.374161 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88hl6\" (UniqueName: \"kubernetes.io/projected/695202be-4633-411e-9afe-fd706e1cfbe6-kube-api-access-88hl6\") pod \"695202be-4633-411e-9afe-fd706e1cfbe6\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.374186 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-combined-ca-bundle\") pod \"695202be-4633-411e-9afe-fd706e1cfbe6\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.374225 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-scripts\") pod \"695202be-4633-411e-9afe-fd706e1cfbe6\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.374263 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-config-data\") 
pod \"695202be-4633-411e-9afe-fd706e1cfbe6\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.374435 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-db-sync-config-data\") pod \"695202be-4633-411e-9afe-fd706e1cfbe6\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.375402 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/695202be-4633-411e-9afe-fd706e1cfbe6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "695202be-4633-411e-9afe-fd706e1cfbe6" (UID: "695202be-4633-411e-9afe-fd706e1cfbe6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.380399 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-scripts" (OuterVolumeSpecName: "scripts") pod "695202be-4633-411e-9afe-fd706e1cfbe6" (UID: "695202be-4633-411e-9afe-fd706e1cfbe6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.382008 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/695202be-4633-411e-9afe-fd706e1cfbe6-kube-api-access-88hl6" (OuterVolumeSpecName: "kube-api-access-88hl6") pod "695202be-4633-411e-9afe-fd706e1cfbe6" (UID: "695202be-4633-411e-9afe-fd706e1cfbe6"). InnerVolumeSpecName "kube-api-access-88hl6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.394058 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "695202be-4633-411e-9afe-fd706e1cfbe6" (UID: "695202be-4633-411e-9afe-fd706e1cfbe6"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.429403 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "695202be-4633-411e-9afe-fd706e1cfbe6" (UID: "695202be-4633-411e-9afe-fd706e1cfbe6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.443538 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-config-data" (OuterVolumeSpecName: "config-data") pod "695202be-4633-411e-9afe-fd706e1cfbe6" (UID: "695202be-4633-411e-9afe-fd706e1cfbe6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.476327 5136 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/695202be-4633-411e-9afe-fd706e1cfbe6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.477748 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88hl6\" (UniqueName: \"kubernetes.io/projected/695202be-4633-411e-9afe-fd706e1cfbe6-kube-api-access-88hl6\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.477898 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.478009 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.478119 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.478221 5136 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.004950 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mwt5p" event={"ID":"695202be-4633-411e-9afe-fd706e1cfbe6","Type":"ContainerDied","Data":"2014bb0924c9661fe4fb527d0db122e215a97354ccd2169fbd6f49626faf74d0"} Mar 20 08:49:00 crc 
kubenswrapper[5136]: I0320 08:49:00.005006 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2014bb0924c9661fe4fb527d0db122e215a97354ccd2169fbd6f49626faf74d0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.005028 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.321857 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78db57ffd5-mzbfx"] Mar 20 08:49:00 crc kubenswrapper[5136]: E0320 08:49:00.322502 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="695202be-4633-411e-9afe-fd706e1cfbe6" containerName="cinder-db-sync" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.322517 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="695202be-4633-411e-9afe-fd706e1cfbe6" containerName="cinder-db-sync" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.322671 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="695202be-4633-411e-9afe-fd706e1cfbe6" containerName="cinder-db-sync" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.323499 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.339939 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78db57ffd5-mzbfx"] Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.394307 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-ovsdbserver-sb\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.394397 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-ovsdbserver-nb\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.394423 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-config\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.394470 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hffqp\" (UniqueName: \"kubernetes.io/projected/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-kube-api-access-hffqp\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.394503 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-dns-svc\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.457135 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.458507 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.460173 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.460237 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.465020 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.468776 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bjz62" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.478715 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.496282 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-ovsdbserver-sb\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.497195 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-ovsdbserver-sb\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.497515 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-ovsdbserver-nb\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.497549 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-config\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.498466 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-ovsdbserver-nb\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.498535 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hffqp\" (UniqueName: \"kubernetes.io/projected/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-kube-api-access-hffqp\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.498650 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-config\") pod 
\"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.498921 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-dns-svc\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.499528 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-dns-svc\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.518832 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hffqp\" (UniqueName: \"kubernetes.io/projected/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-kube-api-access-hffqp\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.600386 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.600450 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpz9s\" (UniqueName: \"kubernetes.io/projected/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-kube-api-access-fpz9s\") pod \"cinder-api-0\" (UID: 
\"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.600476 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-config-data-custom\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.600516 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-scripts\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.600557 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-config-data\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.600601 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-logs\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.600671 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.641200 5136 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.702579 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpz9s\" (UniqueName: \"kubernetes.io/projected/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-kube-api-access-fpz9s\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.702614 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-config-data-custom\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.702659 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-scripts\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.702700 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-config-data\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.702744 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-logs\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.702839 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.702866 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.703558 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.703782 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-logs\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.707056 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-config-data-custom\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.707715 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc 
kubenswrapper[5136]: I0320 08:49:00.708420 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-scripts\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.715911 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-config-data\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.724550 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpz9s\" (UniqueName: \"kubernetes.io/projected/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-kube-api-access-fpz9s\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.772024 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 20 08:49:01 crc kubenswrapper[5136]: I0320 08:49:01.163119 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78db57ffd5-mzbfx"] Mar 20 08:49:01 crc kubenswrapper[5136]: I0320 08:49:01.253013 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 20 08:49:02 crc kubenswrapper[5136]: I0320 08:49:02.023200 5136 generic.go:334] "Generic (PLEG): container finished" podID="6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" containerID="3ff6f40c02029e2b21fb76159d8a4a46d3d5ada3e12371991cb9ff0c2549f74e" exitCode=0 Mar 20 08:49:02 crc kubenswrapper[5136]: I0320 08:49:02.023468 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" event={"ID":"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22","Type":"ContainerDied","Data":"3ff6f40c02029e2b21fb76159d8a4a46d3d5ada3e12371991cb9ff0c2549f74e"} Mar 20 08:49:02 crc kubenswrapper[5136]: I0320 08:49:02.023547 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" event={"ID":"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22","Type":"ContainerStarted","Data":"1060eb1db927e589f382cb4a2cb4756b677bac6f172c644881bb0448e3071e35"} Mar 20 08:49:02 crc kubenswrapper[5136]: I0320 08:49:02.029128 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fe8a3d90-f1a7-46e4-9ab9-c6332b728809","Type":"ContainerStarted","Data":"cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db"} Mar 20 08:49:02 crc kubenswrapper[5136]: I0320 08:49:02.029176 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fe8a3d90-f1a7-46e4-9ab9-c6332b728809","Type":"ContainerStarted","Data":"7f6b9df02974b1bde727c6a4b1dca2701c65ea0d314610d44c06bf87698b157b"} Mar 20 08:49:02 crc kubenswrapper[5136]: I0320 08:49:02.367757 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 
20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.037495 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fe8a3d90-f1a7-46e4-9ab9-c6332b728809","Type":"ContainerStarted","Data":"a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1"} Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.037862 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.037638 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="fe8a3d90-f1a7-46e4-9ab9-c6332b728809" containerName="cinder-api" containerID="cri-o://a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1" gracePeriod=30 Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.037590 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="fe8a3d90-f1a7-46e4-9ab9-c6332b728809" containerName="cinder-api-log" containerID="cri-o://cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db" gracePeriod=30 Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.040640 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" event={"ID":"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22","Type":"ContainerStarted","Data":"70907013083abb8f01ef74071f6df304ed026e839ff073ff1e663910330022e5"} Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.041416 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.059006 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.058987146 podStartE2EDuration="3.058987146s" podCreationTimestamp="2026-03-20 08:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:49:03.058557502 +0000 UTC m=+7175.317868663" watchObservedRunningTime="2026-03-20 08:49:03.058987146 +0000 UTC m=+7175.318298297" Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.081208 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" podStartSLOduration=3.081189515 podStartE2EDuration="3.081189515s" podCreationTimestamp="2026-03-20 08:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:49:03.07460245 +0000 UTC m=+7175.333913621" watchObservedRunningTime="2026-03-20 08:49:03.081189515 +0000 UTC m=+7175.340500666" Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.666742 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.757763 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-etc-machine-id\") pod \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.758192 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-combined-ca-bundle\") pod \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.757912 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fe8a3d90-f1a7-46e4-9ab9-c6332b728809" (UID: 
"fe8a3d90-f1a7-46e4-9ab9-c6332b728809"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.758251 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-config-data-custom\") pod \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.758314 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-config-data\") pod \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.758353 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpz9s\" (UniqueName: \"kubernetes.io/projected/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-kube-api-access-fpz9s\") pod \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.758446 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-logs\") pod \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.758487 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-scripts\") pod \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.758764 5136 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-logs" (OuterVolumeSpecName: "logs") pod "fe8a3d90-f1a7-46e4-9ab9-c6332b728809" (UID: "fe8a3d90-f1a7-46e4-9ab9-c6332b728809"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.758948 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.758972 5136 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.764239 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fe8a3d90-f1a7-46e4-9ab9-c6332b728809" (UID: "fe8a3d90-f1a7-46e4-9ab9-c6332b728809"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.765152 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-scripts" (OuterVolumeSpecName: "scripts") pod "fe8a3d90-f1a7-46e4-9ab9-c6332b728809" (UID: "fe8a3d90-f1a7-46e4-9ab9-c6332b728809"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.767069 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-kube-api-access-fpz9s" (OuterVolumeSpecName: "kube-api-access-fpz9s") pod "fe8a3d90-f1a7-46e4-9ab9-c6332b728809" (UID: "fe8a3d90-f1a7-46e4-9ab9-c6332b728809"). InnerVolumeSpecName "kube-api-access-fpz9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.783690 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe8a3d90-f1a7-46e4-9ab9-c6332b728809" (UID: "fe8a3d90-f1a7-46e4-9ab9-c6332b728809"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.805332 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-config-data" (OuterVolumeSpecName: "config-data") pod "fe8a3d90-f1a7-46e4-9ab9-c6332b728809" (UID: "fe8a3d90-f1a7-46e4-9ab9-c6332b728809"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.860180 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.860216 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.860227 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.860236 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpz9s\" (UniqueName: \"kubernetes.io/projected/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-kube-api-access-fpz9s\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.860245 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.051417 5136 generic.go:334] "Generic (PLEG): container finished" podID="fe8a3d90-f1a7-46e4-9ab9-c6332b728809" containerID="a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1" exitCode=0 Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.051448 5136 generic.go:334] "Generic (PLEG): container finished" podID="fe8a3d90-f1a7-46e4-9ab9-c6332b728809" containerID="cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db" exitCode=143 Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.051476 5136 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.051524 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fe8a3d90-f1a7-46e4-9ab9-c6332b728809","Type":"ContainerDied","Data":"a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1"} Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.051577 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fe8a3d90-f1a7-46e4-9ab9-c6332b728809","Type":"ContainerDied","Data":"cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db"} Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.051608 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fe8a3d90-f1a7-46e4-9ab9-c6332b728809","Type":"ContainerDied","Data":"7f6b9df02974b1bde727c6a4b1dca2701c65ea0d314610d44c06bf87698b157b"} Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.051645 5136 scope.go:117] "RemoveContainer" containerID="a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.071226 5136 scope.go:117] "RemoveContainer" containerID="cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.086666 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.090233 5136 scope.go:117] "RemoveContainer" containerID="a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1" Mar 20 08:49:04 crc kubenswrapper[5136]: E0320 08:49:04.090700 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1\": container with ID starting with 
a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1 not found: ID does not exist" containerID="a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.090760 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1"} err="failed to get container status \"a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1\": rpc error: code = NotFound desc = could not find container \"a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1\": container with ID starting with a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1 not found: ID does not exist" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.090779 5136 scope.go:117] "RemoveContainer" containerID="cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db" Mar 20 08:49:04 crc kubenswrapper[5136]: E0320 08:49:04.093235 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db\": container with ID starting with cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db not found: ID does not exist" containerID="cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.093291 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db"} err="failed to get container status \"cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db\": rpc error: code = NotFound desc = could not find container \"cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db\": container with ID starting with cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db not found: ID does not 
exist" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.093308 5136 scope.go:117] "RemoveContainer" containerID="a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.093548 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1"} err="failed to get container status \"a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1\": rpc error: code = NotFound desc = could not find container \"a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1\": container with ID starting with a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1 not found: ID does not exist" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.093565 5136 scope.go:117] "RemoveContainer" containerID="cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.097135 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db"} err="failed to get container status \"cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db\": rpc error: code = NotFound desc = could not find container \"cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db\": container with ID starting with cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db not found: ID does not exist" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.100908 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.110794 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 20 08:49:04 crc kubenswrapper[5136]: E0320 08:49:04.111197 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fe8a3d90-f1a7-46e4-9ab9-c6332b728809" containerName="cinder-api-log" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.111215 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe8a3d90-f1a7-46e4-9ab9-c6332b728809" containerName="cinder-api-log" Mar 20 08:49:04 crc kubenswrapper[5136]: E0320 08:49:04.111236 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe8a3d90-f1a7-46e4-9ab9-c6332b728809" containerName="cinder-api" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.111245 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe8a3d90-f1a7-46e4-9ab9-c6332b728809" containerName="cinder-api" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.111395 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe8a3d90-f1a7-46e4-9ab9-c6332b728809" containerName="cinder-api-log" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.111414 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe8a3d90-f1a7-46e4-9ab9-c6332b728809" containerName="cinder-api" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.112274 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.115865 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.117669 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.119212 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.119459 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.119650 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bjz62" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.119961 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.121908 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.165042 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-config-data-custom\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.165128 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc 
kubenswrapper[5136]: I0320 08:49:04.165159 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.165179 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-logs\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.165323 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.165360 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-config-data\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.165472 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-scripts\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.165621 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-s4rq6\" (UniqueName: \"kubernetes.io/projected/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-kube-api-access-s4rq6\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.165709 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.267316 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.267402 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-config-data-custom\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.267434 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.267472 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-public-tls-certs\") pod \"cinder-api-0\" (UID: 
\"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.267502 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-logs\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.267532 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.267565 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-config-data\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.267600 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-scripts\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.267650 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4rq6\" (UniqueName: \"kubernetes.io/projected/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-kube-api-access-s4rq6\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.267979 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.268308 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-logs\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.271492 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.271806 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-config-data\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.271975 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-scripts\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.272264 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.272512 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-config-data-custom\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.273261 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.282417 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4rq6\" (UniqueName: \"kubernetes.io/projected/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-kube-api-access-s4rq6\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.406597 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe8a3d90-f1a7-46e4-9ab9-c6332b728809" path="/var/lib/kubelet/pods/fe8a3d90-f1a7-46e4-9ab9-c6332b728809/volumes" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.474616 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.897757 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 20 08:49:04 crc kubenswrapper[5136]: W0320 08:49:04.904479 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod899ba9fb_f6d6_4063_9489_482bdf8cb9c4.slice/crio-ae6df973bb937e0388d9cc5aabb53d0a185dae68f0fbc7da3126af71130384e1 WatchSource:0}: Error finding container ae6df973bb937e0388d9cc5aabb53d0a185dae68f0fbc7da3126af71130384e1: Status 404 returned error can't find the container with id ae6df973bb937e0388d9cc5aabb53d0a185dae68f0fbc7da3126af71130384e1 Mar 20 08:49:05 crc kubenswrapper[5136]: I0320 08:49:05.062535 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"899ba9fb-f6d6-4063-9489-482bdf8cb9c4","Type":"ContainerStarted","Data":"ae6df973bb937e0388d9cc5aabb53d0a185dae68f0fbc7da3126af71130384e1"} Mar 20 08:49:06 crc kubenswrapper[5136]: I0320 08:49:06.084282 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"899ba9fb-f6d6-4063-9489-482bdf8cb9c4","Type":"ContainerStarted","Data":"1b8e5998fd2173c4ee05720b713d5eb8d70b46f67351c6e4ec026a3f53f1d994"} Mar 20 08:49:06 crc kubenswrapper[5136]: I0320 08:49:06.084719 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"899ba9fb-f6d6-4063-9489-482bdf8cb9c4","Type":"ContainerStarted","Data":"2a6d3f8d3e769d0229818904e99d9a95f82212e9306eefc1f4b8a5cf64ce6cf8"} Mar 20 08:49:06 crc kubenswrapper[5136]: I0320 08:49:06.085724 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 20 08:49:06 crc kubenswrapper[5136]: I0320 08:49:06.125469 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.12545163 
podStartE2EDuration="2.12545163s" podCreationTimestamp="2026-03-20 08:49:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:49:06.107409011 +0000 UTC m=+7178.366720182" watchObservedRunningTime="2026-03-20 08:49:06.12545163 +0000 UTC m=+7178.384762781" Mar 20 08:49:10 crc kubenswrapper[5136]: I0320 08:49:10.642968 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:10 crc kubenswrapper[5136]: I0320 08:49:10.743473 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-549cfd6bdc-gq4cl"] Mar 20 08:49:10 crc kubenswrapper[5136]: I0320 08:49:10.743764 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" podUID="cad94756-feb3-42e4-8c87-b0cfb638edba" containerName="dnsmasq-dns" containerID="cri-o://fde2ab1fdb1e8fbccab29a1af1b697570f2345e5369a2aaab8edc51ff2fc185d" gracePeriod=10 Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.137923 5136 generic.go:334] "Generic (PLEG): container finished" podID="cad94756-feb3-42e4-8c87-b0cfb638edba" containerID="fde2ab1fdb1e8fbccab29a1af1b697570f2345e5369a2aaab8edc51ff2fc185d" exitCode=0 Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.138454 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" event={"ID":"cad94756-feb3-42e4-8c87-b0cfb638edba","Type":"ContainerDied","Data":"fde2ab1fdb1e8fbccab29a1af1b697570f2345e5369a2aaab8edc51ff2fc185d"} Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.308119 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.504356 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-ovsdbserver-nb\") pod \"cad94756-feb3-42e4-8c87-b0cfb638edba\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.504864 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf7db\" (UniqueName: \"kubernetes.io/projected/cad94756-feb3-42e4-8c87-b0cfb638edba-kube-api-access-xf7db\") pod \"cad94756-feb3-42e4-8c87-b0cfb638edba\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.505026 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-config\") pod \"cad94756-feb3-42e4-8c87-b0cfb638edba\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.505249 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-dns-svc\") pod \"cad94756-feb3-42e4-8c87-b0cfb638edba\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.505443 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-ovsdbserver-sb\") pod \"cad94756-feb3-42e4-8c87-b0cfb638edba\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.511727 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/cad94756-feb3-42e4-8c87-b0cfb638edba-kube-api-access-xf7db" (OuterVolumeSpecName: "kube-api-access-xf7db") pod "cad94756-feb3-42e4-8c87-b0cfb638edba" (UID: "cad94756-feb3-42e4-8c87-b0cfb638edba"). InnerVolumeSpecName "kube-api-access-xf7db". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.543547 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cad94756-feb3-42e4-8c87-b0cfb638edba" (UID: "cad94756-feb3-42e4-8c87-b0cfb638edba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.559099 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cad94756-feb3-42e4-8c87-b0cfb638edba" (UID: "cad94756-feb3-42e4-8c87-b0cfb638edba"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.564329 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cad94756-feb3-42e4-8c87-b0cfb638edba" (UID: "cad94756-feb3-42e4-8c87-b0cfb638edba"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.578462 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-config" (OuterVolumeSpecName: "config") pod "cad94756-feb3-42e4-8c87-b0cfb638edba" (UID: "cad94756-feb3-42e4-8c87-b0cfb638edba"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.607206 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf7db\" (UniqueName: \"kubernetes.io/projected/cad94756-feb3-42e4-8c87-b0cfb638edba-kube-api-access-xf7db\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.607248 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.607260 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.607272 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.607282 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:12 crc kubenswrapper[5136]: I0320 08:49:12.148033 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" event={"ID":"cad94756-feb3-42e4-8c87-b0cfb638edba","Type":"ContainerDied","Data":"8ec3eac30ebcf6dd9046afae810790acb77c285633e8d4470487949417fb3311"} Mar 20 08:49:12 crc kubenswrapper[5136]: I0320 08:49:12.148318 5136 scope.go:117] "RemoveContainer" containerID="fde2ab1fdb1e8fbccab29a1af1b697570f2345e5369a2aaab8edc51ff2fc185d" Mar 20 08:49:12 crc kubenswrapper[5136]: I0320 08:49:12.148133 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:49:12 crc kubenswrapper[5136]: I0320 08:49:12.171023 5136 scope.go:117] "RemoveContainer" containerID="3fea7d5c06ec7715b7fc0e66f5644f5c5e237f2a08acb713c5d77dc706e25822" Mar 20 08:49:12 crc kubenswrapper[5136]: I0320 08:49:12.189605 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-549cfd6bdc-gq4cl"] Mar 20 08:49:12 crc kubenswrapper[5136]: I0320 08:49:12.195674 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-549cfd6bdc-gq4cl"] Mar 20 08:49:12 crc kubenswrapper[5136]: I0320 08:49:12.410292 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cad94756-feb3-42e4-8c87-b0cfb638edba" path="/var/lib/kubelet/pods/cad94756-feb3-42e4-8c87-b0cfb638edba/volumes" Mar 20 08:49:16 crc kubenswrapper[5136]: I0320 08:49:16.298404 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.060259 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-87jpl"] Mar 20 08:49:36 crc kubenswrapper[5136]: E0320 08:49:36.061008 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cad94756-feb3-42e4-8c87-b0cfb638edba" containerName="dnsmasq-dns" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.061021 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="cad94756-feb3-42e4-8c87-b0cfb638edba" containerName="dnsmasq-dns" Mar 20 08:49:36 crc kubenswrapper[5136]: E0320 08:49:36.061034 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cad94756-feb3-42e4-8c87-b0cfb638edba" containerName="init" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.061041 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="cad94756-feb3-42e4-8c87-b0cfb638edba" containerName="init" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.061177 5136 
memory_manager.go:354] "RemoveStaleState removing state" podUID="cad94756-feb3-42e4-8c87-b0cfb638edba" containerName="dnsmasq-dns" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.062273 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.080606 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-87jpl"] Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.214880 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msd9n\" (UniqueName: \"kubernetes.io/projected/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-kube-api-access-msd9n\") pod \"certified-operators-87jpl\" (UID: \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\") " pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.214978 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-catalog-content\") pod \"certified-operators-87jpl\" (UID: \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\") " pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.215051 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-utilities\") pod \"certified-operators-87jpl\" (UID: \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\") " pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.316975 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-catalog-content\") pod \"certified-operators-87jpl\" (UID: \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\") " pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.317107 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-utilities\") pod \"certified-operators-87jpl\" (UID: \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\") " pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.317209 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msd9n\" (UniqueName: \"kubernetes.io/projected/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-kube-api-access-msd9n\") pod \"certified-operators-87jpl\" (UID: \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\") " pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.317528 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-catalog-content\") pod \"certified-operators-87jpl\" (UID: \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\") " pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.317562 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-utilities\") pod \"certified-operators-87jpl\" (UID: \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\") " pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.340776 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msd9n\" (UniqueName: 
\"kubernetes.io/projected/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-kube-api-access-msd9n\") pod \"certified-operators-87jpl\" (UID: \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\") " pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.396711 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.935956 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-87jpl"] Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.183051 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.184701 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.188059 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.200754 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.350722 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97lqx\" (UniqueName: \"kubernetes.io/projected/302b747b-13f8-4339-b6bd-843625626b48-kube-api-access-97lqx\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.351111 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.351165 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-scripts\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.351208 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-config-data\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.351244 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.351301 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/302b747b-13f8-4339-b6bd-843625626b48-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.368403 5136 generic.go:334] "Generic (PLEG): container finished" podID="2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" containerID="97c04627c5162fae222f07f0ee9a179c10210b2be0497e1c98bb5aaa7eaef491" exitCode=0 Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.368444 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-87jpl" event={"ID":"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b","Type":"ContainerDied","Data":"97c04627c5162fae222f07f0ee9a179c10210b2be0497e1c98bb5aaa7eaef491"} Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.368747 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-87jpl" event={"ID":"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b","Type":"ContainerStarted","Data":"ad07a053e6003535fae2be960f8f7c86548891e11553064fa9be370310707a5f"} Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.370461 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.452702 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/302b747b-13f8-4339-b6bd-843625626b48-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.452789 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97lqx\" (UniqueName: \"kubernetes.io/projected/302b747b-13f8-4339-b6bd-843625626b48-kube-api-access-97lqx\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.452803 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/302b747b-13f8-4339-b6bd-843625626b48-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.452863 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.452927 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-scripts\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.452967 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-config-data\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.452998 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.458653 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-scripts\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.458692 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 
08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.460413 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.461681 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-config-data\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.470382 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97lqx\" (UniqueName: \"kubernetes.io/projected/302b747b-13f8-4339-b6bd-843625626b48-kube-api-access-97lqx\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.503218 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.939996 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 08:49:38 crc kubenswrapper[5136]: I0320 08:49:38.390370 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"302b747b-13f8-4339-b6bd-843625626b48","Type":"ContainerStarted","Data":"643f48d1ba3dddd8e33500673a9d69f560010b92aaf976b899d98ec860bc8aaf"} Mar 20 08:49:38 crc kubenswrapper[5136]: I0320 08:49:38.427007 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-87jpl" event={"ID":"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b","Type":"ContainerStarted","Data":"b17de963e101c6874fb2c5ee1a5a69fd6a56c707ffdb164ed9d609af37fd86d5"} Mar 20 08:49:38 crc kubenswrapper[5136]: I0320 08:49:38.834923 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 20 08:49:38 crc kubenswrapper[5136]: I0320 08:49:38.835523 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="899ba9fb-f6d6-4063-9489-482bdf8cb9c4" containerName="cinder-api-log" containerID="cri-o://2a6d3f8d3e769d0229818904e99d9a95f82212e9306eefc1f4b8a5cf64ce6cf8" gracePeriod=30 Mar 20 08:49:38 crc kubenswrapper[5136]: I0320 08:49:38.836028 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="899ba9fb-f6d6-4063-9489-482bdf8cb9c4" containerName="cinder-api" containerID="cri-o://1b8e5998fd2173c4ee05720b713d5eb8d70b46f67351c6e4ec026a3f53f1d994" gracePeriod=30 Mar 20 08:49:38 crc kubenswrapper[5136]: I0320 08:49:38.869104 5136 scope.go:117] "RemoveContainer" containerID="18976fde7b0e8720c3912ec558d2b411507101e156ae43c4e540472db0f27db1" Mar 20 08:49:39 crc kubenswrapper[5136]: I0320 08:49:39.410743 5136 generic.go:334] "Generic (PLEG): container finished" 
podID="899ba9fb-f6d6-4063-9489-482bdf8cb9c4" containerID="2a6d3f8d3e769d0229818904e99d9a95f82212e9306eefc1f4b8a5cf64ce6cf8" exitCode=143 Mar 20 08:49:39 crc kubenswrapper[5136]: I0320 08:49:39.411074 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"899ba9fb-f6d6-4063-9489-482bdf8cb9c4","Type":"ContainerDied","Data":"2a6d3f8d3e769d0229818904e99d9a95f82212e9306eefc1f4b8a5cf64ce6cf8"} Mar 20 08:49:39 crc kubenswrapper[5136]: I0320 08:49:39.413851 5136 generic.go:334] "Generic (PLEG): container finished" podID="2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" containerID="b17de963e101c6874fb2c5ee1a5a69fd6a56c707ffdb164ed9d609af37fd86d5" exitCode=0 Mar 20 08:49:39 crc kubenswrapper[5136]: I0320 08:49:39.413895 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-87jpl" event={"ID":"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b","Type":"ContainerDied","Data":"b17de963e101c6874fb2c5ee1a5a69fd6a56c707ffdb164ed9d609af37fd86d5"} Mar 20 08:49:39 crc kubenswrapper[5136]: I0320 08:49:39.413990 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-87jpl" event={"ID":"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b","Type":"ContainerStarted","Data":"a7dc35d4b52fdc0db747a245d1327585f84494993833b6c58ca0f9b815ef0957"} Mar 20 08:49:39 crc kubenswrapper[5136]: I0320 08:49:39.426872 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"302b747b-13f8-4339-b6bd-843625626b48","Type":"ContainerStarted","Data":"a914a47210f591190b25bb98821bfdc9e7434c42f8f0ed8df670cb76b0b18fad"} Mar 20 08:49:39 crc kubenswrapper[5136]: I0320 08:49:39.445258 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-87jpl" podStartSLOduration=1.931598508 podStartE2EDuration="3.445240933s" podCreationTimestamp="2026-03-20 08:49:36 +0000 UTC" firstStartedPulling="2026-03-20 08:49:37.370147057 
+0000 UTC m=+7209.629458208" lastFinishedPulling="2026-03-20 08:49:38.883789482 +0000 UTC m=+7211.143100633" observedRunningTime="2026-03-20 08:49:39.436023606 +0000 UTC m=+7211.695334757" watchObservedRunningTime="2026-03-20 08:49:39.445240933 +0000 UTC m=+7211.704552084" Mar 20 08:49:40 crc kubenswrapper[5136]: I0320 08:49:40.435786 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"302b747b-13f8-4339-b6bd-843625626b48","Type":"ContainerStarted","Data":"e3652397a41934cc4c80677868b169b45e11151398519b46e04a81318a459065"} Mar 20 08:49:40 crc kubenswrapper[5136]: I0320 08:49:40.459825 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.222389145 podStartE2EDuration="3.459791161s" podCreationTimestamp="2026-03-20 08:49:37 +0000 UTC" firstStartedPulling="2026-03-20 08:49:37.955777518 +0000 UTC m=+7210.215088669" lastFinishedPulling="2026-03-20 08:49:38.193179514 +0000 UTC m=+7210.452490685" observedRunningTime="2026-03-20 08:49:40.453620549 +0000 UTC m=+7212.712931690" watchObservedRunningTime="2026-03-20 08:49:40.459791161 +0000 UTC m=+7212.719102312" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.025933 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="899ba9fb-f6d6-4063-9489-482bdf8cb9c4" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.1.101:8776/healthcheck\": read tcp 10.217.0.2:52180->10.217.1.101:8776: read: connection reset by peer" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.408182 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.456502 5136 generic.go:334] "Generic (PLEG): container finished" podID="899ba9fb-f6d6-4063-9489-482bdf8cb9c4" containerID="1b8e5998fd2173c4ee05720b713d5eb8d70b46f67351c6e4ec026a3f53f1d994" exitCode=0 Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.456942 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"899ba9fb-f6d6-4063-9489-482bdf8cb9c4","Type":"ContainerDied","Data":"1b8e5998fd2173c4ee05720b713d5eb8d70b46f67351c6e4ec026a3f53f1d994"} Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.457020 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"899ba9fb-f6d6-4063-9489-482bdf8cb9c4","Type":"ContainerDied","Data":"ae6df973bb937e0388d9cc5aabb53d0a185dae68f0fbc7da3126af71130384e1"} Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.457045 5136 scope.go:117] "RemoveContainer" containerID="1b8e5998fd2173c4ee05720b713d5eb8d70b46f67351c6e4ec026a3f53f1d994" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.457359 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.480436 5136 scope.go:117] "RemoveContainer" containerID="2a6d3f8d3e769d0229818904e99d9a95f82212e9306eefc1f4b8a5cf64ce6cf8" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.500192 5136 scope.go:117] "RemoveContainer" containerID="1b8e5998fd2173c4ee05720b713d5eb8d70b46f67351c6e4ec026a3f53f1d994" Mar 20 08:49:42 crc kubenswrapper[5136]: E0320 08:49:42.500563 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b8e5998fd2173c4ee05720b713d5eb8d70b46f67351c6e4ec026a3f53f1d994\": container with ID starting with 1b8e5998fd2173c4ee05720b713d5eb8d70b46f67351c6e4ec026a3f53f1d994 not found: ID does not exist" containerID="1b8e5998fd2173c4ee05720b713d5eb8d70b46f67351c6e4ec026a3f53f1d994" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.500611 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b8e5998fd2173c4ee05720b713d5eb8d70b46f67351c6e4ec026a3f53f1d994"} err="failed to get container status \"1b8e5998fd2173c4ee05720b713d5eb8d70b46f67351c6e4ec026a3f53f1d994\": rpc error: code = NotFound desc = could not find container \"1b8e5998fd2173c4ee05720b713d5eb8d70b46f67351c6e4ec026a3f53f1d994\": container with ID starting with 1b8e5998fd2173c4ee05720b713d5eb8d70b46f67351c6e4ec026a3f53f1d994 not found: ID does not exist" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.500632 5136 scope.go:117] "RemoveContainer" containerID="2a6d3f8d3e769d0229818904e99d9a95f82212e9306eefc1f4b8a5cf64ce6cf8" Mar 20 08:49:42 crc kubenswrapper[5136]: E0320 08:49:42.500866 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a6d3f8d3e769d0229818904e99d9a95f82212e9306eefc1f4b8a5cf64ce6cf8\": container with ID starting with 
2a6d3f8d3e769d0229818904e99d9a95f82212e9306eefc1f4b8a5cf64ce6cf8 not found: ID does not exist" containerID="2a6d3f8d3e769d0229818904e99d9a95f82212e9306eefc1f4b8a5cf64ce6cf8" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.500904 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a6d3f8d3e769d0229818904e99d9a95f82212e9306eefc1f4b8a5cf64ce6cf8"} err="failed to get container status \"2a6d3f8d3e769d0229818904e99d9a95f82212e9306eefc1f4b8a5cf64ce6cf8\": rpc error: code = NotFound desc = could not find container \"2a6d3f8d3e769d0229818904e99d9a95f82212e9306eefc1f4b8a5cf64ce6cf8\": container with ID starting with 2a6d3f8d3e769d0229818904e99d9a95f82212e9306eefc1f4b8a5cf64ce6cf8 not found: ID does not exist" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.504619 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.541745 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-etc-machine-id\") pod \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.541853 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "899ba9fb-f6d6-4063-9489-482bdf8cb9c4" (UID: "899ba9fb-f6d6-4063-9489-482bdf8cb9c4"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.541876 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4rq6\" (UniqueName: \"kubernetes.io/projected/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-kube-api-access-s4rq6\") pod \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.541978 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-config-data\") pod \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.542020 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-combined-ca-bundle\") pod \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.542071 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-scripts\") pod \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.542116 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-config-data-custom\") pod \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.542201 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-public-tls-certs\") pod \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.542245 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-logs\") pod \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.542319 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-internal-tls-certs\") pod \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.542701 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-logs" (OuterVolumeSpecName: "logs") pod "899ba9fb-f6d6-4063-9489-482bdf8cb9c4" (UID: "899ba9fb-f6d6-4063-9489-482bdf8cb9c4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.547797 5136 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.547851 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.548057 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-scripts" (OuterVolumeSpecName: "scripts") pod "899ba9fb-f6d6-4063-9489-482bdf8cb9c4" (UID: "899ba9fb-f6d6-4063-9489-482bdf8cb9c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.549405 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-kube-api-access-s4rq6" (OuterVolumeSpecName: "kube-api-access-s4rq6") pod "899ba9fb-f6d6-4063-9489-482bdf8cb9c4" (UID: "899ba9fb-f6d6-4063-9489-482bdf8cb9c4"). InnerVolumeSpecName "kube-api-access-s4rq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.563970 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "899ba9fb-f6d6-4063-9489-482bdf8cb9c4" (UID: "899ba9fb-f6d6-4063-9489-482bdf8cb9c4"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.564440 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "899ba9fb-f6d6-4063-9489-482bdf8cb9c4" (UID: "899ba9fb-f6d6-4063-9489-482bdf8cb9c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.596917 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-config-data" (OuterVolumeSpecName: "config-data") pod "899ba9fb-f6d6-4063-9489-482bdf8cb9c4" (UID: "899ba9fb-f6d6-4063-9489-482bdf8cb9c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.601425 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "899ba9fb-f6d6-4063-9489-482bdf8cb9c4" (UID: "899ba9fb-f6d6-4063-9489-482bdf8cb9c4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.617033 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "899ba9fb-f6d6-4063-9489-482bdf8cb9c4" (UID: "899ba9fb-f6d6-4063-9489-482bdf8cb9c4"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.649316 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4rq6\" (UniqueName: \"kubernetes.io/projected/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-kube-api-access-s4rq6\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.649346 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.649356 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.649363 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.649372 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.649379 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.649389 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.790320 
5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.798511 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.809365 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 20 08:49:42 crc kubenswrapper[5136]: E0320 08:49:42.809754 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899ba9fb-f6d6-4063-9489-482bdf8cb9c4" containerName="cinder-api" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.809775 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="899ba9fb-f6d6-4063-9489-482bdf8cb9c4" containerName="cinder-api" Mar 20 08:49:42 crc kubenswrapper[5136]: E0320 08:49:42.809826 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899ba9fb-f6d6-4063-9489-482bdf8cb9c4" containerName="cinder-api-log" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.809835 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="899ba9fb-f6d6-4063-9489-482bdf8cb9c4" containerName="cinder-api-log" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.810016 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="899ba9fb-f6d6-4063-9489-482bdf8cb9c4" containerName="cinder-api-log" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.810040 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="899ba9fb-f6d6-4063-9489-482bdf8cb9c4" containerName="cinder-api" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.811162 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.820672 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.820674 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.820717 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.825696 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.953272 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-scripts\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.953338 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fe5d992-c030-4957-8388-763c8fa32d22-logs\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.953383 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.953420 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/9fe5d992-c030-4957-8388-763c8fa32d22-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.953450 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.953471 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-config-data\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.953493 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.953525 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg2m5\" (UniqueName: \"kubernetes.io/projected/9fe5d992-c030-4957-8388-763c8fa32d22-kube-api-access-hg2m5\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.953605 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-config-data-custom\") pod 
\"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.055604 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-scripts\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.055664 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fe5d992-c030-4957-8388-763c8fa32d22-logs\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.055709 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.055743 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9fe5d992-c030-4957-8388-763c8fa32d22-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.055779 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.055805 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-config-data\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.055848 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.055883 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg2m5\" (UniqueName: \"kubernetes.io/projected/9fe5d992-c030-4957-8388-763c8fa32d22-kube-api-access-hg2m5\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.055968 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-config-data-custom\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.056006 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9fe5d992-c030-4957-8388-763c8fa32d22-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.056314 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fe5d992-c030-4957-8388-763c8fa32d22-logs\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc 
kubenswrapper[5136]: I0320 08:49:43.059594 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-config-data-custom\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.060053 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-scripts\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.060053 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-config-data\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.061058 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.061150 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.061146 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.073069 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg2m5\" (UniqueName: \"kubernetes.io/projected/9fe5d992-c030-4957-8388-763c8fa32d22-kube-api-access-hg2m5\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.134625 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 20 08:49:44 crc kubenswrapper[5136]: I0320 08:49:44.179507 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 20 08:49:44 crc kubenswrapper[5136]: I0320 08:49:44.409638 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="899ba9fb-f6d6-4063-9489-482bdf8cb9c4" path="/var/lib/kubelet/pods/899ba9fb-f6d6-4063-9489-482bdf8cb9c4/volumes" Mar 20 08:49:44 crc kubenswrapper[5136]: I0320 08:49:44.476347 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9fe5d992-c030-4957-8388-763c8fa32d22","Type":"ContainerStarted","Data":"e5d5c7c5c5992aa7583b39735a8b9b809168a5e579da62b57e92455fa830342d"} Mar 20 08:49:45 crc kubenswrapper[5136]: I0320 08:49:45.486308 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9fe5d992-c030-4957-8388-763c8fa32d22","Type":"ContainerStarted","Data":"23592f2e3f685cf11f8e09b90281731a317e3331b51d973536b5b6cf9ce01a69"} Mar 20 08:49:45 crc kubenswrapper[5136]: I0320 08:49:45.486619 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9fe5d992-c030-4957-8388-763c8fa32d22","Type":"ContainerStarted","Data":"99aa025dc61faebaa87d0e9d2a4856c44ddf012f862d7c369fe941dabbd9836f"} Mar 20 08:49:45 crc kubenswrapper[5136]: I0320 08:49:45.486780 5136 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 20 08:49:45 crc kubenswrapper[5136]: I0320 08:49:45.512362 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.512339139 podStartE2EDuration="3.512339139s" podCreationTimestamp="2026-03-20 08:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:49:45.506473588 +0000 UTC m=+7217.765784739" watchObservedRunningTime="2026-03-20 08:49:45.512339139 +0000 UTC m=+7217.771650290" Mar 20 08:49:46 crc kubenswrapper[5136]: I0320 08:49:46.414157 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:46 crc kubenswrapper[5136]: I0320 08:49:46.414649 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:46 crc kubenswrapper[5136]: I0320 08:49:46.441249 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:46 crc kubenswrapper[5136]: I0320 08:49:46.537984 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:46 crc kubenswrapper[5136]: I0320 08:49:46.676706 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-87jpl"] Mar 20 08:49:47 crc kubenswrapper[5136]: I0320 08:49:47.711675 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 20 08:49:47 crc kubenswrapper[5136]: I0320 08:49:47.779945 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 08:49:48 crc kubenswrapper[5136]: I0320 08:49:48.511074 5136 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-marketplace/certified-operators-87jpl" podUID="2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" containerName="registry-server" containerID="cri-o://a7dc35d4b52fdc0db747a245d1327585f84494993833b6c58ca0f9b815ef0957" gracePeriod=2 Mar 20 08:49:48 crc kubenswrapper[5136]: I0320 08:49:48.511218 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="302b747b-13f8-4339-b6bd-843625626b48" containerName="probe" containerID="cri-o://e3652397a41934cc4c80677868b169b45e11151398519b46e04a81318a459065" gracePeriod=30 Mar 20 08:49:48 crc kubenswrapper[5136]: I0320 08:49:48.511167 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="302b747b-13f8-4339-b6bd-843625626b48" containerName="cinder-scheduler" containerID="cri-o://a914a47210f591190b25bb98821bfdc9e7434c42f8f0ed8df670cb76b0b18fad" gracePeriod=30 Mar 20 08:49:48 crc kubenswrapper[5136]: I0320 08:49:48.972461 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.061250 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-utilities\") pod \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\" (UID: \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\") " Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.061442 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-catalog-content\") pod \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\" (UID: \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\") " Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.061594 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msd9n\" (UniqueName: \"kubernetes.io/projected/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-kube-api-access-msd9n\") pod \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\" (UID: \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\") " Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.064088 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-utilities" (OuterVolumeSpecName: "utilities") pod "2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" (UID: "2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.067668 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-kube-api-access-msd9n" (OuterVolumeSpecName: "kube-api-access-msd9n") pod "2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" (UID: "2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b"). InnerVolumeSpecName "kube-api-access-msd9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.124319 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" (UID: "2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.163847 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.163874 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.163885 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msd9n\" (UniqueName: \"kubernetes.io/projected/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-kube-api-access-msd9n\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.521470 5136 generic.go:334] "Generic (PLEG): container finished" podID="2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" containerID="a7dc35d4b52fdc0db747a245d1327585f84494993833b6c58ca0f9b815ef0957" exitCode=0 Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.521651 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-87jpl" event={"ID":"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b","Type":"ContainerDied","Data":"a7dc35d4b52fdc0db747a245d1327585f84494993833b6c58ca0f9b815ef0957"} Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.521858 5136 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-87jpl" event={"ID":"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b","Type":"ContainerDied","Data":"ad07a053e6003535fae2be960f8f7c86548891e11553064fa9be370310707a5f"} Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.521878 5136 scope.go:117] "RemoveContainer" containerID="a7dc35d4b52fdc0db747a245d1327585f84494993833b6c58ca0f9b815ef0957" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.521720 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.524900 5136 generic.go:334] "Generic (PLEG): container finished" podID="302b747b-13f8-4339-b6bd-843625626b48" containerID="e3652397a41934cc4c80677868b169b45e11151398519b46e04a81318a459065" exitCode=0 Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.524941 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"302b747b-13f8-4339-b6bd-843625626b48","Type":"ContainerDied","Data":"e3652397a41934cc4c80677868b169b45e11151398519b46e04a81318a459065"} Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.561311 5136 scope.go:117] "RemoveContainer" containerID="b17de963e101c6874fb2c5ee1a5a69fd6a56c707ffdb164ed9d609af37fd86d5" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.570611 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-87jpl"] Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.579247 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-87jpl"] Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.584997 5136 scope.go:117] "RemoveContainer" containerID="97c04627c5162fae222f07f0ee9a179c10210b2be0497e1c98bb5aaa7eaef491" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.609861 5136 scope.go:117] "RemoveContainer" 
containerID="a7dc35d4b52fdc0db747a245d1327585f84494993833b6c58ca0f9b815ef0957" Mar 20 08:49:49 crc kubenswrapper[5136]: E0320 08:49:49.610285 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7dc35d4b52fdc0db747a245d1327585f84494993833b6c58ca0f9b815ef0957\": container with ID starting with a7dc35d4b52fdc0db747a245d1327585f84494993833b6c58ca0f9b815ef0957 not found: ID does not exist" containerID="a7dc35d4b52fdc0db747a245d1327585f84494993833b6c58ca0f9b815ef0957" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.610332 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7dc35d4b52fdc0db747a245d1327585f84494993833b6c58ca0f9b815ef0957"} err="failed to get container status \"a7dc35d4b52fdc0db747a245d1327585f84494993833b6c58ca0f9b815ef0957\": rpc error: code = NotFound desc = could not find container \"a7dc35d4b52fdc0db747a245d1327585f84494993833b6c58ca0f9b815ef0957\": container with ID starting with a7dc35d4b52fdc0db747a245d1327585f84494993833b6c58ca0f9b815ef0957 not found: ID does not exist" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.610363 5136 scope.go:117] "RemoveContainer" containerID="b17de963e101c6874fb2c5ee1a5a69fd6a56c707ffdb164ed9d609af37fd86d5" Mar 20 08:49:49 crc kubenswrapper[5136]: E0320 08:49:49.610623 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b17de963e101c6874fb2c5ee1a5a69fd6a56c707ffdb164ed9d609af37fd86d5\": container with ID starting with b17de963e101c6874fb2c5ee1a5a69fd6a56c707ffdb164ed9d609af37fd86d5 not found: ID does not exist" containerID="b17de963e101c6874fb2c5ee1a5a69fd6a56c707ffdb164ed9d609af37fd86d5" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.610659 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b17de963e101c6874fb2c5ee1a5a69fd6a56c707ffdb164ed9d609af37fd86d5"} err="failed to get container status \"b17de963e101c6874fb2c5ee1a5a69fd6a56c707ffdb164ed9d609af37fd86d5\": rpc error: code = NotFound desc = could not find container \"b17de963e101c6874fb2c5ee1a5a69fd6a56c707ffdb164ed9d609af37fd86d5\": container with ID starting with b17de963e101c6874fb2c5ee1a5a69fd6a56c707ffdb164ed9d609af37fd86d5 not found: ID does not exist" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.610678 5136 scope.go:117] "RemoveContainer" containerID="97c04627c5162fae222f07f0ee9a179c10210b2be0497e1c98bb5aaa7eaef491" Mar 20 08:49:49 crc kubenswrapper[5136]: E0320 08:49:49.610886 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97c04627c5162fae222f07f0ee9a179c10210b2be0497e1c98bb5aaa7eaef491\": container with ID starting with 97c04627c5162fae222f07f0ee9a179c10210b2be0497e1c98bb5aaa7eaef491 not found: ID does not exist" containerID="97c04627c5162fae222f07f0ee9a179c10210b2be0497e1c98bb5aaa7eaef491" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.610921 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97c04627c5162fae222f07f0ee9a179c10210b2be0497e1c98bb5aaa7eaef491"} err="failed to get container status \"97c04627c5162fae222f07f0ee9a179c10210b2be0497e1c98bb5aaa7eaef491\": rpc error: code = NotFound desc = could not find container \"97c04627c5162fae222f07f0ee9a179c10210b2be0497e1c98bb5aaa7eaef491\": container with ID starting with 97c04627c5162fae222f07f0ee9a179c10210b2be0497e1c98bb5aaa7eaef491 not found: ID does not exist" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.854541 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.978268 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-combined-ca-bundle\") pod \"302b747b-13f8-4339-b6bd-843625626b48\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.978326 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-scripts\") pod \"302b747b-13f8-4339-b6bd-843625626b48\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.978379 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-config-data-custom\") pod \"302b747b-13f8-4339-b6bd-843625626b48\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.978486 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/302b747b-13f8-4339-b6bd-843625626b48-etc-machine-id\") pod \"302b747b-13f8-4339-b6bd-843625626b48\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.978587 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-config-data\") pod \"302b747b-13f8-4339-b6bd-843625626b48\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.978634 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97lqx\" (UniqueName: 
\"kubernetes.io/projected/302b747b-13f8-4339-b6bd-843625626b48-kube-api-access-97lqx\") pod \"302b747b-13f8-4339-b6bd-843625626b48\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.980408 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/302b747b-13f8-4339-b6bd-843625626b48-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "302b747b-13f8-4339-b6bd-843625626b48" (UID: "302b747b-13f8-4339-b6bd-843625626b48"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.984933 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-scripts" (OuterVolumeSpecName: "scripts") pod "302b747b-13f8-4339-b6bd-843625626b48" (UID: "302b747b-13f8-4339-b6bd-843625626b48"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.984980 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/302b747b-13f8-4339-b6bd-843625626b48-kube-api-access-97lqx" (OuterVolumeSpecName: "kube-api-access-97lqx") pod "302b747b-13f8-4339-b6bd-843625626b48" (UID: "302b747b-13f8-4339-b6bd-843625626b48"). InnerVolumeSpecName "kube-api-access-97lqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.986181 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "302b747b-13f8-4339-b6bd-843625626b48" (UID: "302b747b-13f8-4339-b6bd-843625626b48"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.031412 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "302b747b-13f8-4339-b6bd-843625626b48" (UID: "302b747b-13f8-4339-b6bd-843625626b48"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.072332 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-config-data" (OuterVolumeSpecName: "config-data") pod "302b747b-13f8-4339-b6bd-843625626b48" (UID: "302b747b-13f8-4339-b6bd-843625626b48"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.080854 5136 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/302b747b-13f8-4339-b6bd-843625626b48-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.080883 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.080893 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97lqx\" (UniqueName: \"kubernetes.io/projected/302b747b-13f8-4339-b6bd-843625626b48-kube-api-access-97lqx\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.080905 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.080913 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.080922 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.404839 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" path="/var/lib/kubelet/pods/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b/volumes" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.546895 5136 generic.go:334] "Generic (PLEG): container finished" podID="302b747b-13f8-4339-b6bd-843625626b48" containerID="a914a47210f591190b25bb98821bfdc9e7434c42f8f0ed8df670cb76b0b18fad" exitCode=0 Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.547242 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.547357 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"302b747b-13f8-4339-b6bd-843625626b48","Type":"ContainerDied","Data":"a914a47210f591190b25bb98821bfdc9e7434c42f8f0ed8df670cb76b0b18fad"} Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.547535 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"302b747b-13f8-4339-b6bd-843625626b48","Type":"ContainerDied","Data":"643f48d1ba3dddd8e33500673a9d69f560010b92aaf976b899d98ec860bc8aaf"} Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.547736 5136 scope.go:117] "RemoveContainer" containerID="e3652397a41934cc4c80677868b169b45e11151398519b46e04a81318a459065" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.573012 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.580199 5136 scope.go:117] "RemoveContainer" containerID="a914a47210f591190b25bb98821bfdc9e7434c42f8f0ed8df670cb76b0b18fad" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.586729 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.596906 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 08:49:50 crc kubenswrapper[5136]: E0320 08:49:50.597364 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="302b747b-13f8-4339-b6bd-843625626b48" containerName="probe" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.597383 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="302b747b-13f8-4339-b6bd-843625626b48" containerName="probe" Mar 20 08:49:50 crc kubenswrapper[5136]: E0320 08:49:50.597396 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" containerName="extract-content" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.597403 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" containerName="extract-content" Mar 20 08:49:50 crc kubenswrapper[5136]: E0320 08:49:50.597418 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" containerName="registry-server" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.597424 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" containerName="registry-server" Mar 20 08:49:50 crc kubenswrapper[5136]: E0320 08:49:50.597433 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="302b747b-13f8-4339-b6bd-843625626b48" containerName="cinder-scheduler" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.597439 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="302b747b-13f8-4339-b6bd-843625626b48" containerName="cinder-scheduler" Mar 20 08:49:50 crc kubenswrapper[5136]: E0320 08:49:50.597459 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" containerName="extract-utilities" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.597465 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" containerName="extract-utilities" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.597663 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="302b747b-13f8-4339-b6bd-843625626b48" containerName="cinder-scheduler" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.597698 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" containerName="registry-server" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.597712 5136 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="302b747b-13f8-4339-b6bd-843625626b48" containerName="probe" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.598781 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.608299 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.611443 5136 scope.go:117] "RemoveContainer" containerID="e3652397a41934cc4c80677868b169b45e11151398519b46e04a81318a459065" Mar 20 08:49:50 crc kubenswrapper[5136]: E0320 08:49:50.612078 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3652397a41934cc4c80677868b169b45e11151398519b46e04a81318a459065\": container with ID starting with e3652397a41934cc4c80677868b169b45e11151398519b46e04a81318a459065 not found: ID does not exist" containerID="e3652397a41934cc4c80677868b169b45e11151398519b46e04a81318a459065" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.612122 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3652397a41934cc4c80677868b169b45e11151398519b46e04a81318a459065"} err="failed to get container status \"e3652397a41934cc4c80677868b169b45e11151398519b46e04a81318a459065\": rpc error: code = NotFound desc = could not find container \"e3652397a41934cc4c80677868b169b45e11151398519b46e04a81318a459065\": container with ID starting with e3652397a41934cc4c80677868b169b45e11151398519b46e04a81318a459065 not found: ID does not exist" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.612180 5136 scope.go:117] "RemoveContainer" containerID="a914a47210f591190b25bb98821bfdc9e7434c42f8f0ed8df670cb76b0b18fad" Mar 20 08:49:50 crc kubenswrapper[5136]: E0320 08:49:50.612472 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"a914a47210f591190b25bb98821bfdc9e7434c42f8f0ed8df670cb76b0b18fad\": container with ID starting with a914a47210f591190b25bb98821bfdc9e7434c42f8f0ed8df670cb76b0b18fad not found: ID does not exist" containerID="a914a47210f591190b25bb98821bfdc9e7434c42f8f0ed8df670cb76b0b18fad" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.612492 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a914a47210f591190b25bb98821bfdc9e7434c42f8f0ed8df670cb76b0b18fad"} err="failed to get container status \"a914a47210f591190b25bb98821bfdc9e7434c42f8f0ed8df670cb76b0b18fad\": rpc error: code = NotFound desc = could not find container \"a914a47210f591190b25bb98821bfdc9e7434c42f8f0ed8df670cb76b0b18fad\": container with ID starting with a914a47210f591190b25bb98821bfdc9e7434c42f8f0ed8df670cb76b0b18fad not found: ID does not exist" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.619708 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.694727 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-scripts\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.694845 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.694878 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/7be786a7-1dee-4cfb-bada-4883a9326c71-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.695043 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-config-data\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.695091 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.695372 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mw7n\" (UniqueName: \"kubernetes.io/projected/7be786a7-1dee-4cfb-bada-4883a9326c71-kube-api-access-5mw7n\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.796895 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-scripts\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.796988 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-config-data-custom\") pod 
\"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.797029 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7be786a7-1dee-4cfb-bada-4883a9326c71-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.797110 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-config-data\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.797151 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.797221 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7be786a7-1dee-4cfb-bada-4883a9326c71-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.798778 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mw7n\" (UniqueName: \"kubernetes.io/projected/7be786a7-1dee-4cfb-bada-4883a9326c71-kube-api-access-5mw7n\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:50 crc kubenswrapper[5136]: 
I0320 08:49:50.800891 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.802518 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-config-data\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.803162 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.806214 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-scripts\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.824665 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mw7n\" (UniqueName: \"kubernetes.io/projected/7be786a7-1dee-4cfb-bada-4883a9326c71-kube-api-access-5mw7n\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.967445 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 08:49:51 crc kubenswrapper[5136]: I0320 08:49:51.462683 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 08:49:51 crc kubenswrapper[5136]: W0320 08:49:51.469475 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7be786a7_1dee_4cfb_bada_4883a9326c71.slice/crio-902c62cf1140494c6f31a74b7c931afaffaa08d6e8a6315048461c8df99fb197 WatchSource:0}: Error finding container 902c62cf1140494c6f31a74b7c931afaffaa08d6e8a6315048461c8df99fb197: Status 404 returned error can't find the container with id 902c62cf1140494c6f31a74b7c931afaffaa08d6e8a6315048461c8df99fb197 Mar 20 08:49:51 crc kubenswrapper[5136]: I0320 08:49:51.565653 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7be786a7-1dee-4cfb-bada-4883a9326c71","Type":"ContainerStarted","Data":"902c62cf1140494c6f31a74b7c931afaffaa08d6e8a6315048461c8df99fb197"} Mar 20 08:49:52 crc kubenswrapper[5136]: I0320 08:49:52.408174 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="302b747b-13f8-4339-b6bd-843625626b48" path="/var/lib/kubelet/pods/302b747b-13f8-4339-b6bd-843625626b48/volumes" Mar 20 08:49:52 crc kubenswrapper[5136]: I0320 08:49:52.576135 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7be786a7-1dee-4cfb-bada-4883a9326c71","Type":"ContainerStarted","Data":"b04544c3c8633753614a0867721f5e1c8cbd9623a90831589d5ac39548c77e5a"} Mar 20 08:49:52 crc kubenswrapper[5136]: I0320 08:49:52.576183 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7be786a7-1dee-4cfb-bada-4883a9326c71","Type":"ContainerStarted","Data":"49d2fddd75a8e57b3d4192156f68050a117173a4bf5a979458cf0d3988bdad77"} Mar 20 08:49:52 crc kubenswrapper[5136]: I0320 08:49:52.593996 5136 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.593979385 podStartE2EDuration="2.593979385s" podCreationTimestamp="2026-03-20 08:49:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:49:52.591040304 +0000 UTC m=+7224.850351455" watchObservedRunningTime="2026-03-20 08:49:52.593979385 +0000 UTC m=+7224.853290536" Mar 20 08:49:54 crc kubenswrapper[5136]: I0320 08:49:54.927727 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Mar 20 08:49:55 crc kubenswrapper[5136]: I0320 08:49:55.967569 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 20 08:50:00 crc kubenswrapper[5136]: I0320 08:50:00.147788 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566610-hrt5r"] Mar 20 08:50:00 crc kubenswrapper[5136]: I0320 08:50:00.149747 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566610-hrt5r" Mar 20 08:50:00 crc kubenswrapper[5136]: I0320 08:50:00.153371 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:50:00 crc kubenswrapper[5136]: I0320 08:50:00.153693 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:50:00 crc kubenswrapper[5136]: I0320 08:50:00.161541 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:50:00 crc kubenswrapper[5136]: I0320 08:50:00.165246 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566610-hrt5r"] Mar 20 08:50:00 crc kubenswrapper[5136]: I0320 08:50:00.290403 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmjmh\" (UniqueName: \"kubernetes.io/projected/e312a5ea-3b15-4c57-8b2d-613840a5d9ca-kube-api-access-qmjmh\") pod \"auto-csr-approver-29566610-hrt5r\" (UID: \"e312a5ea-3b15-4c57-8b2d-613840a5d9ca\") " pod="openshift-infra/auto-csr-approver-29566610-hrt5r" Mar 20 08:50:00 crc kubenswrapper[5136]: I0320 08:50:00.392432 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmjmh\" (UniqueName: \"kubernetes.io/projected/e312a5ea-3b15-4c57-8b2d-613840a5d9ca-kube-api-access-qmjmh\") pod \"auto-csr-approver-29566610-hrt5r\" (UID: \"e312a5ea-3b15-4c57-8b2d-613840a5d9ca\") " pod="openshift-infra/auto-csr-approver-29566610-hrt5r" Mar 20 08:50:00 crc kubenswrapper[5136]: I0320 08:50:00.427482 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmjmh\" (UniqueName: \"kubernetes.io/projected/e312a5ea-3b15-4c57-8b2d-613840a5d9ca-kube-api-access-qmjmh\") pod \"auto-csr-approver-29566610-hrt5r\" (UID: \"e312a5ea-3b15-4c57-8b2d-613840a5d9ca\") " 
pod="openshift-infra/auto-csr-approver-29566610-hrt5r" Mar 20 08:50:00 crc kubenswrapper[5136]: I0320 08:50:00.475413 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566610-hrt5r" Mar 20 08:50:01 crc kubenswrapper[5136]: I0320 08:50:01.015251 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566610-hrt5r"] Mar 20 08:50:01 crc kubenswrapper[5136]: W0320 08:50:01.029366 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode312a5ea_3b15_4c57_8b2d_613840a5d9ca.slice/crio-e415c5c14a320c68c7dd32ab0a9e072ba14efb06ed154ac2b195ddfda7b2e6c1 WatchSource:0}: Error finding container e415c5c14a320c68c7dd32ab0a9e072ba14efb06ed154ac2b195ddfda7b2e6c1: Status 404 returned error can't find the container with id e415c5c14a320c68c7dd32ab0a9e072ba14efb06ed154ac2b195ddfda7b2e6c1 Mar 20 08:50:01 crc kubenswrapper[5136]: I0320 08:50:01.177670 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 20 08:50:01 crc kubenswrapper[5136]: I0320 08:50:01.668845 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566610-hrt5r" event={"ID":"e312a5ea-3b15-4c57-8b2d-613840a5d9ca","Type":"ContainerStarted","Data":"e415c5c14a320c68c7dd32ab0a9e072ba14efb06ed154ac2b195ddfda7b2e6c1"} Mar 20 08:50:03 crc kubenswrapper[5136]: I0320 08:50:03.688466 5136 generic.go:334] "Generic (PLEG): container finished" podID="e312a5ea-3b15-4c57-8b2d-613840a5d9ca" containerID="482a97e7c7d9733c356a59b74d29e1b51c08c0378829f0707d6918c34c51d893" exitCode=0 Mar 20 08:50:03 crc kubenswrapper[5136]: I0320 08:50:03.688567 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566610-hrt5r" 
event={"ID":"e312a5ea-3b15-4c57-8b2d-613840a5d9ca","Type":"ContainerDied","Data":"482a97e7c7d9733c356a59b74d29e1b51c08c0378829f0707d6918c34c51d893"} Mar 20 08:50:03 crc kubenswrapper[5136]: I0320 08:50:03.958292 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-tr2s5"] Mar 20 08:50:03 crc kubenswrapper[5136]: I0320 08:50:03.959597 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tr2s5" Mar 20 08:50:03 crc kubenswrapper[5136]: I0320 08:50:03.971728 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tr2s5"] Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.063890 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/570ecd59-555d-4f55-aed1-6fe547da30b1-operator-scripts\") pod \"glance-db-create-tr2s5\" (UID: \"570ecd59-555d-4f55-aed1-6fe547da30b1\") " pod="openstack/glance-db-create-tr2s5" Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.063957 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml8b5\" (UniqueName: \"kubernetes.io/projected/570ecd59-555d-4f55-aed1-6fe547da30b1-kube-api-access-ml8b5\") pod \"glance-db-create-tr2s5\" (UID: \"570ecd59-555d-4f55-aed1-6fe547da30b1\") " pod="openstack/glance-db-create-tr2s5" Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.065700 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-6249-account-create-update-mrh6x"] Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.066780 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-6249-account-create-update-mrh6x" Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.068603 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.074291 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6249-account-create-update-mrh6x"] Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.165981 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/570ecd59-555d-4f55-aed1-6fe547da30b1-operator-scripts\") pod \"glance-db-create-tr2s5\" (UID: \"570ecd59-555d-4f55-aed1-6fe547da30b1\") " pod="openstack/glance-db-create-tr2s5" Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.166035 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml8b5\" (UniqueName: \"kubernetes.io/projected/570ecd59-555d-4f55-aed1-6fe547da30b1-kube-api-access-ml8b5\") pod \"glance-db-create-tr2s5\" (UID: \"570ecd59-555d-4f55-aed1-6fe547da30b1\") " pod="openstack/glance-db-create-tr2s5" Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.166076 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd07221a-a5f4-4a47-a7bf-354b0d432b27-operator-scripts\") pod \"glance-6249-account-create-update-mrh6x\" (UID: \"fd07221a-a5f4-4a47-a7bf-354b0d432b27\") " pod="openstack/glance-6249-account-create-update-mrh6x" Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.166103 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhnn8\" (UniqueName: \"kubernetes.io/projected/fd07221a-a5f4-4a47-a7bf-354b0d432b27-kube-api-access-fhnn8\") pod \"glance-6249-account-create-update-mrh6x\" (UID: 
\"fd07221a-a5f4-4a47-a7bf-354b0d432b27\") " pod="openstack/glance-6249-account-create-update-mrh6x" Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.166941 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/570ecd59-555d-4f55-aed1-6fe547da30b1-operator-scripts\") pod \"glance-db-create-tr2s5\" (UID: \"570ecd59-555d-4f55-aed1-6fe547da30b1\") " pod="openstack/glance-db-create-tr2s5" Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.185073 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml8b5\" (UniqueName: \"kubernetes.io/projected/570ecd59-555d-4f55-aed1-6fe547da30b1-kube-api-access-ml8b5\") pod \"glance-db-create-tr2s5\" (UID: \"570ecd59-555d-4f55-aed1-6fe547da30b1\") " pod="openstack/glance-db-create-tr2s5" Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.267981 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd07221a-a5f4-4a47-a7bf-354b0d432b27-operator-scripts\") pod \"glance-6249-account-create-update-mrh6x\" (UID: \"fd07221a-a5f4-4a47-a7bf-354b0d432b27\") " pod="openstack/glance-6249-account-create-update-mrh6x" Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.268316 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhnn8\" (UniqueName: \"kubernetes.io/projected/fd07221a-a5f4-4a47-a7bf-354b0d432b27-kube-api-access-fhnn8\") pod \"glance-6249-account-create-update-mrh6x\" (UID: \"fd07221a-a5f4-4a47-a7bf-354b0d432b27\") " pod="openstack/glance-6249-account-create-update-mrh6x" Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.268874 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd07221a-a5f4-4a47-a7bf-354b0d432b27-operator-scripts\") pod \"glance-6249-account-create-update-mrh6x\" 
(UID: \"fd07221a-a5f4-4a47-a7bf-354b0d432b27\") " pod="openstack/glance-6249-account-create-update-mrh6x" Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.275347 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tr2s5" Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.285431 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhnn8\" (UniqueName: \"kubernetes.io/projected/fd07221a-a5f4-4a47-a7bf-354b0d432b27-kube-api-access-fhnn8\") pod \"glance-6249-account-create-update-mrh6x\" (UID: \"fd07221a-a5f4-4a47-a7bf-354b0d432b27\") " pod="openstack/glance-6249-account-create-update-mrh6x" Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.392561 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6249-account-create-update-mrh6x" Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.714830 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tr2s5"] Mar 20 08:50:04 crc kubenswrapper[5136]: W0320 08:50:04.715999 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod570ecd59_555d_4f55_aed1_6fe547da30b1.slice/crio-b5e368ab159d78efbf2d3dddd014cf67bfd2bb9cb1b7832497e86f66a6e889db WatchSource:0}: Error finding container b5e368ab159d78efbf2d3dddd014cf67bfd2bb9cb1b7832497e86f66a6e889db: Status 404 returned error can't find the container with id b5e368ab159d78efbf2d3dddd014cf67bfd2bb9cb1b7832497e86f66a6e889db Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.912713 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6249-account-create-update-mrh6x"] Mar 20 08:50:04 crc kubenswrapper[5136]: W0320 08:50:04.918245 5136 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd07221a_a5f4_4a47_a7bf_354b0d432b27.slice/crio-627b05fc64cc627bdaf739a99048293083b7c8239c5cfe96ff7ea4ae31cfba6b WatchSource:0}: Error finding container 627b05fc64cc627bdaf739a99048293083b7c8239c5cfe96ff7ea4ae31cfba6b: Status 404 returned error can't find the container with id 627b05fc64cc627bdaf739a99048293083b7c8239c5cfe96ff7ea4ae31cfba6b Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.992419 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566610-hrt5r" Mar 20 08:50:05 crc kubenswrapper[5136]: I0320 08:50:05.081965 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmjmh\" (UniqueName: \"kubernetes.io/projected/e312a5ea-3b15-4c57-8b2d-613840a5d9ca-kube-api-access-qmjmh\") pod \"e312a5ea-3b15-4c57-8b2d-613840a5d9ca\" (UID: \"e312a5ea-3b15-4c57-8b2d-613840a5d9ca\") " Mar 20 08:50:05 crc kubenswrapper[5136]: I0320 08:50:05.086636 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e312a5ea-3b15-4c57-8b2d-613840a5d9ca-kube-api-access-qmjmh" (OuterVolumeSpecName: "kube-api-access-qmjmh") pod "e312a5ea-3b15-4c57-8b2d-613840a5d9ca" (UID: "e312a5ea-3b15-4c57-8b2d-613840a5d9ca"). InnerVolumeSpecName "kube-api-access-qmjmh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:50:05 crc kubenswrapper[5136]: I0320 08:50:05.184637 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmjmh\" (UniqueName: \"kubernetes.io/projected/e312a5ea-3b15-4c57-8b2d-613840a5d9ca-kube-api-access-qmjmh\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:05 crc kubenswrapper[5136]: I0320 08:50:05.713161 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566610-hrt5r" event={"ID":"e312a5ea-3b15-4c57-8b2d-613840a5d9ca","Type":"ContainerDied","Data":"e415c5c14a320c68c7dd32ab0a9e072ba14efb06ed154ac2b195ddfda7b2e6c1"} Mar 20 08:50:05 crc kubenswrapper[5136]: I0320 08:50:05.713185 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566610-hrt5r" Mar 20 08:50:05 crc kubenswrapper[5136]: I0320 08:50:05.713199 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e415c5c14a320c68c7dd32ab0a9e072ba14efb06ed154ac2b195ddfda7b2e6c1" Mar 20 08:50:05 crc kubenswrapper[5136]: I0320 08:50:05.718417 5136 generic.go:334] "Generic (PLEG): container finished" podID="570ecd59-555d-4f55-aed1-6fe547da30b1" containerID="39d097d4e3a8458b775ea906bb0dd550fdd83b3369518a3cd12d9c26c24a8a02" exitCode=0 Mar 20 08:50:05 crc kubenswrapper[5136]: I0320 08:50:05.718788 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tr2s5" event={"ID":"570ecd59-555d-4f55-aed1-6fe547da30b1","Type":"ContainerDied","Data":"39d097d4e3a8458b775ea906bb0dd550fdd83b3369518a3cd12d9c26c24a8a02"} Mar 20 08:50:05 crc kubenswrapper[5136]: I0320 08:50:05.718832 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tr2s5" event={"ID":"570ecd59-555d-4f55-aed1-6fe547da30b1","Type":"ContainerStarted","Data":"b5e368ab159d78efbf2d3dddd014cf67bfd2bb9cb1b7832497e86f66a6e889db"} Mar 20 08:50:05 crc kubenswrapper[5136]: I0320 
08:50:05.720576 5136 generic.go:334] "Generic (PLEG): container finished" podID="fd07221a-a5f4-4a47-a7bf-354b0d432b27" containerID="62b91ae766226b0da7fe114136196e5dea194bad90be0b48d6f9d8c6e4102b25" exitCode=0 Mar 20 08:50:05 crc kubenswrapper[5136]: I0320 08:50:05.720598 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6249-account-create-update-mrh6x" event={"ID":"fd07221a-a5f4-4a47-a7bf-354b0d432b27","Type":"ContainerDied","Data":"62b91ae766226b0da7fe114136196e5dea194bad90be0b48d6f9d8c6e4102b25"} Mar 20 08:50:05 crc kubenswrapper[5136]: I0320 08:50:05.720612 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6249-account-create-update-mrh6x" event={"ID":"fd07221a-a5f4-4a47-a7bf-354b0d432b27","Type":"ContainerStarted","Data":"627b05fc64cc627bdaf739a99048293083b7c8239c5cfe96ff7ea4ae31cfba6b"} Mar 20 08:50:06 crc kubenswrapper[5136]: I0320 08:50:06.060445 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566604-xrrdd"] Mar 20 08:50:06 crc kubenswrapper[5136]: I0320 08:50:06.068868 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566604-xrrdd"] Mar 20 08:50:06 crc kubenswrapper[5136]: I0320 08:50:06.410094 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="372179a0-537a-4126-97c1-2d6a045e8798" path="/var/lib/kubelet/pods/372179a0-537a-4126-97c1-2d6a045e8798/volumes" Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.062674 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6249-account-create-update-mrh6x" Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.067111 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-tr2s5" Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.124917 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml8b5\" (UniqueName: \"kubernetes.io/projected/570ecd59-555d-4f55-aed1-6fe547da30b1-kube-api-access-ml8b5\") pod \"570ecd59-555d-4f55-aed1-6fe547da30b1\" (UID: \"570ecd59-555d-4f55-aed1-6fe547da30b1\") " Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.125407 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhnn8\" (UniqueName: \"kubernetes.io/projected/fd07221a-a5f4-4a47-a7bf-354b0d432b27-kube-api-access-fhnn8\") pod \"fd07221a-a5f4-4a47-a7bf-354b0d432b27\" (UID: \"fd07221a-a5f4-4a47-a7bf-354b0d432b27\") " Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.125468 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd07221a-a5f4-4a47-a7bf-354b0d432b27-operator-scripts\") pod \"fd07221a-a5f4-4a47-a7bf-354b0d432b27\" (UID: \"fd07221a-a5f4-4a47-a7bf-354b0d432b27\") " Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.125507 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/570ecd59-555d-4f55-aed1-6fe547da30b1-operator-scripts\") pod \"570ecd59-555d-4f55-aed1-6fe547da30b1\" (UID: \"570ecd59-555d-4f55-aed1-6fe547da30b1\") " Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.126405 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd07221a-a5f4-4a47-a7bf-354b0d432b27-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fd07221a-a5f4-4a47-a7bf-354b0d432b27" (UID: "fd07221a-a5f4-4a47-a7bf-354b0d432b27"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.126462 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/570ecd59-555d-4f55-aed1-6fe547da30b1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "570ecd59-555d-4f55-aed1-6fe547da30b1" (UID: "570ecd59-555d-4f55-aed1-6fe547da30b1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.132927 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/570ecd59-555d-4f55-aed1-6fe547da30b1-kube-api-access-ml8b5" (OuterVolumeSpecName: "kube-api-access-ml8b5") pod "570ecd59-555d-4f55-aed1-6fe547da30b1" (UID: "570ecd59-555d-4f55-aed1-6fe547da30b1"). InnerVolumeSpecName "kube-api-access-ml8b5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.147787 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd07221a-a5f4-4a47-a7bf-354b0d432b27-kube-api-access-fhnn8" (OuterVolumeSpecName: "kube-api-access-fhnn8") pod "fd07221a-a5f4-4a47-a7bf-354b0d432b27" (UID: "fd07221a-a5f4-4a47-a7bf-354b0d432b27"). InnerVolumeSpecName "kube-api-access-fhnn8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.227738 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhnn8\" (UniqueName: \"kubernetes.io/projected/fd07221a-a5f4-4a47-a7bf-354b0d432b27-kube-api-access-fhnn8\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.227776 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd07221a-a5f4-4a47-a7bf-354b0d432b27-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.227791 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/570ecd59-555d-4f55-aed1-6fe547da30b1-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.227801 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml8b5\" (UniqueName: \"kubernetes.io/projected/570ecd59-555d-4f55-aed1-6fe547da30b1-kube-api-access-ml8b5\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.735876 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6249-account-create-update-mrh6x" event={"ID":"fd07221a-a5f4-4a47-a7bf-354b0d432b27","Type":"ContainerDied","Data":"627b05fc64cc627bdaf739a99048293083b7c8239c5cfe96ff7ea4ae31cfba6b"} Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.735922 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="627b05fc64cc627bdaf739a99048293083b7c8239c5cfe96ff7ea4ae31cfba6b" Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.735936 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-6249-account-create-update-mrh6x" Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.737129 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tr2s5" event={"ID":"570ecd59-555d-4f55-aed1-6fe547da30b1","Type":"ContainerDied","Data":"b5e368ab159d78efbf2d3dddd014cf67bfd2bb9cb1b7832497e86f66a6e889db"} Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.737149 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5e368ab159d78efbf2d3dddd014cf67bfd2bb9cb1b7832497e86f66a6e889db" Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.737196 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tr2s5" Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.247784 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-dlmp5"] Mar 20 08:50:09 crc kubenswrapper[5136]: E0320 08:50:09.252439 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="570ecd59-555d-4f55-aed1-6fe547da30b1" containerName="mariadb-database-create" Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.252466 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="570ecd59-555d-4f55-aed1-6fe547da30b1" containerName="mariadb-database-create" Mar 20 08:50:09 crc kubenswrapper[5136]: E0320 08:50:09.252496 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd07221a-a5f4-4a47-a7bf-354b0d432b27" containerName="mariadb-account-create-update" Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.252506 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd07221a-a5f4-4a47-a7bf-354b0d432b27" containerName="mariadb-account-create-update" Mar 20 08:50:09 crc kubenswrapper[5136]: E0320 08:50:09.252524 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e312a5ea-3b15-4c57-8b2d-613840a5d9ca" containerName="oc" Mar 20 08:50:09 crc 
kubenswrapper[5136]: I0320 08:50:09.252533 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e312a5ea-3b15-4c57-8b2d-613840a5d9ca" containerName="oc" Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.252772 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e312a5ea-3b15-4c57-8b2d-613840a5d9ca" containerName="oc" Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.252804 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd07221a-a5f4-4a47-a7bf-354b0d432b27" containerName="mariadb-account-create-update" Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.252860 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="570ecd59-555d-4f55-aed1-6fe547da30b1" containerName="mariadb-database-create" Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.253572 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dlmp5" Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.256162 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zsfgx" Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.258164 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-dlmp5"] Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.259459 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.360617 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-config-data\") pod \"glance-db-sync-dlmp5\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " pod="openstack/glance-db-sync-dlmp5" Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.360708 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-db-sync-config-data\") pod \"glance-db-sync-dlmp5\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " pod="openstack/glance-db-sync-dlmp5" Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.360783 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjrdh\" (UniqueName: \"kubernetes.io/projected/d0757343-a168-444b-ab9f-eb32dc3e416a-kube-api-access-rjrdh\") pod \"glance-db-sync-dlmp5\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " pod="openstack/glance-db-sync-dlmp5" Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.360827 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-combined-ca-bundle\") pod \"glance-db-sync-dlmp5\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " pod="openstack/glance-db-sync-dlmp5" Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.462504 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-db-sync-config-data\") pod \"glance-db-sync-dlmp5\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " pod="openstack/glance-db-sync-dlmp5" Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.462748 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjrdh\" (UniqueName: \"kubernetes.io/projected/d0757343-a168-444b-ab9f-eb32dc3e416a-kube-api-access-rjrdh\") pod \"glance-db-sync-dlmp5\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " pod="openstack/glance-db-sync-dlmp5" Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.462794 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-combined-ca-bundle\") pod \"glance-db-sync-dlmp5\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " pod="openstack/glance-db-sync-dlmp5" Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.463127 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-config-data\") pod \"glance-db-sync-dlmp5\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " pod="openstack/glance-db-sync-dlmp5" Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.475698 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-config-data\") pod \"glance-db-sync-dlmp5\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " pod="openstack/glance-db-sync-dlmp5" Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.476034 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-db-sync-config-data\") pod \"glance-db-sync-dlmp5\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " pod="openstack/glance-db-sync-dlmp5" Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.476483 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-combined-ca-bundle\") pod \"glance-db-sync-dlmp5\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " pod="openstack/glance-db-sync-dlmp5" Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.484356 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjrdh\" (UniqueName: \"kubernetes.io/projected/d0757343-a168-444b-ab9f-eb32dc3e416a-kube-api-access-rjrdh\") pod \"glance-db-sync-dlmp5\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") 
" pod="openstack/glance-db-sync-dlmp5" Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.598070 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dlmp5" Mar 20 08:50:10 crc kubenswrapper[5136]: I0320 08:50:10.225537 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-dlmp5"] Mar 20 08:50:10 crc kubenswrapper[5136]: I0320 08:50:10.766493 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dlmp5" event={"ID":"d0757343-a168-444b-ab9f-eb32dc3e416a","Type":"ContainerStarted","Data":"4b2eb053d92eb0ee6c5af951975f69f5603c187dedf27f124fbc5e104145fc8e"} Mar 20 08:50:26 crc kubenswrapper[5136]: I0320 08:50:26.929595 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dlmp5" event={"ID":"d0757343-a168-444b-ab9f-eb32dc3e416a","Type":"ContainerStarted","Data":"ea4b393a20ea1ece97f36d015f4602f5e94b839ad564485cfca078d956bd0138"} Mar 20 08:50:26 crc kubenswrapper[5136]: I0320 08:50:26.948790 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-dlmp5" podStartSLOduration=2.36994352 podStartE2EDuration="17.948771606s" podCreationTimestamp="2026-03-20 08:50:09 +0000 UTC" firstStartedPulling="2026-03-20 08:50:10.227788819 +0000 UTC m=+7242.487099970" lastFinishedPulling="2026-03-20 08:50:25.806616905 +0000 UTC m=+7258.065928056" observedRunningTime="2026-03-20 08:50:26.947649242 +0000 UTC m=+7259.206960433" watchObservedRunningTime="2026-03-20 08:50:26.948771606 +0000 UTC m=+7259.208082757" Mar 20 08:50:29 crc kubenswrapper[5136]: I0320 08:50:29.953162 5136 generic.go:334] "Generic (PLEG): container finished" podID="d0757343-a168-444b-ab9f-eb32dc3e416a" containerID="ea4b393a20ea1ece97f36d015f4602f5e94b839ad564485cfca078d956bd0138" exitCode=0 Mar 20 08:50:29 crc kubenswrapper[5136]: I0320 08:50:29.953255 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-db-sync-dlmp5" event={"ID":"d0757343-a168-444b-ab9f-eb32dc3e416a","Type":"ContainerDied","Data":"ea4b393a20ea1ece97f36d015f4602f5e94b839ad564485cfca078d956bd0138"} Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.427303 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dlmp5" Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.590975 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-combined-ca-bundle\") pod \"d0757343-a168-444b-ab9f-eb32dc3e416a\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.591056 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-db-sync-config-data\") pod \"d0757343-a168-444b-ab9f-eb32dc3e416a\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.591139 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-config-data\") pod \"d0757343-a168-444b-ab9f-eb32dc3e416a\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.591369 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjrdh\" (UniqueName: \"kubernetes.io/projected/d0757343-a168-444b-ab9f-eb32dc3e416a-kube-api-access-rjrdh\") pod \"d0757343-a168-444b-ab9f-eb32dc3e416a\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.602937 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/d0757343-a168-444b-ab9f-eb32dc3e416a-kube-api-access-rjrdh" (OuterVolumeSpecName: "kube-api-access-rjrdh") pod "d0757343-a168-444b-ab9f-eb32dc3e416a" (UID: "d0757343-a168-444b-ab9f-eb32dc3e416a"). InnerVolumeSpecName "kube-api-access-rjrdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.617446 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d0757343-a168-444b-ab9f-eb32dc3e416a" (UID: "d0757343-a168-444b-ab9f-eb32dc3e416a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.628021 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0757343-a168-444b-ab9f-eb32dc3e416a" (UID: "d0757343-a168-444b-ab9f-eb32dc3e416a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.649121 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-config-data" (OuterVolumeSpecName: "config-data") pod "d0757343-a168-444b-ab9f-eb32dc3e416a" (UID: "d0757343-a168-444b-ab9f-eb32dc3e416a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.694781 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjrdh\" (UniqueName: \"kubernetes.io/projected/d0757343-a168-444b-ab9f-eb32dc3e416a-kube-api-access-rjrdh\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.694840 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.694853 5136 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.694866 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.972203 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dlmp5" event={"ID":"d0757343-a168-444b-ab9f-eb32dc3e416a","Type":"ContainerDied","Data":"4b2eb053d92eb0ee6c5af951975f69f5603c187dedf27f124fbc5e104145fc8e"} Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.972242 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b2eb053d92eb0ee6c5af951975f69f5603c187dedf27f124fbc5e104145fc8e" Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.972314 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-dlmp5" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.279779 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 08:50:32 crc kubenswrapper[5136]: E0320 08:50:32.280161 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0757343-a168-444b-ab9f-eb32dc3e416a" containerName="glance-db-sync" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.280185 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0757343-a168-444b-ab9f-eb32dc3e416a" containerName="glance-db-sync" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.280334 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0757343-a168-444b-ab9f-eb32dc3e416a" containerName="glance-db-sync" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.281238 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.286868 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.286922 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zsfgx" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.287125 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.296772 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.405189 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.405278 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gfjz\" (UniqueName: \"kubernetes.io/projected/a069cdd1-a76e-4977-b511-1776284ad9ba-kube-api-access-8gfjz\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.405348 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a069cdd1-a76e-4977-b511-1776284ad9ba-logs\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.405372 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-scripts\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.405405 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-config-data\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.405438 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/a069cdd1-a76e-4977-b511-1776284ad9ba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.420625 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-749cf87df7-5r4jn"] Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.422520 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.451951 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-749cf87df7-5r4jn"] Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.493132 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.495845 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.501486 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.504643 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509145 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4tll\" (UniqueName: \"kubernetes.io/projected/97199128-8701-401c-bb22-55e0f0239271-kube-api-access-f4tll\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509220 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8gfjz\" (UniqueName: \"kubernetes.io/projected/a069cdd1-a76e-4977-b511-1776284ad9ba-kube-api-access-8gfjz\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509264 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97199128-8701-401c-bb22-55e0f0239271-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509298 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkqpm\" (UniqueName: \"kubernetes.io/projected/dde4894e-af50-4137-8cb4-469a0363b248-kube-api-access-xkqpm\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509318 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-config\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509345 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-ovsdbserver-nb\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509384 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a069cdd1-a76e-4977-b511-1776284ad9ba-logs\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509404 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-scripts\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509427 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-scripts\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509454 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-config-data\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509479 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509507 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/a069cdd1-a76e-4977-b511-1776284ad9ba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509547 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-config-data\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509588 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-dns-svc\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509631 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-ovsdbserver-sb\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509652 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97199128-8701-401c-bb22-55e0f0239271-logs\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509679 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.510446 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a069cdd1-a76e-4977-b511-1776284ad9ba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.510734 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a069cdd1-a76e-4977-b511-1776284ad9ba-logs\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.515212 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.515317 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-config-data\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.517406 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.563931 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gfjz\" (UniqueName: \"kubernetes.io/projected/a069cdd1-a76e-4977-b511-1776284ad9ba-kube-api-access-8gfjz\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.603906 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.611767 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-ovsdbserver-sb\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.611825 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97199128-8701-401c-bb22-55e0f0239271-logs\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.611876 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4tll\" (UniqueName: \"kubernetes.io/projected/97199128-8701-401c-bb22-55e0f0239271-kube-api-access-f4tll\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.611904 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/97199128-8701-401c-bb22-55e0f0239271-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.611929 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkqpm\" (UniqueName: \"kubernetes.io/projected/dde4894e-af50-4137-8cb4-469a0363b248-kube-api-access-xkqpm\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.611946 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-config\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.611973 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-ovsdbserver-nb\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.612003 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-scripts\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.612028 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.612065 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-config-data\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.612098 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-dns-svc\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.612364 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97199128-8701-401c-bb22-55e0f0239271-logs\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.612397 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97199128-8701-401c-bb22-55e0f0239271-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.613120 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-dns-svc\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: 
\"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.613569 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-ovsdbserver-sb\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.613929 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-config\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.614958 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-ovsdbserver-nb\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.618765 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.620365 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-scripts\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc 
kubenswrapper[5136]: I0320 08:50:32.624554 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-config-data\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.629760 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4tll\" (UniqueName: \"kubernetes.io/projected/97199128-8701-401c-bb22-55e0f0239271-kube-api-access-f4tll\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.637553 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkqpm\" (UniqueName: \"kubernetes.io/projected/dde4894e-af50-4137-8cb4-469a0363b248-kube-api-access-xkqpm\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.745272 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.825536 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 08:50:33 crc kubenswrapper[5136]: I0320 08:50:33.246548 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 08:50:33 crc kubenswrapper[5136]: I0320 08:50:33.269017 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-749cf87df7-5r4jn"] Mar 20 08:50:33 crc kubenswrapper[5136]: W0320 08:50:33.270715 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddde4894e_af50_4137_8cb4_469a0363b248.slice/crio-82b5c6a87292a840c7626e0d30263221c4e7065fd0f2c1e7d5b5956e5a5ba695 WatchSource:0}: Error finding container 82b5c6a87292a840c7626e0d30263221c4e7065fd0f2c1e7d5b5956e5a5ba695: Status 404 returned error can't find the container with id 82b5c6a87292a840c7626e0d30263221c4e7065fd0f2c1e7d5b5956e5a5ba695 Mar 20 08:50:33 crc kubenswrapper[5136]: I0320 08:50:33.455581 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 08:50:34 crc kubenswrapper[5136]: I0320 08:50:34.001177 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a069cdd1-a76e-4977-b511-1776284ad9ba","Type":"ContainerStarted","Data":"b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf"} Mar 20 08:50:34 crc kubenswrapper[5136]: I0320 08:50:34.001415 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a069cdd1-a76e-4977-b511-1776284ad9ba","Type":"ContainerStarted","Data":"2524d6e15b5de7a3de1e6beebb9c23779a7eb1301d5ef55a2709fe103508da76"} Mar 20 08:50:34 crc kubenswrapper[5136]: I0320 08:50:34.025933 5136 generic.go:334] "Generic (PLEG): container finished" podID="dde4894e-af50-4137-8cb4-469a0363b248" containerID="38d62cb7f741d8ee572a6c46fb9a977b9c469e6392e17bbc74b7fe94c516a377" exitCode=0 Mar 
20 08:50:34 crc kubenswrapper[5136]: I0320 08:50:34.026275 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" event={"ID":"dde4894e-af50-4137-8cb4-469a0363b248","Type":"ContainerDied","Data":"38d62cb7f741d8ee572a6c46fb9a977b9c469e6392e17bbc74b7fe94c516a377"} Mar 20 08:50:34 crc kubenswrapper[5136]: I0320 08:50:34.026309 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" event={"ID":"dde4894e-af50-4137-8cb4-469a0363b248","Type":"ContainerStarted","Data":"82b5c6a87292a840c7626e0d30263221c4e7065fd0f2c1e7d5b5956e5a5ba695"} Mar 20 08:50:34 crc kubenswrapper[5136]: I0320 08:50:34.027557 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.060639 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" event={"ID":"dde4894e-af50-4137-8cb4-469a0363b248","Type":"ContainerStarted","Data":"0ebc36c7db7ba6a4fd167019fee7e33605b31fe2474b4deaee19fad2dbe59690"} Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.061297 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.068382 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97199128-8701-401c-bb22-55e0f0239271","Type":"ContainerStarted","Data":"4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3"} Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.068650 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97199128-8701-401c-bb22-55e0f0239271","Type":"ContainerStarted","Data":"636e3949d027a18021c284e372d93d822e803b424afcc4fb424553c251ed3c72"} Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.076017 5136 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a069cdd1-a76e-4977-b511-1776284ad9ba","Type":"ContainerStarted","Data":"edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5"} Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.076154 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a069cdd1-a76e-4977-b511-1776284ad9ba" containerName="glance-log" containerID="cri-o://b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf" gracePeriod=30 Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.076411 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a069cdd1-a76e-4977-b511-1776284ad9ba" containerName="glance-httpd" containerID="cri-o://edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5" gracePeriod=30 Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.100778 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" podStartSLOduration=3.100757376 podStartE2EDuration="3.100757376s" podCreationTimestamp="2026-03-20 08:50:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:50:35.088605959 +0000 UTC m=+7267.347917120" watchObservedRunningTime="2026-03-20 08:50:35.100757376 +0000 UTC m=+7267.360068527" Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.115337 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.115318266 podStartE2EDuration="3.115318266s" podCreationTimestamp="2026-03-20 08:50:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:50:35.107940467 +0000 UTC 
m=+7267.367251608" watchObservedRunningTime="2026-03-20 08:50:35.115318266 +0000 UTC m=+7267.374629417" Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.673648 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.836273 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.984344 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-scripts\") pod \"a069cdd1-a76e-4977-b511-1776284ad9ba\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.984397 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a069cdd1-a76e-4977-b511-1776284ad9ba-logs\") pod \"a069cdd1-a76e-4977-b511-1776284ad9ba\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.984463 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gfjz\" (UniqueName: \"kubernetes.io/projected/a069cdd1-a76e-4977-b511-1776284ad9ba-kube-api-access-8gfjz\") pod \"a069cdd1-a76e-4977-b511-1776284ad9ba\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.984508 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a069cdd1-a76e-4977-b511-1776284ad9ba-httpd-run\") pod \"a069cdd1-a76e-4977-b511-1776284ad9ba\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.984559 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-config-data\") pod \"a069cdd1-a76e-4977-b511-1776284ad9ba\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.984636 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-combined-ca-bundle\") pod \"a069cdd1-a76e-4977-b511-1776284ad9ba\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.984953 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a069cdd1-a76e-4977-b511-1776284ad9ba-logs" (OuterVolumeSpecName: "logs") pod "a069cdd1-a76e-4977-b511-1776284ad9ba" (UID: "a069cdd1-a76e-4977-b511-1776284ad9ba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.984972 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a069cdd1-a76e-4977-b511-1776284ad9ba-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a069cdd1-a76e-4977-b511-1776284ad9ba" (UID: "a069cdd1-a76e-4977-b511-1776284ad9ba"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.985106 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a069cdd1-a76e-4977-b511-1776284ad9ba-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.985124 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a069cdd1-a76e-4977-b511-1776284ad9ba-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.994950 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-scripts" (OuterVolumeSpecName: "scripts") pod "a069cdd1-a76e-4977-b511-1776284ad9ba" (UID: "a069cdd1-a76e-4977-b511-1776284ad9ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.024709 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a069cdd1-a76e-4977-b511-1776284ad9ba-kube-api-access-8gfjz" (OuterVolumeSpecName: "kube-api-access-8gfjz") pod "a069cdd1-a76e-4977-b511-1776284ad9ba" (UID: "a069cdd1-a76e-4977-b511-1776284ad9ba"). InnerVolumeSpecName "kube-api-access-8gfjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.030453 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a069cdd1-a76e-4977-b511-1776284ad9ba" (UID: "a069cdd1-a76e-4977-b511-1776284ad9ba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.056998 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-config-data" (OuterVolumeSpecName: "config-data") pod "a069cdd1-a76e-4977-b511-1776284ad9ba" (UID: "a069cdd1-a76e-4977-b511-1776284ad9ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.086147 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.086173 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.086157 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97199128-8701-401c-bb22-55e0f0239271","Type":"ContainerStarted","Data":"efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69"} Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.086183 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gfjz\" (UniqueName: \"kubernetes.io/projected/a069cdd1-a76e-4977-b511-1776284ad9ba-kube-api-access-8gfjz\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.086242 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.088538 5136 generic.go:334] "Generic (PLEG): container finished" 
podID="a069cdd1-a76e-4977-b511-1776284ad9ba" containerID="edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5" exitCode=0 Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.088563 5136 generic.go:334] "Generic (PLEG): container finished" podID="a069cdd1-a76e-4977-b511-1776284ad9ba" containerID="b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf" exitCode=143 Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.088826 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.089093 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a069cdd1-a76e-4977-b511-1776284ad9ba","Type":"ContainerDied","Data":"edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5"} Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.089126 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a069cdd1-a76e-4977-b511-1776284ad9ba","Type":"ContainerDied","Data":"b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf"} Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.089170 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a069cdd1-a76e-4977-b511-1776284ad9ba","Type":"ContainerDied","Data":"2524d6e15b5de7a3de1e6beebb9c23779a7eb1301d5ef55a2709fe103508da76"} Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.089192 5136 scope.go:117] "RemoveContainer" containerID="edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.114433 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.114414698 podStartE2EDuration="4.114414698s" podCreationTimestamp="2026-03-20 08:50:32 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:50:36.110462036 +0000 UTC m=+7268.369773187" watchObservedRunningTime="2026-03-20 08:50:36.114414698 +0000 UTC m=+7268.373725849" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.126062 5136 scope.go:117] "RemoveContainer" containerID="b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.141348 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.149044 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.153533 5136 scope.go:117] "RemoveContainer" containerID="edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5" Mar 20 08:50:36 crc kubenswrapper[5136]: E0320 08:50:36.153908 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5\": container with ID starting with edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5 not found: ID does not exist" containerID="edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.153937 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5"} err="failed to get container status \"edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5\": rpc error: code = NotFound desc = could not find container \"edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5\": container with ID starting with edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5 not 
found: ID does not exist" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.153957 5136 scope.go:117] "RemoveContainer" containerID="b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf" Mar 20 08:50:36 crc kubenswrapper[5136]: E0320 08:50:36.154586 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf\": container with ID starting with b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf not found: ID does not exist" containerID="b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.154616 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf"} err="failed to get container status \"b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf\": rpc error: code = NotFound desc = could not find container \"b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf\": container with ID starting with b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf not found: ID does not exist" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.154630 5136 scope.go:117] "RemoveContainer" containerID="edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.155336 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5"} err="failed to get container status \"edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5\": rpc error: code = NotFound desc = could not find container \"edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5\": container with ID starting with 
edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5 not found: ID does not exist" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.155358 5136 scope.go:117] "RemoveContainer" containerID="b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.155558 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf"} err="failed to get container status \"b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf\": rpc error: code = NotFound desc = could not find container \"b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf\": container with ID starting with b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf not found: ID does not exist" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.160142 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 08:50:36 crc kubenswrapper[5136]: E0320 08:50:36.160559 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a069cdd1-a76e-4977-b511-1776284ad9ba" containerName="glance-httpd" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.160579 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a069cdd1-a76e-4977-b511-1776284ad9ba" containerName="glance-httpd" Mar 20 08:50:36 crc kubenswrapper[5136]: E0320 08:50:36.160606 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a069cdd1-a76e-4977-b511-1776284ad9ba" containerName="glance-log" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.160615 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a069cdd1-a76e-4977-b511-1776284ad9ba" containerName="glance-log" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.160806 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a069cdd1-a76e-4977-b511-1776284ad9ba" 
containerName="glance-log" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.163802 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a069cdd1-a76e-4977-b511-1776284ad9ba" containerName="glance-httpd" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.165106 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.170015 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.170401 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.179671 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.289418 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b6rl\" (UniqueName: \"kubernetes.io/projected/416c7b2f-db10-4191-821f-19c79bf4a3b6-kube-api-access-7b6rl\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.289486 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-scripts\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.289608 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-config-data\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.289673 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/416c7b2f-db10-4191-821f-19c79bf4a3b6-logs\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.289733 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.289883 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/416c7b2f-db10-4191-821f-19c79bf4a3b6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.289916 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.391355 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b6rl\" 
(UniqueName: \"kubernetes.io/projected/416c7b2f-db10-4191-821f-19c79bf4a3b6-kube-api-access-7b6rl\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.391709 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-scripts\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.392228 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-config-data\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.392296 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/416c7b2f-db10-4191-821f-19c79bf4a3b6-logs\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.392329 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.392994 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/416c7b2f-db10-4191-821f-19c79bf4a3b6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.393058 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.392663 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/416c7b2f-db10-4191-821f-19c79bf4a3b6-logs\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.393410 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/416c7b2f-db10-4191-821f-19c79bf4a3b6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.396174 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-config-data\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.396836 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.397320 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-scripts\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.398004 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.407629 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a069cdd1-a76e-4977-b511-1776284ad9ba" path="/var/lib/kubelet/pods/a069cdd1-a76e-4977-b511-1776284ad9ba/volumes" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.415641 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b6rl\" (UniqueName: \"kubernetes.io/projected/416c7b2f-db10-4191-821f-19c79bf4a3b6-kube-api-access-7b6rl\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.481980 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 08:50:37 crc kubenswrapper[5136]: I0320 08:50:37.100632 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="97199128-8701-401c-bb22-55e0f0239271" containerName="glance-log" containerID="cri-o://4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3" gracePeriod=30 Mar 20 08:50:37 crc kubenswrapper[5136]: I0320 08:50:37.101189 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="97199128-8701-401c-bb22-55e0f0239271" containerName="glance-httpd" containerID="cri-o://efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69" gracePeriod=30 Mar 20 08:50:37 crc kubenswrapper[5136]: I0320 08:50:37.102848 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 08:50:37 crc kubenswrapper[5136]: I0320 08:50:37.932987 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.026436 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-config-data\") pod \"97199128-8701-401c-bb22-55e0f0239271\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.026487 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-scripts\") pod \"97199128-8701-401c-bb22-55e0f0239271\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.026535 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4tll\" (UniqueName: \"kubernetes.io/projected/97199128-8701-401c-bb22-55e0f0239271-kube-api-access-f4tll\") pod \"97199128-8701-401c-bb22-55e0f0239271\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.026554 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97199128-8701-401c-bb22-55e0f0239271-httpd-run\") pod \"97199128-8701-401c-bb22-55e0f0239271\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.026572 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97199128-8701-401c-bb22-55e0f0239271-logs\") pod \"97199128-8701-401c-bb22-55e0f0239271\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.026640 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-combined-ca-bundle\") pod \"97199128-8701-401c-bb22-55e0f0239271\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.027291 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97199128-8701-401c-bb22-55e0f0239271-logs" (OuterVolumeSpecName: "logs") pod "97199128-8701-401c-bb22-55e0f0239271" (UID: "97199128-8701-401c-bb22-55e0f0239271"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.027316 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97199128-8701-401c-bb22-55e0f0239271-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "97199128-8701-401c-bb22-55e0f0239271" (UID: "97199128-8701-401c-bb22-55e0f0239271"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.047495 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97199128-8701-401c-bb22-55e0f0239271-kube-api-access-f4tll" (OuterVolumeSpecName: "kube-api-access-f4tll") pod "97199128-8701-401c-bb22-55e0f0239271" (UID: "97199128-8701-401c-bb22-55e0f0239271"). InnerVolumeSpecName "kube-api-access-f4tll". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.048616 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-scripts" (OuterVolumeSpecName: "scripts") pod "97199128-8701-401c-bb22-55e0f0239271" (UID: "97199128-8701-401c-bb22-55e0f0239271"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.057607 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97199128-8701-401c-bb22-55e0f0239271" (UID: "97199128-8701-401c-bb22-55e0f0239271"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.075240 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-config-data" (OuterVolumeSpecName: "config-data") pod "97199128-8701-401c-bb22-55e0f0239271" (UID: "97199128-8701-401c-bb22-55e0f0239271"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.130583 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.130622 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.130636 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4tll\" (UniqueName: \"kubernetes.io/projected/97199128-8701-401c-bb22-55e0f0239271-kube-api-access-f4tll\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.130648 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97199128-8701-401c-bb22-55e0f0239271-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:38 crc 
kubenswrapper[5136]: I0320 08:50:38.130667 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97199128-8701-401c-bb22-55e0f0239271-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.130679 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.130901 5136 generic.go:334] "Generic (PLEG): container finished" podID="97199128-8701-401c-bb22-55e0f0239271" containerID="efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69" exitCode=0 Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.130942 5136 generic.go:334] "Generic (PLEG): container finished" podID="97199128-8701-401c-bb22-55e0f0239271" containerID="4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3" exitCode=143 Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.133199 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97199128-8701-401c-bb22-55e0f0239271","Type":"ContainerDied","Data":"efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69"} Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.133245 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97199128-8701-401c-bb22-55e0f0239271","Type":"ContainerDied","Data":"4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3"} Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.133261 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97199128-8701-401c-bb22-55e0f0239271","Type":"ContainerDied","Data":"636e3949d027a18021c284e372d93d822e803b424afcc4fb424553c251ed3c72"} Mar 20 08:50:38 crc kubenswrapper[5136]: 
I0320 08:50:38.133323 5136 scope.go:117] "RemoveContainer" containerID="efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.133561 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.155556 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"416c7b2f-db10-4191-821f-19c79bf4a3b6","Type":"ContainerStarted","Data":"de0681385b751e6fbc3db0d7c1a647f36f508772ed5c4a96334e28fe70febb26"} Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.155610 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"416c7b2f-db10-4191-821f-19c79bf4a3b6","Type":"ContainerStarted","Data":"18bbdbf6b4b7085096b8a4c5650b4a999121b8fffe8ad31c3a29f6c89c1e9ff8"} Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.194205 5136 scope.go:117] "RemoveContainer" containerID="4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.194536 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.206532 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.214656 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 08:50:38 crc kubenswrapper[5136]: E0320 08:50:38.215132 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97199128-8701-401c-bb22-55e0f0239271" containerName="glance-httpd" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.215152 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="97199128-8701-401c-bb22-55e0f0239271" 
containerName="glance-httpd" Mar 20 08:50:38 crc kubenswrapper[5136]: E0320 08:50:38.215176 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97199128-8701-401c-bb22-55e0f0239271" containerName="glance-log" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.215183 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="97199128-8701-401c-bb22-55e0f0239271" containerName="glance-log" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.215336 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="97199128-8701-401c-bb22-55e0f0239271" containerName="glance-httpd" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.215363 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="97199128-8701-401c-bb22-55e0f0239271" containerName="glance-log" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.216345 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.218486 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.218735 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.224951 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.247285 5136 scope.go:117] "RemoveContainer" containerID="efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69" Mar 20 08:50:38 crc kubenswrapper[5136]: E0320 08:50:38.248736 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69\": container with ID starting with 
efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69 not found: ID does not exist" containerID="efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.248784 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69"} err="failed to get container status \"efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69\": rpc error: code = NotFound desc = could not find container \"efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69\": container with ID starting with efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69 not found: ID does not exist" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.248895 5136 scope.go:117] "RemoveContainer" containerID="4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3" Mar 20 08:50:38 crc kubenswrapper[5136]: E0320 08:50:38.249905 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3\": container with ID starting with 4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3 not found: ID does not exist" containerID="4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.249949 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3"} err="failed to get container status \"4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3\": rpc error: code = NotFound desc = could not find container \"4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3\": container with ID starting with 4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3 not found: ID does not 
exist" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.249969 5136 scope.go:117] "RemoveContainer" containerID="efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.251105 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69"} err="failed to get container status \"efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69\": rpc error: code = NotFound desc = could not find container \"efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69\": container with ID starting with efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69 not found: ID does not exist" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.251140 5136 scope.go:117] "RemoveContainer" containerID="4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.254328 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3"} err="failed to get container status \"4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3\": rpc error: code = NotFound desc = could not find container \"4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3\": container with ID starting with 4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3 not found: ID does not exist" Mar 20 08:50:38 crc kubenswrapper[5136]: E0320 08:50:38.267132 5136 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97199128_8701_401c_bb22_55e0f0239271.slice\": RecentStats: unable to find data in memory cache]" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.333414 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.333457 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.333573 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.333636 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c50cd831-27ab-475b-a608-0558c610394d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.333659 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c50cd831-27ab-475b-a608-0558c610394d-logs\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.333730 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l26sf\" (UniqueName: \"kubernetes.io/projected/c50cd831-27ab-475b-a608-0558c610394d-kube-api-access-l26sf\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.333777 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.409758 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97199128-8701-401c-bb22-55e0f0239271" path="/var/lib/kubelet/pods/97199128-8701-401c-bb22-55e0f0239271/volumes" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.435108 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c50cd831-27ab-475b-a608-0558c610394d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.435144 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c50cd831-27ab-475b-a608-0558c610394d-logs\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.435192 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l26sf\" (UniqueName: 
\"kubernetes.io/projected/c50cd831-27ab-475b-a608-0558c610394d-kube-api-access-l26sf\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.435229 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.435282 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.435297 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.435389 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.436747 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/c50cd831-27ab-475b-a608-0558c610394d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.436842 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c50cd831-27ab-475b-a608-0558c610394d-logs\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.441055 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.441868 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.442008 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.442287 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.456205 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l26sf\" (UniqueName: \"kubernetes.io/projected/c50cd831-27ab-475b-a608-0558c610394d-kube-api-access-l26sf\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.555571 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 08:50:39 crc kubenswrapper[5136]: I0320 08:50:39.051541 5136 scope.go:117] "RemoveContainer" containerID="a7d9dee7dfd341c20d54bcc9a10648dd04c5eaeec50a978661f3c530263c499e" Mar 20 08:50:39 crc kubenswrapper[5136]: I0320 08:50:39.054732 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 08:50:39 crc kubenswrapper[5136]: I0320 08:50:39.130321 5136 scope.go:117] "RemoveContainer" containerID="c0135a379aa43c0ef1ad29a602be6db0384857bc32c56cdc2b2d0040cfa4649a" Mar 20 08:50:39 crc kubenswrapper[5136]: I0320 08:50:39.168081 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c50cd831-27ab-475b-a608-0558c610394d","Type":"ContainerStarted","Data":"0a442b375725a08359ac9c238f48642a4c758f6fef43750c9ef6734e62c274b1"} Mar 20 08:50:39 crc kubenswrapper[5136]: I0320 08:50:39.175072 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"416c7b2f-db10-4191-821f-19c79bf4a3b6","Type":"ContainerStarted","Data":"10a9b30b7e2523cc2c1ff704e0f7c5ae56f024de6f82bf322b7b1d7e0001c420"} Mar 20 08:50:39 crc kubenswrapper[5136]: I0320 08:50:39.205801 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-default-external-api-0" podStartSLOduration=3.205779328 podStartE2EDuration="3.205779328s" podCreationTimestamp="2026-03-20 08:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:50:39.201197596 +0000 UTC m=+7271.460508747" watchObservedRunningTime="2026-03-20 08:50:39.205779328 +0000 UTC m=+7271.465090479" Mar 20 08:50:40 crc kubenswrapper[5136]: I0320 08:50:40.193546 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c50cd831-27ab-475b-a608-0558c610394d","Type":"ContainerStarted","Data":"50c452b18075758f0cf7f71c5dd923d9961f9b1f4ea4ea5b12f828970fcbc61c"} Mar 20 08:50:40 crc kubenswrapper[5136]: I0320 08:50:40.194138 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c50cd831-27ab-475b-a608-0558c610394d","Type":"ContainerStarted","Data":"ab66d88d3293a40338b7fddfe88ed7191c405a72189f7c0ee894a68ec1597fc3"} Mar 20 08:50:40 crc kubenswrapper[5136]: I0320 08:50:40.216925 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.216910343 podStartE2EDuration="2.216910343s" podCreationTimestamp="2026-03-20 08:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:50:40.213296261 +0000 UTC m=+7272.472607412" watchObservedRunningTime="2026-03-20 08:50:40.216910343 +0000 UTC m=+7272.476221494" Mar 20 08:50:42 crc kubenswrapper[5136]: I0320 08:50:42.748035 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:42 crc kubenswrapper[5136]: I0320 08:50:42.817806 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78db57ffd5-mzbfx"] Mar 20 08:50:42 
crc kubenswrapper[5136]: I0320 08:50:42.818056 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" podUID="6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" containerName="dnsmasq-dns" containerID="cri-o://70907013083abb8f01ef74071f6df304ed026e839ff073ff1e663910330022e5" gracePeriod=10
Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.232345 5136 generic.go:334] "Generic (PLEG): container finished" podID="6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" containerID="70907013083abb8f01ef74071f6df304ed026e839ff073ff1e663910330022e5" exitCode=0
Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.232560 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" event={"ID":"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22","Type":"ContainerDied","Data":"70907013083abb8f01ef74071f6df304ed026e839ff073ff1e663910330022e5"}
Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.284257 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx"
Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.337864 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-config\") pod \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") "
Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.337929 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hffqp\" (UniqueName: \"kubernetes.io/projected/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-kube-api-access-hffqp\") pod \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") "
Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.337976 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-ovsdbserver-sb\") pod \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") "
Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.338116 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-ovsdbserver-nb\") pod \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") "
Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.338168 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-dns-svc\") pod \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") "
Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.343828 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-kube-api-access-hffqp" (OuterVolumeSpecName: "kube-api-access-hffqp") pod "6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" (UID: "6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22"). InnerVolumeSpecName "kube-api-access-hffqp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.348644 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hffqp\" (UniqueName: \"kubernetes.io/projected/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-kube-api-access-hffqp\") on node \"crc\" DevicePath \"\""
Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.380531 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" (UID: "6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.380599 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" (UID: "6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.385841 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-config" (OuterVolumeSpecName: "config") pod "6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" (UID: "6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.387010 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" (UID: "6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.450345 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.450386 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.450399 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.450410 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-config\") on node \"crc\" DevicePath \"\""
Mar 20 08:50:44 crc kubenswrapper[5136]: I0320 08:50:44.249224 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" event={"ID":"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22","Type":"ContainerDied","Data":"1060eb1db927e589f382cb4a2cb4756b677bac6f172c644881bb0448e3071e35"}
Mar 20 08:50:44 crc kubenswrapper[5136]: I0320 08:50:44.249301 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx"
Mar 20 08:50:44 crc kubenswrapper[5136]: I0320 08:50:44.249608 5136 scope.go:117] "RemoveContainer" containerID="70907013083abb8f01ef74071f6df304ed026e839ff073ff1e663910330022e5"
Mar 20 08:50:44 crc kubenswrapper[5136]: I0320 08:50:44.291740 5136 scope.go:117] "RemoveContainer" containerID="3ff6f40c02029e2b21fb76159d8a4a46d3d5ada3e12371991cb9ff0c2549f74e"
Mar 20 08:50:44 crc kubenswrapper[5136]: I0320 08:50:44.297981 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78db57ffd5-mzbfx"]
Mar 20 08:50:44 crc kubenswrapper[5136]: I0320 08:50:44.314651 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78db57ffd5-mzbfx"]
Mar 20 08:50:44 crc kubenswrapper[5136]: I0320 08:50:44.411049 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" path="/var/lib/kubelet/pods/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22/volumes"
Mar 20 08:50:45 crc kubenswrapper[5136]: I0320 08:50:45.822507 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 08:50:45 crc kubenswrapper[5136]: I0320 08:50:45.822597 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 08:50:46 crc kubenswrapper[5136]: I0320 08:50:46.482776 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Mar 20 08:50:46 crc kubenswrapper[5136]: I0320 08:50:46.483136 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Mar 20 08:50:46 crc kubenswrapper[5136]: I0320 08:50:46.531632 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Mar 20 08:50:46 crc kubenswrapper[5136]: I0320 08:50:46.531737 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Mar 20 08:50:47 crc kubenswrapper[5136]: I0320 08:50:47.282136 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Mar 20 08:50:47 crc kubenswrapper[5136]: I0320 08:50:47.282220 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Mar 20 08:50:48 crc kubenswrapper[5136]: I0320 08:50:48.744004 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-7bb4cc7c98-dzzhq" podUID="4c981a48-1ae6-4c06-90ed-4333de6a14d2" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.53:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:50:49 crc kubenswrapper[5136]: I0320 08:50:49.430379 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:49 crc kubenswrapper[5136]: I0320 08:50:49.430735 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:49 crc kubenswrapper[5136]: I0320 08:50:49.456643 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:49 crc kubenswrapper[5136]: I0320 08:50:49.458681 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:50 crc kubenswrapper[5136]: I0320 08:50:50.443490 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:50 crc kubenswrapper[5136]: I0320 08:50:50.443765 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:50 crc kubenswrapper[5136]: I0320 08:50:50.530262 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Mar 20 08:50:50 crc kubenswrapper[5136]: I0320 08:50:50.530451 5136 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 20 08:50:50 crc kubenswrapper[5136]: I0320 08:50:50.546076 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Mar 20 08:50:52 crc kubenswrapper[5136]: I0320 08:50:52.418737 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:52 crc kubenswrapper[5136]: I0320 08:50:52.446954 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.421138 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-85wqc"]
Mar 20 08:51:00 crc kubenswrapper[5136]: E0320 08:51:00.421921 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" containerName="init"
Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.421935 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" containerName="init"
Mar 20 08:51:00 crc kubenswrapper[5136]: E0320 08:51:00.421971 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" containerName="dnsmasq-dns"
Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.421977 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" containerName="dnsmasq-dns"
Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.422140 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" containerName="dnsmasq-dns"
Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.422643 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-85wqc"
Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.434931 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-85wqc"]
Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.517391 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-e4e3-account-create-update-htnkq"]
Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.518414 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e4e3-account-create-update-htnkq"
Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.520317 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.523327 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2204982c-c8aa-4b18-a455-71915264f644-operator-scripts\") pod \"placement-db-create-85wqc\" (UID: \"2204982c-c8aa-4b18-a455-71915264f644\") " pod="openstack/placement-db-create-85wqc"
Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.523365 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg4fl\" (UniqueName: \"kubernetes.io/projected/2204982c-c8aa-4b18-a455-71915264f644-kube-api-access-sg4fl\") pod \"placement-db-create-85wqc\" (UID: \"2204982c-c8aa-4b18-a455-71915264f644\") " pod="openstack/placement-db-create-85wqc"
Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.531656 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e4e3-account-create-update-htnkq"]
Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.624655 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b48a8f95-9236-458f-a8ab-fb15f6878172-operator-scripts\") pod \"placement-e4e3-account-create-update-htnkq\" (UID: \"b48a8f95-9236-458f-a8ab-fb15f6878172\") " pod="openstack/placement-e4e3-account-create-update-htnkq"
Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.624712 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26g8n\" (UniqueName: \"kubernetes.io/projected/b48a8f95-9236-458f-a8ab-fb15f6878172-kube-api-access-26g8n\") pod \"placement-e4e3-account-create-update-htnkq\" (UID: \"b48a8f95-9236-458f-a8ab-fb15f6878172\") " pod="openstack/placement-e4e3-account-create-update-htnkq"
Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.624917 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2204982c-c8aa-4b18-a455-71915264f644-operator-scripts\") pod \"placement-db-create-85wqc\" (UID: \"2204982c-c8aa-4b18-a455-71915264f644\") " pod="openstack/placement-db-create-85wqc"
Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.624996 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg4fl\" (UniqueName: \"kubernetes.io/projected/2204982c-c8aa-4b18-a455-71915264f644-kube-api-access-sg4fl\") pod \"placement-db-create-85wqc\" (UID: \"2204982c-c8aa-4b18-a455-71915264f644\") " pod="openstack/placement-db-create-85wqc"
Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.625667 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2204982c-c8aa-4b18-a455-71915264f644-operator-scripts\") pod \"placement-db-create-85wqc\" (UID: \"2204982c-c8aa-4b18-a455-71915264f644\") " pod="openstack/placement-db-create-85wqc"
Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.646629 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg4fl\" (UniqueName: \"kubernetes.io/projected/2204982c-c8aa-4b18-a455-71915264f644-kube-api-access-sg4fl\") pod \"placement-db-create-85wqc\" (UID: \"2204982c-c8aa-4b18-a455-71915264f644\") " pod="openstack/placement-db-create-85wqc"
Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.727136 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b48a8f95-9236-458f-a8ab-fb15f6878172-operator-scripts\") pod \"placement-e4e3-account-create-update-htnkq\" (UID: \"b48a8f95-9236-458f-a8ab-fb15f6878172\") " pod="openstack/placement-e4e3-account-create-update-htnkq"
Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.727211 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26g8n\" (UniqueName: \"kubernetes.io/projected/b48a8f95-9236-458f-a8ab-fb15f6878172-kube-api-access-26g8n\") pod \"placement-e4e3-account-create-update-htnkq\" (UID: \"b48a8f95-9236-458f-a8ab-fb15f6878172\") " pod="openstack/placement-e4e3-account-create-update-htnkq"
Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.729842 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b48a8f95-9236-458f-a8ab-fb15f6878172-operator-scripts\") pod \"placement-e4e3-account-create-update-htnkq\" (UID: \"b48a8f95-9236-458f-a8ab-fb15f6878172\") " pod="openstack/placement-e4e3-account-create-update-htnkq"
Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.740136 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-85wqc"
Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.745710 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26g8n\" (UniqueName: \"kubernetes.io/projected/b48a8f95-9236-458f-a8ab-fb15f6878172-kube-api-access-26g8n\") pod \"placement-e4e3-account-create-update-htnkq\" (UID: \"b48a8f95-9236-458f-a8ab-fb15f6878172\") " pod="openstack/placement-e4e3-account-create-update-htnkq"
Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.834645 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e4e3-account-create-update-htnkq"
Mar 20 08:51:01 crc kubenswrapper[5136]: I0320 08:51:01.206095 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-85wqc"]
Mar 20 08:51:01 crc kubenswrapper[5136]: I0320 08:51:01.288172 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e4e3-account-create-update-htnkq"]
Mar 20 08:51:01 crc kubenswrapper[5136]: W0320 08:51:01.292777 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb48a8f95_9236_458f_a8ab_fb15f6878172.slice/crio-5b0d6ec6575e4690b9ae3e03cb3f839ab86e11315534a360a50495ad74bfdceb WatchSource:0}: Error finding container 5b0d6ec6575e4690b9ae3e03cb3f839ab86e11315534a360a50495ad74bfdceb: Status 404 returned error can't find the container with id 5b0d6ec6575e4690b9ae3e03cb3f839ab86e11315534a360a50495ad74bfdceb
Mar 20 08:51:01 crc kubenswrapper[5136]: I0320 08:51:01.555320 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e4e3-account-create-update-htnkq" event={"ID":"b48a8f95-9236-458f-a8ab-fb15f6878172","Type":"ContainerStarted","Data":"f2ac24f272d6a9df55f1b17c9f403e8fc1875096d56818de2641768b249208a8"}
Mar 20 08:51:01 crc kubenswrapper[5136]: I0320 08:51:01.555374 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e4e3-account-create-update-htnkq" event={"ID":"b48a8f95-9236-458f-a8ab-fb15f6878172","Type":"ContainerStarted","Data":"5b0d6ec6575e4690b9ae3e03cb3f839ab86e11315534a360a50495ad74bfdceb"}
Mar 20 08:51:01 crc kubenswrapper[5136]: I0320 08:51:01.559542 5136 generic.go:334] "Generic (PLEG): container finished" podID="2204982c-c8aa-4b18-a455-71915264f644" containerID="97846ae11696236889350b3c9161e329e32ca1f71469f4c6bc5cd1b32b64434b" exitCode=0
Mar 20 08:51:01 crc kubenswrapper[5136]: I0320 08:51:01.559579 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-85wqc" event={"ID":"2204982c-c8aa-4b18-a455-71915264f644","Type":"ContainerDied","Data":"97846ae11696236889350b3c9161e329e32ca1f71469f4c6bc5cd1b32b64434b"}
Mar 20 08:51:01 crc kubenswrapper[5136]: I0320 08:51:01.559602 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-85wqc" event={"ID":"2204982c-c8aa-4b18-a455-71915264f644","Type":"ContainerStarted","Data":"27c28f71bc6ec290e2c720dfcc5c38d23cb3d9f05968c5658b2c6d4079823d85"}
Mar 20 08:51:01 crc kubenswrapper[5136]: I0320 08:51:01.583636 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-e4e3-account-create-update-htnkq" podStartSLOduration=1.583618223 podStartE2EDuration="1.583618223s" podCreationTimestamp="2026-03-20 08:51:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:51:01.572582162 +0000 UTC m=+7293.831893313" watchObservedRunningTime="2026-03-20 08:51:01.583618223 +0000 UTC m=+7293.842929374"
Mar 20 08:51:02 crc kubenswrapper[5136]: I0320 08:51:02.574788 5136 generic.go:334] "Generic (PLEG): container finished" podID="b48a8f95-9236-458f-a8ab-fb15f6878172" containerID="f2ac24f272d6a9df55f1b17c9f403e8fc1875096d56818de2641768b249208a8" exitCode=0
Mar 20 08:51:02 crc kubenswrapper[5136]: I0320 08:51:02.574864 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e4e3-account-create-update-htnkq" event={"ID":"b48a8f95-9236-458f-a8ab-fb15f6878172","Type":"ContainerDied","Data":"f2ac24f272d6a9df55f1b17c9f403e8fc1875096d56818de2641768b249208a8"}
Mar 20 08:51:02 crc kubenswrapper[5136]: I0320 08:51:02.879034 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-85wqc"
Mar 20 08:51:02 crc kubenswrapper[5136]: I0320 08:51:02.966286 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg4fl\" (UniqueName: \"kubernetes.io/projected/2204982c-c8aa-4b18-a455-71915264f644-kube-api-access-sg4fl\") pod \"2204982c-c8aa-4b18-a455-71915264f644\" (UID: \"2204982c-c8aa-4b18-a455-71915264f644\") "
Mar 20 08:51:02 crc kubenswrapper[5136]: I0320 08:51:02.966488 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2204982c-c8aa-4b18-a455-71915264f644-operator-scripts\") pod \"2204982c-c8aa-4b18-a455-71915264f644\" (UID: \"2204982c-c8aa-4b18-a455-71915264f644\") "
Mar 20 08:51:02 crc kubenswrapper[5136]: I0320 08:51:02.967533 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2204982c-c8aa-4b18-a455-71915264f644-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2204982c-c8aa-4b18-a455-71915264f644" (UID: "2204982c-c8aa-4b18-a455-71915264f644"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:51:02 crc kubenswrapper[5136]: I0320 08:51:02.973912 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2204982c-c8aa-4b18-a455-71915264f644-kube-api-access-sg4fl" (OuterVolumeSpecName: "kube-api-access-sg4fl") pod "2204982c-c8aa-4b18-a455-71915264f644" (UID: "2204982c-c8aa-4b18-a455-71915264f644"). InnerVolumeSpecName "kube-api-access-sg4fl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:51:03 crc kubenswrapper[5136]: I0320 08:51:03.069142 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg4fl\" (UniqueName: \"kubernetes.io/projected/2204982c-c8aa-4b18-a455-71915264f644-kube-api-access-sg4fl\") on node \"crc\" DevicePath \"\""
Mar 20 08:51:03 crc kubenswrapper[5136]: I0320 08:51:03.069425 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2204982c-c8aa-4b18-a455-71915264f644-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 08:51:03 crc kubenswrapper[5136]: I0320 08:51:03.594008 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-85wqc" event={"ID":"2204982c-c8aa-4b18-a455-71915264f644","Type":"ContainerDied","Data":"27c28f71bc6ec290e2c720dfcc5c38d23cb3d9f05968c5658b2c6d4079823d85"}
Mar 20 08:51:03 crc kubenswrapper[5136]: I0320 08:51:03.597006 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27c28f71bc6ec290e2c720dfcc5c38d23cb3d9f05968c5658b2c6d4079823d85"
Mar 20 08:51:03 crc kubenswrapper[5136]: I0320 08:51:03.594074 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-85wqc"
Mar 20 08:51:03 crc kubenswrapper[5136]: I0320 08:51:03.967805 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e4e3-account-create-update-htnkq"
Mar 20 08:51:04 crc kubenswrapper[5136]: I0320 08:51:04.091900 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b48a8f95-9236-458f-a8ab-fb15f6878172-operator-scripts\") pod \"b48a8f95-9236-458f-a8ab-fb15f6878172\" (UID: \"b48a8f95-9236-458f-a8ab-fb15f6878172\") "
Mar 20 08:51:04 crc kubenswrapper[5136]: I0320 08:51:04.092051 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26g8n\" (UniqueName: \"kubernetes.io/projected/b48a8f95-9236-458f-a8ab-fb15f6878172-kube-api-access-26g8n\") pod \"b48a8f95-9236-458f-a8ab-fb15f6878172\" (UID: \"b48a8f95-9236-458f-a8ab-fb15f6878172\") "
Mar 20 08:51:04 crc kubenswrapper[5136]: I0320 08:51:04.092495 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b48a8f95-9236-458f-a8ab-fb15f6878172-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b48a8f95-9236-458f-a8ab-fb15f6878172" (UID: "b48a8f95-9236-458f-a8ab-fb15f6878172"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:51:04 crc kubenswrapper[5136]: I0320 08:51:04.096033 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b48a8f95-9236-458f-a8ab-fb15f6878172-kube-api-access-26g8n" (OuterVolumeSpecName: "kube-api-access-26g8n") pod "b48a8f95-9236-458f-a8ab-fb15f6878172" (UID: "b48a8f95-9236-458f-a8ab-fb15f6878172"). InnerVolumeSpecName "kube-api-access-26g8n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:51:04 crc kubenswrapper[5136]: I0320 08:51:04.194711 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b48a8f95-9236-458f-a8ab-fb15f6878172-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 08:51:04 crc kubenswrapper[5136]: I0320 08:51:04.194764 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26g8n\" (UniqueName: \"kubernetes.io/projected/b48a8f95-9236-458f-a8ab-fb15f6878172-kube-api-access-26g8n\") on node \"crc\" DevicePath \"\""
Mar 20 08:51:04 crc kubenswrapper[5136]: I0320 08:51:04.605920 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e4e3-account-create-update-htnkq" event={"ID":"b48a8f95-9236-458f-a8ab-fb15f6878172","Type":"ContainerDied","Data":"5b0d6ec6575e4690b9ae3e03cb3f839ab86e11315534a360a50495ad74bfdceb"}
Mar 20 08:51:04 crc kubenswrapper[5136]: I0320 08:51:04.605955 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b0d6ec6575e4690b9ae3e03cb3f839ab86e11315534a360a50495ad74bfdceb"
Mar 20 08:51:04 crc kubenswrapper[5136]: I0320 08:51:04.606007 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e4e3-account-create-update-htnkq"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.071168 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2"]
Mar 20 08:51:06 crc kubenswrapper[5136]: E0320 08:51:06.071933 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b48a8f95-9236-458f-a8ab-fb15f6878172" containerName="mariadb-account-create-update"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.071954 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b48a8f95-9236-458f-a8ab-fb15f6878172" containerName="mariadb-account-create-update"
Mar 20 08:51:06 crc kubenswrapper[5136]: E0320 08:51:06.071996 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2204982c-c8aa-4b18-a455-71915264f644" containerName="mariadb-database-create"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.072005 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2204982c-c8aa-4b18-a455-71915264f644" containerName="mariadb-database-create"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.072168 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="2204982c-c8aa-4b18-a455-71915264f644" containerName="mariadb-database-create"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.072186 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="b48a8f95-9236-458f-a8ab-fb15f6878172" containerName="mariadb-account-create-update"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.076421 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.095532 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2"]
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.117509 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-2d5zx"]
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.118895 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2d5zx"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.120937 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-97ndb"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.121228 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.121394 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.157992 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2d5zx"]
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.248119 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a2341fa-02fc-4b08-a2a4-2272078db5d9-logs\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.248196 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-config\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.248230 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-dns-svc\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.248259 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-config-data\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.248323 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z9vm\" (UniqueName: \"kubernetes.io/projected/6a2341fa-02fc-4b08-a2a4-2272078db5d9-kube-api-access-2z9vm\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.248367 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-scripts\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.248394 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-combined-ca-bundle\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.248456 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmftd\" (UniqueName: \"kubernetes.io/projected/2902cdfa-3695-49ec-a36d-73082b9aa5a5-kube-api-access-mmftd\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.248480 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-ovsdbserver-nb\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.248500 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-ovsdbserver-sb\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.349696 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-dns-svc\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.350010 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-config-data\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.350061 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z9vm\" (UniqueName: \"kubernetes.io/projected/6a2341fa-02fc-4b08-a2a4-2272078db5d9-kube-api-access-2z9vm\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.350092 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-scripts\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.350114 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-combined-ca-bundle\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.350158 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmftd\" (UniqueName: \"kubernetes.io/projected/2902cdfa-3695-49ec-a36d-73082b9aa5a5-kube-api-access-mmftd\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.350174 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-ovsdbserver-nb\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.350188 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-ovsdbserver-sb\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.350225 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a2341fa-02fc-4b08-a2a4-2272078db5d9-logs\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.350252 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-config\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.351121 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-config\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.352603 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-dns-svc\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2"
Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.353762 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName:
\"kubernetes.io/empty-dir/6a2341fa-02fc-4b08-a2a4-2272078db5d9-logs\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.354024 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-ovsdbserver-sb\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.354865 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-ovsdbserver-nb\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.359710 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-scripts\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.359997 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-combined-ca-bundle\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.362177 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-config-data\") pod \"placement-db-sync-2d5zx\" (UID: 
\"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.370607 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z9vm\" (UniqueName: \"kubernetes.io/projected/6a2341fa-02fc-4b08-a2a4-2272078db5d9-kube-api-access-2z9vm\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.376889 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmftd\" (UniqueName: \"kubernetes.io/projected/2902cdfa-3695-49ec-a36d-73082b9aa5a5-kube-api-access-mmftd\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.402567 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.440274 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.897978 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2"] Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.956505 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2d5zx"] Mar 20 08:51:06 crc kubenswrapper[5136]: W0320 08:51:06.969025 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a2341fa_02fc_4b08_a2a4_2272078db5d9.slice/crio-c3ddb769b7a99226c67cb9c4282c1e01b292ce035d09fab727db494ec6b52bc6 WatchSource:0}: Error finding container c3ddb769b7a99226c67cb9c4282c1e01b292ce035d09fab727db494ec6b52bc6: Status 404 returned error can't find the container with id c3ddb769b7a99226c67cb9c4282c1e01b292ce035d09fab727db494ec6b52bc6 Mar 20 08:51:07 crc kubenswrapper[5136]: I0320 08:51:07.777499 5136 generic.go:334] "Generic (PLEG): container finished" podID="2902cdfa-3695-49ec-a36d-73082b9aa5a5" containerID="174d06d5a4cd8a3ee5fe8c3756254a01a6a8554baf9bae2be57775301d65bd05" exitCode=0 Mar 20 08:51:07 crc kubenswrapper[5136]: I0320 08:51:07.777547 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" event={"ID":"2902cdfa-3695-49ec-a36d-73082b9aa5a5","Type":"ContainerDied","Data":"174d06d5a4cd8a3ee5fe8c3756254a01a6a8554baf9bae2be57775301d65bd05"} Mar 20 08:51:07 crc kubenswrapper[5136]: I0320 08:51:07.777848 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" event={"ID":"2902cdfa-3695-49ec-a36d-73082b9aa5a5","Type":"ContainerStarted","Data":"3724ba25b3c3d5b60071b8d78e6fb6e8e43e3c7f75f11f016def345af42800c4"} Mar 20 08:51:07 crc kubenswrapper[5136]: I0320 08:51:07.779522 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2d5zx" 
event={"ID":"6a2341fa-02fc-4b08-a2a4-2272078db5d9","Type":"ContainerStarted","Data":"c3ddb769b7a99226c67cb9c4282c1e01b292ce035d09fab727db494ec6b52bc6"} Mar 20 08:51:08 crc kubenswrapper[5136]: I0320 08:51:08.855645 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" event={"ID":"2902cdfa-3695-49ec-a36d-73082b9aa5a5","Type":"ContainerStarted","Data":"11cce7a508814881b536262c59bb79c78ef540e64c7ff86205cc1f7942262b6d"} Mar 20 08:51:08 crc kubenswrapper[5136]: I0320 08:51:08.856244 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:51:08 crc kubenswrapper[5136]: I0320 08:51:08.880131 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" podStartSLOduration=2.880108205 podStartE2EDuration="2.880108205s" podCreationTimestamp="2026-03-20 08:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:51:08.876255936 +0000 UTC m=+7301.135567087" watchObservedRunningTime="2026-03-20 08:51:08.880108205 +0000 UTC m=+7301.139419356" Mar 20 08:51:11 crc kubenswrapper[5136]: I0320 08:51:11.879146 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2d5zx" event={"ID":"6a2341fa-02fc-4b08-a2a4-2272078db5d9","Type":"ContainerStarted","Data":"ab11099fc5fbb9ac6e0c34feae9a40a9addc504e685cccc5bc8ac39fbfb3793c"} Mar 20 08:51:11 crc kubenswrapper[5136]: I0320 08:51:11.904509 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-2d5zx" podStartSLOduration=1.974774254 podStartE2EDuration="5.90448774s" podCreationTimestamp="2026-03-20 08:51:06 +0000 UTC" firstStartedPulling="2026-03-20 08:51:06.971988089 +0000 UTC m=+7299.231299230" lastFinishedPulling="2026-03-20 08:51:10.901701575 +0000 UTC m=+7303.161012716" 
observedRunningTime="2026-03-20 08:51:11.895583965 +0000 UTC m=+7304.154895146" watchObservedRunningTime="2026-03-20 08:51:11.90448774 +0000 UTC m=+7304.163798891" Mar 20 08:51:12 crc kubenswrapper[5136]: I0320 08:51:12.892221 5136 generic.go:334] "Generic (PLEG): container finished" podID="6a2341fa-02fc-4b08-a2a4-2272078db5d9" containerID="ab11099fc5fbb9ac6e0c34feae9a40a9addc504e685cccc5bc8ac39fbfb3793c" exitCode=0 Mar 20 08:51:12 crc kubenswrapper[5136]: I0320 08:51:12.892261 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2d5zx" event={"ID":"6a2341fa-02fc-4b08-a2a4-2272078db5d9","Type":"ContainerDied","Data":"ab11099fc5fbb9ac6e0c34feae9a40a9addc504e685cccc5bc8ac39fbfb3793c"} Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.337203 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.467314 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z9vm\" (UniqueName: \"kubernetes.io/projected/6a2341fa-02fc-4b08-a2a4-2272078db5d9-kube-api-access-2z9vm\") pod \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.467652 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a2341fa-02fc-4b08-a2a4-2272078db5d9-logs\") pod \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.467736 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-config-data\") pod \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " Mar 20 08:51:14 crc kubenswrapper[5136]: 
I0320 08:51:14.467762 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-combined-ca-bundle\") pod \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.467916 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-scripts\") pod \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.467960 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a2341fa-02fc-4b08-a2a4-2272078db5d9-logs" (OuterVolumeSpecName: "logs") pod "6a2341fa-02fc-4b08-a2a4-2272078db5d9" (UID: "6a2341fa-02fc-4b08-a2a4-2272078db5d9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.468399 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a2341fa-02fc-4b08-a2a4-2272078db5d9-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.475261 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-scripts" (OuterVolumeSpecName: "scripts") pod "6a2341fa-02fc-4b08-a2a4-2272078db5d9" (UID: "6a2341fa-02fc-4b08-a2a4-2272078db5d9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.487101 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a2341fa-02fc-4b08-a2a4-2272078db5d9-kube-api-access-2z9vm" (OuterVolumeSpecName: "kube-api-access-2z9vm") pod "6a2341fa-02fc-4b08-a2a4-2272078db5d9" (UID: "6a2341fa-02fc-4b08-a2a4-2272078db5d9"). InnerVolumeSpecName "kube-api-access-2z9vm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.494788 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a2341fa-02fc-4b08-a2a4-2272078db5d9" (UID: "6a2341fa-02fc-4b08-a2a4-2272078db5d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.501471 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-config-data" (OuterVolumeSpecName: "config-data") pod "6a2341fa-02fc-4b08-a2a4-2272078db5d9" (UID: "6a2341fa-02fc-4b08-a2a4-2272078db5d9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.572668 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z9vm\" (UniqueName: \"kubernetes.io/projected/6a2341fa-02fc-4b08-a2a4-2272078db5d9-kube-api-access-2z9vm\") on node \"crc\" DevicePath \"\"" Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.572731 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.572744 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.572757 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.911919 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2d5zx" event={"ID":"6a2341fa-02fc-4b08-a2a4-2272078db5d9","Type":"ContainerDied","Data":"c3ddb769b7a99226c67cb9c4282c1e01b292ce035d09fab727db494ec6b52bc6"} Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.911966 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3ddb769b7a99226c67cb9c4282c1e01b292ce035d09fab727db494ec6b52bc6" Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.912065 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.136909 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-674ffbb556-dfk75"] Mar 20 08:51:15 crc kubenswrapper[5136]: E0320 08:51:15.137643 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a2341fa-02fc-4b08-a2a4-2272078db5d9" containerName="placement-db-sync" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.137691 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a2341fa-02fc-4b08-a2a4-2272078db5d9" containerName="placement-db-sync" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.138154 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a2341fa-02fc-4b08-a2a4-2272078db5d9" containerName="placement-db-sync" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.140389 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.148362 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-674ffbb556-dfk75"] Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.150737 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.150894 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-97ndb" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.151209 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.151972 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.152150 5136 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cert-placement-internal-svc" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.285781 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-combined-ca-bundle\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.286120 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-internal-tls-certs\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.286231 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-logs\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.286593 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-config-data\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.286704 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-scripts\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " 
pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.287005 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-public-tls-certs\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.287207 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4l7c\" (UniqueName: \"kubernetes.io/projected/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-kube-api-access-c4l7c\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.388688 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-config-data\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.388743 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-scripts\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.388795 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-public-tls-certs\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 
20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.388892 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4l7c\" (UniqueName: \"kubernetes.io/projected/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-kube-api-access-c4l7c\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.388976 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-combined-ca-bundle\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.389005 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-internal-tls-certs\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.389033 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-logs\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.389465 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-logs\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.392663 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-scripts\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.392864 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-public-tls-certs\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.393188 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-internal-tls-certs\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.396528 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-combined-ca-bundle\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.396735 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-config-data\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.408470 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4l7c\" (UniqueName: 
\"kubernetes.io/projected/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-kube-api-access-c4l7c\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.464601 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.822325 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.822708 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.998498 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-674ffbb556-dfk75"] Mar 20 08:51:16 crc kubenswrapper[5136]: W0320 08:51:16.005929 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7db26f77_c83b_4eb6_b513_6b0b2be6ebeb.slice/crio-102bdb1b6280dfecffcbef32145242d8452dab99236c089131fd7b267b6cf255 WatchSource:0}: Error finding container 102bdb1b6280dfecffcbef32145242d8452dab99236c089131fd7b267b6cf255: Status 404 returned error can't find the container with id 102bdb1b6280dfecffcbef32145242d8452dab99236c089131fd7b267b6cf255 Mar 20 08:51:16 crc kubenswrapper[5136]: I0320 08:51:16.412774 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:51:16 crc kubenswrapper[5136]: I0320 08:51:16.485147 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-749cf87df7-5r4jn"] Mar 20 08:51:16 crc kubenswrapper[5136]: I0320 08:51:16.485904 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" podUID="dde4894e-af50-4137-8cb4-469a0363b248" containerName="dnsmasq-dns" containerID="cri-o://0ebc36c7db7ba6a4fd167019fee7e33605b31fe2474b4deaee19fad2dbe59690" gracePeriod=10 Mar 20 08:51:16 crc kubenswrapper[5136]: I0320 08:51:16.940622 5136 generic.go:334] "Generic (PLEG): container finished" podID="dde4894e-af50-4137-8cb4-469a0363b248" containerID="0ebc36c7db7ba6a4fd167019fee7e33605b31fe2474b4deaee19fad2dbe59690" exitCode=0 Mar 20 08:51:16 crc kubenswrapper[5136]: I0320 08:51:16.940710 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" event={"ID":"dde4894e-af50-4137-8cb4-469a0363b248","Type":"ContainerDied","Data":"0ebc36c7db7ba6a4fd167019fee7e33605b31fe2474b4deaee19fad2dbe59690"} Mar 20 08:51:16 crc kubenswrapper[5136]: I0320 08:51:16.943349 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-674ffbb556-dfk75" event={"ID":"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb","Type":"ContainerStarted","Data":"9849b01109acdf6259a4119de8fce764d067a8b917a7d5b6965b6bd00e1aa60a"} Mar 20 08:51:16 crc kubenswrapper[5136]: I0320 08:51:16.943392 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-674ffbb556-dfk75" event={"ID":"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb","Type":"ContainerStarted","Data":"b1983019e3cc484fe8f15d4854d502ecd0a69d384bd0f1cd05cd048f9cc159a0"} Mar 20 08:51:16 crc kubenswrapper[5136]: I0320 08:51:16.943403 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-674ffbb556-dfk75" 
event={"ID":"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb","Type":"ContainerStarted","Data":"102bdb1b6280dfecffcbef32145242d8452dab99236c089131fd7b267b6cf255"} Mar 20 08:51:16 crc kubenswrapper[5136]: I0320 08:51:16.943905 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:16 crc kubenswrapper[5136]: I0320 08:51:16.943953 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:16 crc kubenswrapper[5136]: I0320 08:51:16.974689 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-674ffbb556-dfk75" podStartSLOduration=1.9746687939999998 podStartE2EDuration="1.974668794s" podCreationTimestamp="2026-03-20 08:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:51:16.965929763 +0000 UTC m=+7309.225240954" watchObservedRunningTime="2026-03-20 08:51:16.974668794 +0000 UTC m=+7309.233979955" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.046389 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.226561 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-ovsdbserver-nb\") pod \"dde4894e-af50-4137-8cb4-469a0363b248\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.226635 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-ovsdbserver-sb\") pod \"dde4894e-af50-4137-8cb4-469a0363b248\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.226692 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-dns-svc\") pod \"dde4894e-af50-4137-8cb4-469a0363b248\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.226874 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-config\") pod \"dde4894e-af50-4137-8cb4-469a0363b248\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.226905 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkqpm\" (UniqueName: \"kubernetes.io/projected/dde4894e-af50-4137-8cb4-469a0363b248-kube-api-access-xkqpm\") pod \"dde4894e-af50-4137-8cb4-469a0363b248\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.238974 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/dde4894e-af50-4137-8cb4-469a0363b248-kube-api-access-xkqpm" (OuterVolumeSpecName: "kube-api-access-xkqpm") pod "dde4894e-af50-4137-8cb4-469a0363b248" (UID: "dde4894e-af50-4137-8cb4-469a0363b248"). InnerVolumeSpecName "kube-api-access-xkqpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.268223 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-config" (OuterVolumeSpecName: "config") pod "dde4894e-af50-4137-8cb4-469a0363b248" (UID: "dde4894e-af50-4137-8cb4-469a0363b248"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.285214 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dde4894e-af50-4137-8cb4-469a0363b248" (UID: "dde4894e-af50-4137-8cb4-469a0363b248"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.289192 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dde4894e-af50-4137-8cb4-469a0363b248" (UID: "dde4894e-af50-4137-8cb4-469a0363b248"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.298551 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dde4894e-af50-4137-8cb4-469a0363b248" (UID: "dde4894e-af50-4137-8cb4-469a0363b248"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.329073 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.329107 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkqpm\" (UniqueName: \"kubernetes.io/projected/dde4894e-af50-4137-8cb4-469a0363b248-kube-api-access-xkqpm\") on node \"crc\" DevicePath \"\"" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.329119 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.329190 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.329200 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.953673 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" event={"ID":"dde4894e-af50-4137-8cb4-469a0363b248","Type":"ContainerDied","Data":"82b5c6a87292a840c7626e0d30263221c4e7065fd0f2c1e7d5b5956e5a5ba695"} Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.953725 5136 scope.go:117] "RemoveContainer" containerID="0ebc36c7db7ba6a4fd167019fee7e33605b31fe2474b4deaee19fad2dbe59690" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.953859 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.990914 5136 scope.go:117] "RemoveContainer" containerID="38d62cb7f741d8ee572a6c46fb9a977b9c469e6392e17bbc74b7fe94c516a377" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.994159 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-749cf87df7-5r4jn"] Mar 20 08:51:18 crc kubenswrapper[5136]: I0320 08:51:18.001277 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-749cf87df7-5r4jn"] Mar 20 08:51:18 crc kubenswrapper[5136]: I0320 08:51:18.412868 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dde4894e-af50-4137-8cb4-469a0363b248" path="/var/lib/kubelet/pods/dde4894e-af50-4137-8cb4-469a0363b248/volumes" Mar 20 08:51:45 crc kubenswrapper[5136]: I0320 08:51:45.822434 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:51:45 crc kubenswrapper[5136]: I0320 08:51:45.823044 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:51:45 crc kubenswrapper[5136]: I0320 08:51:45.823102 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 08:51:45 crc kubenswrapper[5136]: I0320 08:51:45.823887 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"a43b1feb308763542c53114c5f178c20bc1d59b30b0c579b39a73e99b6e66c62"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 08:51:45 crc kubenswrapper[5136]: I0320 08:51:45.823952 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://a43b1feb308763542c53114c5f178c20bc1d59b30b0c579b39a73e99b6e66c62" gracePeriod=600 Mar 20 08:51:46 crc kubenswrapper[5136]: I0320 08:51:46.195219 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="a43b1feb308763542c53114c5f178c20bc1d59b30b0c579b39a73e99b6e66c62" exitCode=0 Mar 20 08:51:46 crc kubenswrapper[5136]: I0320 08:51:46.195268 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"a43b1feb308763542c53114c5f178c20bc1d59b30b0c579b39a73e99b6e66c62"} Mar 20 08:51:46 crc kubenswrapper[5136]: I0320 08:51:46.195327 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2"} Mar 20 08:51:46 crc kubenswrapper[5136]: I0320 08:51:46.195378 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:51:46 crc kubenswrapper[5136]: I0320 08:51:46.518902 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:46 crc kubenswrapper[5136]: I0320 
08:51:46.545892 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.129722 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566612-gmnm6"] Mar 20 08:52:00 crc kubenswrapper[5136]: E0320 08:52:00.130618 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dde4894e-af50-4137-8cb4-469a0363b248" containerName="dnsmasq-dns" Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.130631 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dde4894e-af50-4137-8cb4-469a0363b248" containerName="dnsmasq-dns" Mar 20 08:52:00 crc kubenswrapper[5136]: E0320 08:52:00.130642 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dde4894e-af50-4137-8cb4-469a0363b248" containerName="init" Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.130648 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dde4894e-af50-4137-8cb4-469a0363b248" containerName="init" Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.130854 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dde4894e-af50-4137-8cb4-469a0363b248" containerName="dnsmasq-dns" Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.131410 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566612-gmnm6" Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.133960 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.134188 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.134551 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.136893 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566612-gmnm6"] Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.231628 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvvwh\" (UniqueName: \"kubernetes.io/projected/474fd165-50ec-4d02-9f52-eb18382cee27-kube-api-access-pvvwh\") pod \"auto-csr-approver-29566612-gmnm6\" (UID: \"474fd165-50ec-4d02-9f52-eb18382cee27\") " pod="openshift-infra/auto-csr-approver-29566612-gmnm6" Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.333358 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvvwh\" (UniqueName: \"kubernetes.io/projected/474fd165-50ec-4d02-9f52-eb18382cee27-kube-api-access-pvvwh\") pod \"auto-csr-approver-29566612-gmnm6\" (UID: \"474fd165-50ec-4d02-9f52-eb18382cee27\") " pod="openshift-infra/auto-csr-approver-29566612-gmnm6" Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.352833 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvvwh\" (UniqueName: \"kubernetes.io/projected/474fd165-50ec-4d02-9f52-eb18382cee27-kube-api-access-pvvwh\") pod \"auto-csr-approver-29566612-gmnm6\" (UID: \"474fd165-50ec-4d02-9f52-eb18382cee27\") " 
pod="openshift-infra/auto-csr-approver-29566612-gmnm6" Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.479064 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566612-gmnm6" Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.891368 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566612-gmnm6"] Mar 20 08:52:00 crc kubenswrapper[5136]: W0320 08:52:00.895793 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod474fd165_50ec_4d02_9f52_eb18382cee27.slice/crio-9fd84e8a82eb9a78055f84ab61d652223315364db9d7623732f6b7dbc837c1b8 WatchSource:0}: Error finding container 9fd84e8a82eb9a78055f84ab61d652223315364db9d7623732f6b7dbc837c1b8: Status 404 returned error can't find the container with id 9fd84e8a82eb9a78055f84ab61d652223315364db9d7623732f6b7dbc837c1b8 Mar 20 08:52:01 crc kubenswrapper[5136]: I0320 08:52:01.323249 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566612-gmnm6" event={"ID":"474fd165-50ec-4d02-9f52-eb18382cee27","Type":"ContainerStarted","Data":"9fd84e8a82eb9a78055f84ab61d652223315364db9d7623732f6b7dbc837c1b8"} Mar 20 08:52:03 crc kubenswrapper[5136]: I0320 08:52:03.348587 5136 generic.go:334] "Generic (PLEG): container finished" podID="474fd165-50ec-4d02-9f52-eb18382cee27" containerID="02bf2fddb0787ba56f7a7d4d2929f25e0b16aff46d2b34aac1bc69f87f328612" exitCode=0 Mar 20 08:52:03 crc kubenswrapper[5136]: I0320 08:52:03.348670 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566612-gmnm6" event={"ID":"474fd165-50ec-4d02-9f52-eb18382cee27","Type":"ContainerDied","Data":"02bf2fddb0787ba56f7a7d4d2929f25e0b16aff46d2b34aac1bc69f87f328612"} Mar 20 08:52:04 crc kubenswrapper[5136]: I0320 08:52:04.664675 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566612-gmnm6" Mar 20 08:52:04 crc kubenswrapper[5136]: I0320 08:52:04.717949 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvvwh\" (UniqueName: \"kubernetes.io/projected/474fd165-50ec-4d02-9f52-eb18382cee27-kube-api-access-pvvwh\") pod \"474fd165-50ec-4d02-9f52-eb18382cee27\" (UID: \"474fd165-50ec-4d02-9f52-eb18382cee27\") " Mar 20 08:52:04 crc kubenswrapper[5136]: I0320 08:52:04.723674 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/474fd165-50ec-4d02-9f52-eb18382cee27-kube-api-access-pvvwh" (OuterVolumeSpecName: "kube-api-access-pvvwh") pod "474fd165-50ec-4d02-9f52-eb18382cee27" (UID: "474fd165-50ec-4d02-9f52-eb18382cee27"). InnerVolumeSpecName "kube-api-access-pvvwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:52:04 crc kubenswrapper[5136]: I0320 08:52:04.819659 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvvwh\" (UniqueName: \"kubernetes.io/projected/474fd165-50ec-4d02-9f52-eb18382cee27-kube-api-access-pvvwh\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:05 crc kubenswrapper[5136]: I0320 08:52:05.367660 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566612-gmnm6" event={"ID":"474fd165-50ec-4d02-9f52-eb18382cee27","Type":"ContainerDied","Data":"9fd84e8a82eb9a78055f84ab61d652223315364db9d7623732f6b7dbc837c1b8"} Mar 20 08:52:05 crc kubenswrapper[5136]: I0320 08:52:05.367977 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fd84e8a82eb9a78055f84ab61d652223315364db9d7623732f6b7dbc837c1b8" Mar 20 08:52:05 crc kubenswrapper[5136]: I0320 08:52:05.367744 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566612-gmnm6" Mar 20 08:52:05 crc kubenswrapper[5136]: I0320 08:52:05.742422 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566606-rdf48"] Mar 20 08:52:05 crc kubenswrapper[5136]: I0320 08:52:05.752687 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566606-rdf48"] Mar 20 08:52:06 crc kubenswrapper[5136]: I0320 08:52:06.409585 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="895f2400-9932-4967-831f-f047de8c0f63" path="/var/lib/kubelet/pods/895f2400-9932-4967-831f-f047de8c0f63/volumes" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.270430 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-plxtl"] Mar 20 08:52:08 crc kubenswrapper[5136]: E0320 08:52:08.271104 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="474fd165-50ec-4d02-9f52-eb18382cee27" containerName="oc" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.271116 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="474fd165-50ec-4d02-9f52-eb18382cee27" containerName="oc" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.271465 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="474fd165-50ec-4d02-9f52-eb18382cee27" containerName="oc" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.272076 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-plxtl" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.277606 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-plxtl"] Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.354478 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-hkzk7"] Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.355788 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-hkzk7" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.364054 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-hkzk7"] Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.381724 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8fv7\" (UniqueName: \"kubernetes.io/projected/901ef065-f425-4ab7-b726-7d98704a58f8-kube-api-access-n8fv7\") pod \"nova-api-db-create-plxtl\" (UID: \"901ef065-f425-4ab7-b726-7d98704a58f8\") " pod="openstack/nova-api-db-create-plxtl" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.381832 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/901ef065-f425-4ab7-b726-7d98704a58f8-operator-scripts\") pod \"nova-api-db-create-plxtl\" (UID: \"901ef065-f425-4ab7-b726-7d98704a58f8\") " pod="openstack/nova-api-db-create-plxtl" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.470289 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-c7dc-account-create-update-6rchx"] Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.472008 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c7dc-account-create-update-6rchx" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.478228 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.483689 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/901ef065-f425-4ab7-b726-7d98704a58f8-operator-scripts\") pod \"nova-api-db-create-plxtl\" (UID: \"901ef065-f425-4ab7-b726-7d98704a58f8\") " pod="openstack/nova-api-db-create-plxtl" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.483889 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lps9b\" (UniqueName: \"kubernetes.io/projected/d573f1ae-c37f-487a-a059-5200647084d4-kube-api-access-lps9b\") pod \"nova-cell0-db-create-hkzk7\" (UID: \"d573f1ae-c37f-487a-a059-5200647084d4\") " pod="openstack/nova-cell0-db-create-hkzk7" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.483967 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d573f1ae-c37f-487a-a059-5200647084d4-operator-scripts\") pod \"nova-cell0-db-create-hkzk7\" (UID: \"d573f1ae-c37f-487a-a059-5200647084d4\") " pod="openstack/nova-cell0-db-create-hkzk7" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.484032 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8fv7\" (UniqueName: \"kubernetes.io/projected/901ef065-f425-4ab7-b726-7d98704a58f8-kube-api-access-n8fv7\") pod \"nova-api-db-create-plxtl\" (UID: \"901ef065-f425-4ab7-b726-7d98704a58f8\") " pod="openstack/nova-api-db-create-plxtl" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.484662 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/901ef065-f425-4ab7-b726-7d98704a58f8-operator-scripts\") pod \"nova-api-db-create-plxtl\" (UID: \"901ef065-f425-4ab7-b726-7d98704a58f8\") " pod="openstack/nova-api-db-create-plxtl" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.490633 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c7dc-account-create-update-6rchx"] Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.508395 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8fv7\" (UniqueName: \"kubernetes.io/projected/901ef065-f425-4ab7-b726-7d98704a58f8-kube-api-access-n8fv7\") pod \"nova-api-db-create-plxtl\" (UID: \"901ef065-f425-4ab7-b726-7d98704a58f8\") " pod="openstack/nova-api-db-create-plxtl" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.573679 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-m289f"] Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.574701 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-m289f" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.580960 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-m289f"] Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.586344 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cpxs\" (UniqueName: \"kubernetes.io/projected/60ddf395-2544-4ebe-b1e2-37321af6438e-kube-api-access-7cpxs\") pod \"nova-api-c7dc-account-create-update-6rchx\" (UID: \"60ddf395-2544-4ebe-b1e2-37321af6438e\") " pod="openstack/nova-api-c7dc-account-create-update-6rchx" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.586397 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ddf395-2544-4ebe-b1e2-37321af6438e-operator-scripts\") pod \"nova-api-c7dc-account-create-update-6rchx\" (UID: \"60ddf395-2544-4ebe-b1e2-37321af6438e\") " pod="openstack/nova-api-c7dc-account-create-update-6rchx" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.586442 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lps9b\" (UniqueName: \"kubernetes.io/projected/d573f1ae-c37f-487a-a059-5200647084d4-kube-api-access-lps9b\") pod \"nova-cell0-db-create-hkzk7\" (UID: \"d573f1ae-c37f-487a-a059-5200647084d4\") " pod="openstack/nova-cell0-db-create-hkzk7" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.586488 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d573f1ae-c37f-487a-a059-5200647084d4-operator-scripts\") pod \"nova-cell0-db-create-hkzk7\" (UID: \"d573f1ae-c37f-487a-a059-5200647084d4\") " pod="openstack/nova-cell0-db-create-hkzk7" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.587322 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d573f1ae-c37f-487a-a059-5200647084d4-operator-scripts\") pod \"nova-cell0-db-create-hkzk7\" (UID: \"d573f1ae-c37f-487a-a059-5200647084d4\") " pod="openstack/nova-cell0-db-create-hkzk7" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.589746 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-plxtl" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.614730 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lps9b\" (UniqueName: \"kubernetes.io/projected/d573f1ae-c37f-487a-a059-5200647084d4-kube-api-access-lps9b\") pod \"nova-cell0-db-create-hkzk7\" (UID: \"d573f1ae-c37f-487a-a059-5200647084d4\") " pod="openstack/nova-cell0-db-create-hkzk7" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.683196 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-hkzk7" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.686854 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-adbe-account-create-update-bp9vg"] Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.687847 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-adbe-account-create-update-bp9vg" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.687928 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4-operator-scripts\") pod \"nova-cell1-db-create-m289f\" (UID: \"a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4\") " pod="openstack/nova-cell1-db-create-m289f" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.687988 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvk5z\" (UniqueName: \"kubernetes.io/projected/a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4-kube-api-access-mvk5z\") pod \"nova-cell1-db-create-m289f\" (UID: \"a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4\") " pod="openstack/nova-cell1-db-create-m289f" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.688098 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cpxs\" (UniqueName: \"kubernetes.io/projected/60ddf395-2544-4ebe-b1e2-37321af6438e-kube-api-access-7cpxs\") pod \"nova-api-c7dc-account-create-update-6rchx\" (UID: \"60ddf395-2544-4ebe-b1e2-37321af6438e\") " pod="openstack/nova-api-c7dc-account-create-update-6rchx" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.688124 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ddf395-2544-4ebe-b1e2-37321af6438e-operator-scripts\") pod \"nova-api-c7dc-account-create-update-6rchx\" (UID: \"60ddf395-2544-4ebe-b1e2-37321af6438e\") " pod="openstack/nova-api-c7dc-account-create-update-6rchx" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.688951 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/60ddf395-2544-4ebe-b1e2-37321af6438e-operator-scripts\") pod \"nova-api-c7dc-account-create-update-6rchx\" (UID: \"60ddf395-2544-4ebe-b1e2-37321af6438e\") " pod="openstack/nova-api-c7dc-account-create-update-6rchx" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.690938 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.712174 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-adbe-account-create-update-bp9vg"] Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.713842 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cpxs\" (UniqueName: \"kubernetes.io/projected/60ddf395-2544-4ebe-b1e2-37321af6438e-kube-api-access-7cpxs\") pod \"nova-api-c7dc-account-create-update-6rchx\" (UID: \"60ddf395-2544-4ebe-b1e2-37321af6438e\") " pod="openstack/nova-api-c7dc-account-create-update-6rchx" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.788552 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c7dc-account-create-update-6rchx" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.789504 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkrtm\" (UniqueName: \"kubernetes.io/projected/a725d785-3630-4adc-8417-15fceaecb250-kube-api-access-lkrtm\") pod \"nova-cell0-adbe-account-create-update-bp9vg\" (UID: \"a725d785-3630-4adc-8417-15fceaecb250\") " pod="openstack/nova-cell0-adbe-account-create-update-bp9vg" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.789635 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4-operator-scripts\") pod \"nova-cell1-db-create-m289f\" (UID: \"a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4\") " pod="openstack/nova-cell1-db-create-m289f" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.789676 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvk5z\" (UniqueName: \"kubernetes.io/projected/a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4-kube-api-access-mvk5z\") pod \"nova-cell1-db-create-m289f\" (UID: \"a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4\") " pod="openstack/nova-cell1-db-create-m289f" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.789737 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a725d785-3630-4adc-8417-15fceaecb250-operator-scripts\") pod \"nova-cell0-adbe-account-create-update-bp9vg\" (UID: \"a725d785-3630-4adc-8417-15fceaecb250\") " pod="openstack/nova-cell0-adbe-account-create-update-bp9vg" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.790806 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4-operator-scripts\") pod \"nova-cell1-db-create-m289f\" (UID: \"a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4\") " pod="openstack/nova-cell1-db-create-m289f" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.813012 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvk5z\" (UniqueName: \"kubernetes.io/projected/a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4-kube-api-access-mvk5z\") pod \"nova-cell1-db-create-m289f\" (UID: \"a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4\") " pod="openstack/nova-cell1-db-create-m289f" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.880995 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-e664-account-create-update-42278"] Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.882146 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e664-account-create-update-42278" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.891150 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.891177 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e664-account-create-update-42278"] Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.892276 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkrtm\" (UniqueName: \"kubernetes.io/projected/a725d785-3630-4adc-8417-15fceaecb250-kube-api-access-lkrtm\") pod \"nova-cell0-adbe-account-create-update-bp9vg\" (UID: \"a725d785-3630-4adc-8417-15fceaecb250\") " pod="openstack/nova-cell0-adbe-account-create-update-bp9vg" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.892397 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a725d785-3630-4adc-8417-15fceaecb250-operator-scripts\") pod \"nova-cell0-adbe-account-create-update-bp9vg\" (UID: \"a725d785-3630-4adc-8417-15fceaecb250\") " pod="openstack/nova-cell0-adbe-account-create-update-bp9vg" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.893099 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a725d785-3630-4adc-8417-15fceaecb250-operator-scripts\") pod \"nova-cell0-adbe-account-create-update-bp9vg\" (UID: \"a725d785-3630-4adc-8417-15fceaecb250\") " pod="openstack/nova-cell0-adbe-account-create-update-bp9vg" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.898287 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-m289f" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.915247 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkrtm\" (UniqueName: \"kubernetes.io/projected/a725d785-3630-4adc-8417-15fceaecb250-kube-api-access-lkrtm\") pod \"nova-cell0-adbe-account-create-update-bp9vg\" (UID: \"a725d785-3630-4adc-8417-15fceaecb250\") " pod="openstack/nova-cell0-adbe-account-create-update-bp9vg" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.995247 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4vdm\" (UniqueName: \"kubernetes.io/projected/7d18b334-bb20-43b9-8322-c2e847b74703-kube-api-access-b4vdm\") pod \"nova-cell1-e664-account-create-update-42278\" (UID: \"7d18b334-bb20-43b9-8322-c2e847b74703\") " pod="openstack/nova-cell1-e664-account-create-update-42278" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.995418 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d18b334-bb20-43b9-8322-c2e847b74703-operator-scripts\") 
pod \"nova-cell1-e664-account-create-update-42278\" (UID: \"7d18b334-bb20-43b9-8322-c2e847b74703\") " pod="openstack/nova-cell1-e664-account-create-update-42278" Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.091658 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-plxtl"] Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.097549 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d18b334-bb20-43b9-8322-c2e847b74703-operator-scripts\") pod \"nova-cell1-e664-account-create-update-42278\" (UID: \"7d18b334-bb20-43b9-8322-c2e847b74703\") " pod="openstack/nova-cell1-e664-account-create-update-42278" Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.097654 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4vdm\" (UniqueName: \"kubernetes.io/projected/7d18b334-bb20-43b9-8322-c2e847b74703-kube-api-access-b4vdm\") pod \"nova-cell1-e664-account-create-update-42278\" (UID: \"7d18b334-bb20-43b9-8322-c2e847b74703\") " pod="openstack/nova-cell1-e664-account-create-update-42278" Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.098786 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d18b334-bb20-43b9-8322-c2e847b74703-operator-scripts\") pod \"nova-cell1-e664-account-create-update-42278\" (UID: \"7d18b334-bb20-43b9-8322-c2e847b74703\") " pod="openstack/nova-cell1-e664-account-create-update-42278" Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.106397 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-adbe-account-create-update-bp9vg" Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.116556 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4vdm\" (UniqueName: \"kubernetes.io/projected/7d18b334-bb20-43b9-8322-c2e847b74703-kube-api-access-b4vdm\") pod \"nova-cell1-e664-account-create-update-42278\" (UID: \"7d18b334-bb20-43b9-8322-c2e847b74703\") " pod="openstack/nova-cell1-e664-account-create-update-42278" Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.222562 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e664-account-create-update-42278" Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.295668 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-hkzk7"] Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.387995 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c7dc-account-create-update-6rchx"] Mar 20 08:52:09 crc kubenswrapper[5136]: W0320 08:52:09.399918 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60ddf395_2544_4ebe_b1e2_37321af6438e.slice/crio-49526f5be9e7a87b41848be0a0ee392bbffb7310111fa0630432dec6b922a434 WatchSource:0}: Error finding container 49526f5be9e7a87b41848be0a0ee392bbffb7310111fa0630432dec6b922a434: Status 404 returned error can't find the container with id 49526f5be9e7a87b41848be0a0ee392bbffb7310111fa0630432dec6b922a434 Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.427368 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-hkzk7" event={"ID":"d573f1ae-c37f-487a-a059-5200647084d4","Type":"ContainerStarted","Data":"b26967ed75a461b0b6b84a2af08132666c292fa50f89626a54d371c7b7fd4406"} Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.430399 5136 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-api-db-create-plxtl" event={"ID":"901ef065-f425-4ab7-b726-7d98704a58f8","Type":"ContainerStarted","Data":"c40db219321d83dc20c2ac8a7868d46f48eda3e65baae9eafe26d50e1df17298"} Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.430515 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-plxtl" event={"ID":"901ef065-f425-4ab7-b726-7d98704a58f8","Type":"ContainerStarted","Data":"130bfcae1dadc538ccba696a9309fd26a26d10c1831017abf58931cd6bcfc9d4"} Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.456103 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-plxtl" podStartSLOduration=1.456085183 podStartE2EDuration="1.456085183s" podCreationTimestamp="2026-03-20 08:52:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:52:09.450316074 +0000 UTC m=+7361.709627215" watchObservedRunningTime="2026-03-20 08:52:09.456085183 +0000 UTC m=+7361.715396334" Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.516357 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-m289f"] Mar 20 08:52:09 crc kubenswrapper[5136]: W0320 08:52:09.516708 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4d3e02e_0f46_48dd_b9ef_8cb0135eabb4.slice/crio-cd70c9a0713b408c842d07d992c8c3097039b721f58135470605467a4371ebe3 WatchSource:0}: Error finding container cd70c9a0713b408c842d07d992c8c3097039b721f58135470605467a4371ebe3: Status 404 returned error can't find the container with id cd70c9a0713b408c842d07d992c8c3097039b721f58135470605467a4371ebe3 Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.676889 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-adbe-account-create-update-bp9vg"] Mar 20 08:52:09 crc 
kubenswrapper[5136]: W0320 08:52:09.696556 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda725d785_3630_4adc_8417_15fceaecb250.slice/crio-f9ed4af648c1414222b9a36fceb63b8c45467b10998fb09dffaabe3cb6e99ef1 WatchSource:0}: Error finding container f9ed4af648c1414222b9a36fceb63b8c45467b10998fb09dffaabe3cb6e99ef1: Status 404 returned error can't find the container with id f9ed4af648c1414222b9a36fceb63b8c45467b10998fb09dffaabe3cb6e99ef1 Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.780267 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e664-account-create-update-42278"] Mar 20 08:52:09 crc kubenswrapper[5136]: W0320 08:52:09.781142 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d18b334_bb20_43b9_8322_c2e847b74703.slice/crio-afe76f1345656133cdb14a59e1f33e57bdc93f930586f7e085946f3bc6cc21b7 WatchSource:0}: Error finding container afe76f1345656133cdb14a59e1f33e57bdc93f930586f7e085946f3bc6cc21b7: Status 404 returned error can't find the container with id afe76f1345656133cdb14a59e1f33e57bdc93f930586f7e085946f3bc6cc21b7 Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.439076 5136 generic.go:334] "Generic (PLEG): container finished" podID="901ef065-f425-4ab7-b726-7d98704a58f8" containerID="c40db219321d83dc20c2ac8a7868d46f48eda3e65baae9eafe26d50e1df17298" exitCode=0 Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.439262 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-plxtl" event={"ID":"901ef065-f425-4ab7-b726-7d98704a58f8","Type":"ContainerDied","Data":"c40db219321d83dc20c2ac8a7868d46f48eda3e65baae9eafe26d50e1df17298"} Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.441328 5136 generic.go:334] "Generic (PLEG): container finished" podID="7d18b334-bb20-43b9-8322-c2e847b74703" 
containerID="f77e438e3702b6de098fbd305814d9a4eb3df2f7161e741a8bd1bf247fc8becb" exitCode=0 Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.441473 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e664-account-create-update-42278" event={"ID":"7d18b334-bb20-43b9-8322-c2e847b74703","Type":"ContainerDied","Data":"f77e438e3702b6de098fbd305814d9a4eb3df2f7161e741a8bd1bf247fc8becb"} Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.441503 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e664-account-create-update-42278" event={"ID":"7d18b334-bb20-43b9-8322-c2e847b74703","Type":"ContainerStarted","Data":"afe76f1345656133cdb14a59e1f33e57bdc93f930586f7e085946f3bc6cc21b7"} Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.442835 5136 generic.go:334] "Generic (PLEG): container finished" podID="a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4" containerID="6de711a276196e50b3e83c58fdab583ad6f7407fccb722557535f82c9abd51a7" exitCode=0 Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.442904 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-m289f" event={"ID":"a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4","Type":"ContainerDied","Data":"6de711a276196e50b3e83c58fdab583ad6f7407fccb722557535f82c9abd51a7"} Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.442930 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-m289f" event={"ID":"a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4","Type":"ContainerStarted","Data":"cd70c9a0713b408c842d07d992c8c3097039b721f58135470605467a4371ebe3"} Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.444522 5136 generic.go:334] "Generic (PLEG): container finished" podID="d573f1ae-c37f-487a-a059-5200647084d4" containerID="cee664fd2e2a84523a4d0f3b3405435f0b03db0425ff048065d98c5612016681" exitCode=0 Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.444581 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-db-create-hkzk7" event={"ID":"d573f1ae-c37f-487a-a059-5200647084d4","Type":"ContainerDied","Data":"cee664fd2e2a84523a4d0f3b3405435f0b03db0425ff048065d98c5612016681"} Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.446320 5136 generic.go:334] "Generic (PLEG): container finished" podID="a725d785-3630-4adc-8417-15fceaecb250" containerID="62941df7329d036b75c1f4c804a7915f68955eff793a634ef29d9182d34a9d9d" exitCode=0 Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.446375 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-adbe-account-create-update-bp9vg" event={"ID":"a725d785-3630-4adc-8417-15fceaecb250","Type":"ContainerDied","Data":"62941df7329d036b75c1f4c804a7915f68955eff793a634ef29d9182d34a9d9d"} Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.446394 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-adbe-account-create-update-bp9vg" event={"ID":"a725d785-3630-4adc-8417-15fceaecb250","Type":"ContainerStarted","Data":"f9ed4af648c1414222b9a36fceb63b8c45467b10998fb09dffaabe3cb6e99ef1"} Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.448643 5136 generic.go:334] "Generic (PLEG): container finished" podID="60ddf395-2544-4ebe-b1e2-37321af6438e" containerID="d695b9c2dbcf5b99f4e58724aa314335827d63b932809de7ba7a6c3af214ccca" exitCode=0 Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.448834 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c7dc-account-create-update-6rchx" event={"ID":"60ddf395-2544-4ebe-b1e2-37321af6438e","Type":"ContainerDied","Data":"d695b9c2dbcf5b99f4e58724aa314335827d63b932809de7ba7a6c3af214ccca"} Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.448962 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c7dc-account-create-update-6rchx" 
event={"ID":"60ddf395-2544-4ebe-b1e2-37321af6438e","Type":"ContainerStarted","Data":"49526f5be9e7a87b41848be0a0ee392bbffb7310111fa0630432dec6b922a434"} Mar 20 08:52:11 crc kubenswrapper[5136]: I0320 08:52:11.872399 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-plxtl" Mar 20 08:52:11 crc kubenswrapper[5136]: I0320 08:52:11.958199 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8fv7\" (UniqueName: \"kubernetes.io/projected/901ef065-f425-4ab7-b726-7d98704a58f8-kube-api-access-n8fv7\") pod \"901ef065-f425-4ab7-b726-7d98704a58f8\" (UID: \"901ef065-f425-4ab7-b726-7d98704a58f8\") " Mar 20 08:52:11 crc kubenswrapper[5136]: I0320 08:52:11.958331 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/901ef065-f425-4ab7-b726-7d98704a58f8-operator-scripts\") pod \"901ef065-f425-4ab7-b726-7d98704a58f8\" (UID: \"901ef065-f425-4ab7-b726-7d98704a58f8\") " Mar 20 08:52:11 crc kubenswrapper[5136]: I0320 08:52:11.959030 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/901ef065-f425-4ab7-b726-7d98704a58f8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "901ef065-f425-4ab7-b726-7d98704a58f8" (UID: "901ef065-f425-4ab7-b726-7d98704a58f8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:52:11 crc kubenswrapper[5136]: I0320 08:52:11.964025 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/901ef065-f425-4ab7-b726-7d98704a58f8-kube-api-access-n8fv7" (OuterVolumeSpecName: "kube-api-access-n8fv7") pod "901ef065-f425-4ab7-b726-7d98704a58f8" (UID: "901ef065-f425-4ab7-b726-7d98704a58f8"). InnerVolumeSpecName "kube-api-access-n8fv7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.039136 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-m289f" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.047941 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-hkzk7" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.058493 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-adbe-account-create-update-bp9vg" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.060171 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8fv7\" (UniqueName: \"kubernetes.io/projected/901ef065-f425-4ab7-b726-7d98704a58f8-kube-api-access-n8fv7\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.061092 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/901ef065-f425-4ab7-b726-7d98704a58f8-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.072229 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c7dc-account-create-update-6rchx" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.089666 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e664-account-create-update-42278" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.162144 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a725d785-3630-4adc-8417-15fceaecb250-operator-scripts\") pod \"a725d785-3630-4adc-8417-15fceaecb250\" (UID: \"a725d785-3630-4adc-8417-15fceaecb250\") " Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.162413 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvk5z\" (UniqueName: \"kubernetes.io/projected/a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4-kube-api-access-mvk5z\") pod \"a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4\" (UID: \"a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4\") " Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.162553 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4vdm\" (UniqueName: \"kubernetes.io/projected/7d18b334-bb20-43b9-8322-c2e847b74703-kube-api-access-b4vdm\") pod \"7d18b334-bb20-43b9-8322-c2e847b74703\" (UID: \"7d18b334-bb20-43b9-8322-c2e847b74703\") " Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.162626 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d573f1ae-c37f-487a-a059-5200647084d4-operator-scripts\") pod \"d573f1ae-c37f-487a-a059-5200647084d4\" (UID: \"d573f1ae-c37f-487a-a059-5200647084d4\") " Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.162703 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lps9b\" (UniqueName: \"kubernetes.io/projected/d573f1ae-c37f-487a-a059-5200647084d4-kube-api-access-lps9b\") pod \"d573f1ae-c37f-487a-a059-5200647084d4\" (UID: \"d573f1ae-c37f-487a-a059-5200647084d4\") " Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.162773 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4-operator-scripts\") pod \"a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4\" (UID: \"a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4\") " Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.162871 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkrtm\" (UniqueName: \"kubernetes.io/projected/a725d785-3630-4adc-8417-15fceaecb250-kube-api-access-lkrtm\") pod \"a725d785-3630-4adc-8417-15fceaecb250\" (UID: \"a725d785-3630-4adc-8417-15fceaecb250\") " Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.162941 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ddf395-2544-4ebe-b1e2-37321af6438e-operator-scripts\") pod \"60ddf395-2544-4ebe-b1e2-37321af6438e\" (UID: \"60ddf395-2544-4ebe-b1e2-37321af6438e\") " Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.163036 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d18b334-bb20-43b9-8322-c2e847b74703-operator-scripts\") pod \"7d18b334-bb20-43b9-8322-c2e847b74703\" (UID: \"7d18b334-bb20-43b9-8322-c2e847b74703\") " Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.162978 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a725d785-3630-4adc-8417-15fceaecb250-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a725d785-3630-4adc-8417-15fceaecb250" (UID: "a725d785-3630-4adc-8417-15fceaecb250"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.163184 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cpxs\" (UniqueName: \"kubernetes.io/projected/60ddf395-2544-4ebe-b1e2-37321af6438e-kube-api-access-7cpxs\") pod \"60ddf395-2544-4ebe-b1e2-37321af6438e\" (UID: \"60ddf395-2544-4ebe-b1e2-37321af6438e\") " Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.163349 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4" (UID: "a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.163456 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60ddf395-2544-4ebe-b1e2-37321af6438e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "60ddf395-2544-4ebe-b1e2-37321af6438e" (UID: "60ddf395-2544-4ebe-b1e2-37321af6438e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.163682 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d18b334-bb20-43b9-8322-c2e847b74703-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7d18b334-bb20-43b9-8322-c2e847b74703" (UID: "7d18b334-bb20-43b9-8322-c2e847b74703"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.163677 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d573f1ae-c37f-487a-a059-5200647084d4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d573f1ae-c37f-487a-a059-5200647084d4" (UID: "d573f1ae-c37f-487a-a059-5200647084d4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.164215 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a725d785-3630-4adc-8417-15fceaecb250-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.164236 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d573f1ae-c37f-487a-a059-5200647084d4-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.164249 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.164261 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ddf395-2544-4ebe-b1e2-37321af6438e-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.164272 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d18b334-bb20-43b9-8322-c2e847b74703-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.165584 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/7d18b334-bb20-43b9-8322-c2e847b74703-kube-api-access-b4vdm" (OuterVolumeSpecName: "kube-api-access-b4vdm") pod "7d18b334-bb20-43b9-8322-c2e847b74703" (UID: "7d18b334-bb20-43b9-8322-c2e847b74703"). InnerVolumeSpecName "kube-api-access-b4vdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.165638 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a725d785-3630-4adc-8417-15fceaecb250-kube-api-access-lkrtm" (OuterVolumeSpecName: "kube-api-access-lkrtm") pod "a725d785-3630-4adc-8417-15fceaecb250" (UID: "a725d785-3630-4adc-8417-15fceaecb250"). InnerVolumeSpecName "kube-api-access-lkrtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.166275 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60ddf395-2544-4ebe-b1e2-37321af6438e-kube-api-access-7cpxs" (OuterVolumeSpecName: "kube-api-access-7cpxs") pod "60ddf395-2544-4ebe-b1e2-37321af6438e" (UID: "60ddf395-2544-4ebe-b1e2-37321af6438e"). InnerVolumeSpecName "kube-api-access-7cpxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.172434 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d573f1ae-c37f-487a-a059-5200647084d4-kube-api-access-lps9b" (OuterVolumeSpecName: "kube-api-access-lps9b") pod "d573f1ae-c37f-487a-a059-5200647084d4" (UID: "d573f1ae-c37f-487a-a059-5200647084d4"). InnerVolumeSpecName "kube-api-access-lps9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.172712 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4-kube-api-access-mvk5z" (OuterVolumeSpecName: "kube-api-access-mvk5z") pod "a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4" (UID: "a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4"). InnerVolumeSpecName "kube-api-access-mvk5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.265616 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cpxs\" (UniqueName: \"kubernetes.io/projected/60ddf395-2544-4ebe-b1e2-37321af6438e-kube-api-access-7cpxs\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.265652 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvk5z\" (UniqueName: \"kubernetes.io/projected/a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4-kube-api-access-mvk5z\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.265663 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4vdm\" (UniqueName: \"kubernetes.io/projected/7d18b334-bb20-43b9-8322-c2e847b74703-kube-api-access-b4vdm\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.265672 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lps9b\" (UniqueName: \"kubernetes.io/projected/d573f1ae-c37f-487a-a059-5200647084d4-kube-api-access-lps9b\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.265681 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkrtm\" (UniqueName: \"kubernetes.io/projected/a725d785-3630-4adc-8417-15fceaecb250-kube-api-access-lkrtm\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.492093 5136 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e664-account-create-update-42278" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.492095 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e664-account-create-update-42278" event={"ID":"7d18b334-bb20-43b9-8322-c2e847b74703","Type":"ContainerDied","Data":"afe76f1345656133cdb14a59e1f33e57bdc93f930586f7e085946f3bc6cc21b7"} Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.492213 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afe76f1345656133cdb14a59e1f33e57bdc93f930586f7e085946f3bc6cc21b7" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.499340 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-m289f" event={"ID":"a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4","Type":"ContainerDied","Data":"cd70c9a0713b408c842d07d992c8c3097039b721f58135470605467a4371ebe3"} Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.499377 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd70c9a0713b408c842d07d992c8c3097039b721f58135470605467a4371ebe3" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.499702 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-m289f" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.501424 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-hkzk7" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.501446 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-hkzk7" event={"ID":"d573f1ae-c37f-487a-a059-5200647084d4","Type":"ContainerDied","Data":"b26967ed75a461b0b6b84a2af08132666c292fa50f89626a54d371c7b7fd4406"} Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.501474 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b26967ed75a461b0b6b84a2af08132666c292fa50f89626a54d371c7b7fd4406" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.504528 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-adbe-account-create-update-bp9vg" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.505046 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-adbe-account-create-update-bp9vg" event={"ID":"a725d785-3630-4adc-8417-15fceaecb250","Type":"ContainerDied","Data":"f9ed4af648c1414222b9a36fceb63b8c45467b10998fb09dffaabe3cb6e99ef1"} Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.505091 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9ed4af648c1414222b9a36fceb63b8c45467b10998fb09dffaabe3cb6e99ef1" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.506747 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c7dc-account-create-update-6rchx" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.506783 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c7dc-account-create-update-6rchx" event={"ID":"60ddf395-2544-4ebe-b1e2-37321af6438e","Type":"ContainerDied","Data":"49526f5be9e7a87b41848be0a0ee392bbffb7310111fa0630432dec6b922a434"} Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.506889 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49526f5be9e7a87b41848be0a0ee392bbffb7310111fa0630432dec6b922a434" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.509459 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-plxtl" event={"ID":"901ef065-f425-4ab7-b726-7d98704a58f8","Type":"ContainerDied","Data":"130bfcae1dadc538ccba696a9309fd26a26d10c1831017abf58931cd6bcfc9d4"} Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.509487 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="130bfcae1dadc538ccba696a9309fd26a26d10c1831017abf58931cd6bcfc9d4" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.509501 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-plxtl" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.073663 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4m6bk"] Mar 20 08:52:14 crc kubenswrapper[5136]: E0320 08:52:14.074296 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a725d785-3630-4adc-8417-15fceaecb250" containerName="mariadb-account-create-update" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.074311 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a725d785-3630-4adc-8417-15fceaecb250" containerName="mariadb-account-create-update" Mar 20 08:52:14 crc kubenswrapper[5136]: E0320 08:52:14.074327 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d18b334-bb20-43b9-8322-c2e847b74703" containerName="mariadb-account-create-update" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.074333 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d18b334-bb20-43b9-8322-c2e847b74703" containerName="mariadb-account-create-update" Mar 20 08:52:14 crc kubenswrapper[5136]: E0320 08:52:14.074343 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60ddf395-2544-4ebe-b1e2-37321af6438e" containerName="mariadb-account-create-update" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.074348 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="60ddf395-2544-4ebe-b1e2-37321af6438e" containerName="mariadb-account-create-update" Mar 20 08:52:14 crc kubenswrapper[5136]: E0320 08:52:14.074359 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901ef065-f425-4ab7-b726-7d98704a58f8" containerName="mariadb-database-create" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.074368 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="901ef065-f425-4ab7-b726-7d98704a58f8" containerName="mariadb-database-create" Mar 20 08:52:14 crc kubenswrapper[5136]: E0320 08:52:14.074381 5136 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d573f1ae-c37f-487a-a059-5200647084d4" containerName="mariadb-database-create" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.074388 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d573f1ae-c37f-487a-a059-5200647084d4" containerName="mariadb-database-create" Mar 20 08:52:14 crc kubenswrapper[5136]: E0320 08:52:14.074415 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4" containerName="mariadb-database-create" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.074421 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4" containerName="mariadb-database-create" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.074563 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4" containerName="mariadb-database-create" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.074579 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="60ddf395-2544-4ebe-b1e2-37321af6438e" containerName="mariadb-account-create-update" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.074587 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="901ef065-f425-4ab7-b726-7d98704a58f8" containerName="mariadb-database-create" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.074599 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a725d785-3630-4adc-8417-15fceaecb250" containerName="mariadb-account-create-update" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.074608 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d573f1ae-c37f-487a-a059-5200647084d4" containerName="mariadb-database-create" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.074613 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d18b334-bb20-43b9-8322-c2e847b74703" 
containerName="mariadb-account-create-update" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.075167 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.077718 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nn865" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.077932 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.078235 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.120992 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4m6bk"] Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.209943 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-scripts\") pod \"nova-cell0-conductor-db-sync-4m6bk\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.210028 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4m6bk\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.210073 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-config-data\") pod \"nova-cell0-conductor-db-sync-4m6bk\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.210119 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72gh2\" (UniqueName: \"kubernetes.io/projected/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-kube-api-access-72gh2\") pod \"nova-cell0-conductor-db-sync-4m6bk\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.312620 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72gh2\" (UniqueName: \"kubernetes.io/projected/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-kube-api-access-72gh2\") pod \"nova-cell0-conductor-db-sync-4m6bk\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.312761 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-scripts\") pod \"nova-cell0-conductor-db-sync-4m6bk\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.312861 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4m6bk\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.312943 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-config-data\") pod \"nova-cell0-conductor-db-sync-4m6bk\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.319849 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4m6bk\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.320151 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-scripts\") pod \"nova-cell0-conductor-db-sync-4m6bk\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.321479 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-config-data\") pod \"nova-cell0-conductor-db-sync-4m6bk\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.341522 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72gh2\" (UniqueName: \"kubernetes.io/projected/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-kube-api-access-72gh2\") pod \"nova-cell0-conductor-db-sync-4m6bk\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.401592 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.898135 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4m6bk"] Mar 20 08:52:15 crc kubenswrapper[5136]: I0320 08:52:15.540937 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4m6bk" event={"ID":"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2","Type":"ContainerStarted","Data":"2a31760774dbb636e652192217a1a8550c415372b7e0c26014f90746e934f305"} Mar 20 08:52:24 crc kubenswrapper[5136]: I0320 08:52:24.624355 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4m6bk" event={"ID":"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2","Type":"ContainerStarted","Data":"7b5e974893ba339d7d465bcbfaf4888d7a35fa993cb39d96df7bf3535b3e030c"} Mar 20 08:52:29 crc kubenswrapper[5136]: I0320 08:52:29.660988 5136 generic.go:334] "Generic (PLEG): container finished" podID="7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2" containerID="7b5e974893ba339d7d465bcbfaf4888d7a35fa993cb39d96df7bf3535b3e030c" exitCode=0 Mar 20 08:52:29 crc kubenswrapper[5136]: I0320 08:52:29.661085 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4m6bk" event={"ID":"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2","Type":"ContainerDied","Data":"7b5e974893ba339d7d465bcbfaf4888d7a35fa993cb39d96df7bf3535b3e030c"} Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.029687 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.138647 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-scripts\") pod \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.138750 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-config-data\") pod \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.138810 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-combined-ca-bundle\") pod \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.138994 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72gh2\" (UniqueName: \"kubernetes.io/projected/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-kube-api-access-72gh2\") pod \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.144716 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-scripts" (OuterVolumeSpecName: "scripts") pod "7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2" (UID: "7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.147624 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-kube-api-access-72gh2" (OuterVolumeSpecName: "kube-api-access-72gh2") pod "7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2" (UID: "7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2"). InnerVolumeSpecName "kube-api-access-72gh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.171494 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-config-data" (OuterVolumeSpecName: "config-data") pod "7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2" (UID: "7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.172003 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2" (UID: "7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.240289 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72gh2\" (UniqueName: \"kubernetes.io/projected/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-kube-api-access-72gh2\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.240327 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.240337 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.240348 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.684292 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4m6bk" event={"ID":"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2","Type":"ContainerDied","Data":"2a31760774dbb636e652192217a1a8550c415372b7e0c26014f90746e934f305"} Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.684368 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.684392 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a31760774dbb636e652192217a1a8550c415372b7e0c26014f90746e934f305" Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.831563 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 20 08:52:31 crc kubenswrapper[5136]: E0320 08:52:31.832003 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2" containerName="nova-cell0-conductor-db-sync" Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.832019 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2" containerName="nova-cell0-conductor-db-sync" Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.832187 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2" containerName="nova-cell0-conductor-db-sync" Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.832806 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.834711 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.837984 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nn865" Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.847936 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.854300 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11508a60-8214-4811-898f-9542eee208d5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"11508a60-8214-4811-898f-9542eee208d5\") " pod="openstack/nova-cell0-conductor-0" Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.854591 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11508a60-8214-4811-898f-9542eee208d5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"11508a60-8214-4811-898f-9542eee208d5\") " pod="openstack/nova-cell0-conductor-0" Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.854794 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjh89\" (UniqueName: \"kubernetes.io/projected/11508a60-8214-4811-898f-9542eee208d5-kube-api-access-xjh89\") pod \"nova-cell0-conductor-0\" (UID: \"11508a60-8214-4811-898f-9542eee208d5\") " pod="openstack/nova-cell0-conductor-0" Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.956042 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjh89\" (UniqueName: 
\"kubernetes.io/projected/11508a60-8214-4811-898f-9542eee208d5-kube-api-access-xjh89\") pod \"nova-cell0-conductor-0\" (UID: \"11508a60-8214-4811-898f-9542eee208d5\") " pod="openstack/nova-cell0-conductor-0" Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.956142 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11508a60-8214-4811-898f-9542eee208d5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"11508a60-8214-4811-898f-9542eee208d5\") " pod="openstack/nova-cell0-conductor-0" Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.956193 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11508a60-8214-4811-898f-9542eee208d5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"11508a60-8214-4811-898f-9542eee208d5\") " pod="openstack/nova-cell0-conductor-0" Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.962002 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11508a60-8214-4811-898f-9542eee208d5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"11508a60-8214-4811-898f-9542eee208d5\") " pod="openstack/nova-cell0-conductor-0" Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.962381 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11508a60-8214-4811-898f-9542eee208d5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"11508a60-8214-4811-898f-9542eee208d5\") " pod="openstack/nova-cell0-conductor-0" Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.976272 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjh89\" (UniqueName: \"kubernetes.io/projected/11508a60-8214-4811-898f-9542eee208d5-kube-api-access-xjh89\") pod \"nova-cell0-conductor-0\" (UID: 
\"11508a60-8214-4811-898f-9542eee208d5\") " pod="openstack/nova-cell0-conductor-0" Mar 20 08:52:32 crc kubenswrapper[5136]: I0320 08:52:32.206681 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 20 08:52:32 crc kubenswrapper[5136]: I0320 08:52:32.688990 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 20 08:52:32 crc kubenswrapper[5136]: W0320 08:52:32.696892 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11508a60_8214_4811_898f_9542eee208d5.slice/crio-a3d117a77c0748e946444e66ceaf44864c92bbd1f42406d2aed51d06c0feb90d WatchSource:0}: Error finding container a3d117a77c0748e946444e66ceaf44864c92bbd1f42406d2aed51d06c0feb90d: Status 404 returned error can't find the container with id a3d117a77c0748e946444e66ceaf44864c92bbd1f42406d2aed51d06c0feb90d Mar 20 08:52:33 crc kubenswrapper[5136]: I0320 08:52:33.706292 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"11508a60-8214-4811-898f-9542eee208d5","Type":"ContainerStarted","Data":"2a26b1b064e0a10c158983613bb278d761e4a07e8187dca49260154436ad446c"} Mar 20 08:52:33 crc kubenswrapper[5136]: I0320 08:52:33.706774 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 20 08:52:33 crc kubenswrapper[5136]: I0320 08:52:33.706791 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"11508a60-8214-4811-898f-9542eee208d5","Type":"ContainerStarted","Data":"a3d117a77c0748e946444e66ceaf44864c92bbd1f42406d2aed51d06c0feb90d"} Mar 20 08:52:33 crc kubenswrapper[5136]: I0320 08:52:33.729296 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.72927874 podStartE2EDuration="2.72927874s" 
podCreationTimestamp="2026-03-20 08:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:52:33.721477408 +0000 UTC m=+7385.980788559" watchObservedRunningTime="2026-03-20 08:52:33.72927874 +0000 UTC m=+7385.988589891" Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.232346 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.715153 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-mdczc"] Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.718284 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mdczc" Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.745220 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mdczc"] Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.757268 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.757379 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.860597 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.863150 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.868743 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.872035 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5phn\" (UniqueName: \"kubernetes.io/projected/0869b44d-0a1b-47ae-9836-8940a31bfcf3-kube-api-access-f5phn\") pod \"nova-cell0-cell-mapping-mdczc\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " pod="openstack/nova-cell0-cell-mapping-mdczc" Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.872082 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-scripts\") pod \"nova-cell0-cell-mapping-mdczc\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " pod="openstack/nova-cell0-cell-mapping-mdczc" Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.872104 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55055431-47a0-4022-a32a-5b2b1ef303ac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"55055431-47a0-4022-a32a-5b2b1ef303ac\") " pod="openstack/nova-scheduler-0" Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.872121 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mdczc\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " pod="openstack/nova-cell0-cell-mapping-mdczc" Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.872143 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-config-data\") pod \"nova-cell0-cell-mapping-mdczc\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " pod="openstack/nova-cell0-cell-mapping-mdczc" Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.872225 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55055431-47a0-4022-a32a-5b2b1ef303ac-config-data\") pod \"nova-scheduler-0\" (UID: \"55055431-47a0-4022-a32a-5b2b1ef303ac\") " pod="openstack/nova-scheduler-0" Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.872305 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9f4s\" (UniqueName: \"kubernetes.io/projected/55055431-47a0-4022-a32a-5b2b1ef303ac-kube-api-access-t9f4s\") pod \"nova-scheduler-0\" (UID: \"55055431-47a0-4022-a32a-5b2b1ef303ac\") " pod="openstack/nova-scheduler-0" Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.886249 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.893978 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.895505 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.905120 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.910716 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.976200 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9f4s\" (UniqueName: \"kubernetes.io/projected/55055431-47a0-4022-a32a-5b2b1ef303ac-kube-api-access-t9f4s\") pod \"nova-scheduler-0\" (UID: \"55055431-47a0-4022-a32a-5b2b1ef303ac\") " pod="openstack/nova-scheduler-0" Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.976255 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5phn\" (UniqueName: \"kubernetes.io/projected/0869b44d-0a1b-47ae-9836-8940a31bfcf3-kube-api-access-f5phn\") pod \"nova-cell0-cell-mapping-mdczc\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " pod="openstack/nova-cell0-cell-mapping-mdczc" Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.976285 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-scripts\") pod \"nova-cell0-cell-mapping-mdczc\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " pod="openstack/nova-cell0-cell-mapping-mdczc" Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.976302 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55055431-47a0-4022-a32a-5b2b1ef303ac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"55055431-47a0-4022-a32a-5b2b1ef303ac\") " pod="openstack/nova-scheduler-0" Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.976318 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mdczc\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " pod="openstack/nova-cell0-cell-mapping-mdczc" Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.976340 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-config-data\") pod \"nova-cell0-cell-mapping-mdczc\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " pod="openstack/nova-cell0-cell-mapping-mdczc" Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.976401 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55055431-47a0-4022-a32a-5b2b1ef303ac-config-data\") pod \"nova-scheduler-0\" (UID: \"55055431-47a0-4022-a32a-5b2b1ef303ac\") " pod="openstack/nova-scheduler-0" Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.993894 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:37.995413 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:37.997899 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.003279 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55055431-47a0-4022-a32a-5b2b1ef303ac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"55055431-47a0-4022-a32a-5b2b1ef303ac\") " pod="openstack/nova-scheduler-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.004103 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-scripts\") pod \"nova-cell0-cell-mapping-mdczc\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " pod="openstack/nova-cell0-cell-mapping-mdczc" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.011677 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-config-data\") pod \"nova-cell0-cell-mapping-mdczc\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " pod="openstack/nova-cell0-cell-mapping-mdczc" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.012522 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55055431-47a0-4022-a32a-5b2b1ef303ac-config-data\") pod \"nova-scheduler-0\" (UID: \"55055431-47a0-4022-a32a-5b2b1ef303ac\") " pod="openstack/nova-scheduler-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.013619 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mdczc\" (UID: 
\"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " pod="openstack/nova-cell0-cell-mapping-mdczc" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.027103 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9f4s\" (UniqueName: \"kubernetes.io/projected/55055431-47a0-4022-a32a-5b2b1ef303ac-kube-api-access-t9f4s\") pod \"nova-scheduler-0\" (UID: \"55055431-47a0-4022-a32a-5b2b1ef303ac\") " pod="openstack/nova-scheduler-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.027499 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5phn\" (UniqueName: \"kubernetes.io/projected/0869b44d-0a1b-47ae-9836-8940a31bfcf3-kube-api-access-f5phn\") pod \"nova-cell0-cell-mapping-mdczc\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " pod="openstack/nova-cell0-cell-mapping-mdczc" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.035890 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.077732 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b609af52-e8bb-4279-b472-39d6e572932e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b609af52-e8bb-4279-b472-39d6e572932e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.077789 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b609af52-e8bb-4279-b472-39d6e572932e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b609af52-e8bb-4279-b472-39d6e572932e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.077881 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw4wt\" (UniqueName: 
\"kubernetes.io/projected/b609af52-e8bb-4279-b472-39d6e572932e-kube-api-access-bw4wt\") pod \"nova-cell1-novncproxy-0\" (UID: \"b609af52-e8bb-4279-b472-39d6e572932e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.087079 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mdczc" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.089741 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.091620 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.094502 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.104581 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.186726 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw4wt\" (UniqueName: \"kubernetes.io/projected/b609af52-e8bb-4279-b472-39d6e572932e-kube-api-access-bw4wt\") pod \"nova-cell1-novncproxy-0\" (UID: \"b609af52-e8bb-4279-b472-39d6e572932e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.186879 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09516972-60d9-4cd7-96c6-adf48041a2bb-config-data\") pod \"nova-api-0\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " pod="openstack/nova-api-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.186911 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w25hx\" (UniqueName: 
\"kubernetes.io/projected/09516972-60d9-4cd7-96c6-adf48041a2bb-kube-api-access-w25hx\") pod \"nova-api-0\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " pod="openstack/nova-api-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.187071 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b609af52-e8bb-4279-b472-39d6e572932e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b609af52-e8bb-4279-b472-39d6e572932e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.187158 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b609af52-e8bb-4279-b472-39d6e572932e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b609af52-e8bb-4279-b472-39d6e572932e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.187194 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09516972-60d9-4cd7-96c6-adf48041a2bb-logs\") pod \"nova-api-0\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " pod="openstack/nova-api-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.187218 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09516972-60d9-4cd7-96c6-adf48041a2bb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " pod="openstack/nova-api-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.188471 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.189018 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cb7b48dc-fv895"] Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.192098 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.201550 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb7b48dc-fv895"] Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.204512 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b609af52-e8bb-4279-b472-39d6e572932e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b609af52-e8bb-4279-b472-39d6e572932e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.210547 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b609af52-e8bb-4279-b472-39d6e572932e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b609af52-e8bb-4279-b472-39d6e572932e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.216089 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw4wt\" (UniqueName: \"kubernetes.io/projected/b609af52-e8bb-4279-b472-39d6e572932e-kube-api-access-bw4wt\") pod \"nova-cell1-novncproxy-0\" (UID: \"b609af52-e8bb-4279-b472-39d6e572932e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.220364 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.310751 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09516972-60d9-4cd7-96c6-adf48041a2bb-config-data\") pod \"nova-api-0\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " pod="openstack/nova-api-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.310803 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w25hx\" (UniqueName: \"kubernetes.io/projected/09516972-60d9-4cd7-96c6-adf48041a2bb-kube-api-access-w25hx\") pod \"nova-api-0\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " pod="openstack/nova-api-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.310895 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-ovsdbserver-sb\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.310927 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-ovsdbserver-nb\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.310963 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dls8d\" (UniqueName: \"kubernetes.io/projected/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-kube-api-access-dls8d\") pod \"nova-metadata-0\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") " pod="openstack/nova-metadata-0" Mar 20 08:52:38 crc 
kubenswrapper[5136]: I0320 08:52:38.311000 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") " pod="openstack/nova-metadata-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.311035 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09516972-60d9-4cd7-96c6-adf48041a2bb-logs\") pod \"nova-api-0\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " pod="openstack/nova-api-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.311050 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09516972-60d9-4cd7-96c6-adf48041a2bb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " pod="openstack/nova-api-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.311081 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-dns-svc\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.311116 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-logs\") pod \"nova-metadata-0\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") " pod="openstack/nova-metadata-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.311133 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cn8k\" 
(UniqueName: \"kubernetes.io/projected/a53d28b6-bc47-4aa3-a413-3716651dc331-kube-api-access-7cn8k\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.311169 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-config-data\") pod \"nova-metadata-0\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") " pod="openstack/nova-metadata-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.311189 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-config\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.322665 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09516972-60d9-4cd7-96c6-adf48041a2bb-config-data\") pod \"nova-api-0\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " pod="openstack/nova-api-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.344728 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09516972-60d9-4cd7-96c6-adf48041a2bb-logs\") pod \"nova-api-0\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " pod="openstack/nova-api-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.359472 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09516972-60d9-4cd7-96c6-adf48041a2bb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " 
pod="openstack/nova-api-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.360305 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w25hx\" (UniqueName: \"kubernetes.io/projected/09516972-60d9-4cd7-96c6-adf48041a2bb-kube-api-access-w25hx\") pod \"nova-api-0\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " pod="openstack/nova-api-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.447566 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-dns-svc\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.447624 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-logs\") pod \"nova-metadata-0\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") " pod="openstack/nova-metadata-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.447645 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cn8k\" (UniqueName: \"kubernetes.io/projected/a53d28b6-bc47-4aa3-a413-3716651dc331-kube-api-access-7cn8k\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.447678 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-config\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.447705 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-config-data\") pod \"nova-metadata-0\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") " pod="openstack/nova-metadata-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.447748 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-ovsdbserver-sb\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.447779 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-ovsdbserver-nb\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.447807 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dls8d\" (UniqueName: \"kubernetes.io/projected/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-kube-api-access-dls8d\") pod \"nova-metadata-0\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") " pod="openstack/nova-metadata-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.447881 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") " pod="openstack/nova-metadata-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.457111 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-logs\") pod \"nova-metadata-0\" 
(UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") " pod="openstack/nova-metadata-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.466379 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-dns-svc\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.466987 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") " pod="openstack/nova-metadata-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.471690 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-ovsdbserver-sb\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.471847 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-ovsdbserver-nb\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.471938 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-config-data\") pod \"nova-metadata-0\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") " pod="openstack/nova-metadata-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.475365 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-config\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.488318 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dls8d\" (UniqueName: \"kubernetes.io/projected/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-kube-api-access-dls8d\") pod \"nova-metadata-0\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") " pod="openstack/nova-metadata-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.504094 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cn8k\" (UniqueName: \"kubernetes.io/projected/a53d28b6-bc47-4aa3-a413-3716651dc331-kube-api-access-7cn8k\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.553141 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.639308 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.662842 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.929222 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mdczc"] Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.043567 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 08:52:39 crc kubenswrapper[5136]: W0320 08:52:39.044969 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb609af52_e8bb_4279_b472_39d6e572932e.slice/crio-53b67fe2d8796bd5fc35565ee671e2c0b2d50b59df54ce42cd5329a782fb1ab6 WatchSource:0}: Error finding container 53b67fe2d8796bd5fc35565ee671e2c0b2d50b59df54ce42cd5329a782fb1ab6: Status 404 returned error can't find the container with id 53b67fe2d8796bd5fc35565ee671e2c0b2d50b59df54ce42cd5329a782fb1ab6 Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.053512 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.106016 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bknwr"] Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.107359 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bknwr" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.113006 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.113385 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.118789 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bknwr"] Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.167642 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-scripts\") pod \"nova-cell1-conductor-db-sync-bknwr\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") " pod="openstack/nova-cell1-conductor-db-sync-bknwr" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.168079 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-config-data\") pod \"nova-cell1-conductor-db-sync-bknwr\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") " pod="openstack/nova-cell1-conductor-db-sync-bknwr" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.168166 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsjd8\" (UniqueName: \"kubernetes.io/projected/c10383e2-004c-458c-922b-dd13574f12ff-kube-api-access-nsjd8\") pod \"nova-cell1-conductor-db-sync-bknwr\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") " pod="openstack/nova-cell1-conductor-db-sync-bknwr" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.168243 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bknwr\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") " pod="openstack/nova-cell1-conductor-db-sync-bknwr" Mar 20 08:52:39 crc kubenswrapper[5136]: W0320 08:52:39.189562 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09516972_60d9_4cd7_96c6_adf48041a2bb.slice/crio-c13ae57bbf638b75ff104c2d84bc60b50a61972caea54b71ddd8d960d7e36e25 WatchSource:0}: Error finding container c13ae57bbf638b75ff104c2d84bc60b50a61972caea54b71ddd8d960d7e36e25: Status 404 returned error can't find the container with id c13ae57bbf638b75ff104c2d84bc60b50a61972caea54b71ddd8d960d7e36e25 Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.194131 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.270593 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-config-data\") pod \"nova-cell1-conductor-db-sync-bknwr\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") " pod="openstack/nova-cell1-conductor-db-sync-bknwr" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.270758 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsjd8\" (UniqueName: \"kubernetes.io/projected/c10383e2-004c-458c-922b-dd13574f12ff-kube-api-access-nsjd8\") pod \"nova-cell1-conductor-db-sync-bknwr\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") " pod="openstack/nova-cell1-conductor-db-sync-bknwr" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.270936 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bknwr\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") " pod="openstack/nova-cell1-conductor-db-sync-bknwr" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.271119 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-scripts\") pod \"nova-cell1-conductor-db-sync-bknwr\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") " pod="openstack/nova-cell1-conductor-db-sync-bknwr" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.276637 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bknwr\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") " pod="openstack/nova-cell1-conductor-db-sync-bknwr" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.285651 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-scripts\") pod \"nova-cell1-conductor-db-sync-bknwr\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") " pod="openstack/nova-cell1-conductor-db-sync-bknwr" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.289658 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsjd8\" (UniqueName: \"kubernetes.io/projected/c10383e2-004c-458c-922b-dd13574f12ff-kube-api-access-nsjd8\") pod \"nova-cell1-conductor-db-sync-bknwr\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") " pod="openstack/nova-cell1-conductor-db-sync-bknwr" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.295911 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-config-data\") pod \"nova-cell1-conductor-db-sync-bknwr\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") " pod="openstack/nova-cell1-conductor-db-sync-bknwr"
Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.304917 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 20 08:52:39 crc kubenswrapper[5136]: W0320 08:52:39.309654 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ceb13df_eb0b_4512_aabe_6be6a1ee8631.slice/crio-1559c8d52bc066bd66c6e99fda08a9501f98e03f4428aa6736d7dfcf34cf93c9 WatchSource:0}: Error finding container 1559c8d52bc066bd66c6e99fda08a9501f98e03f4428aa6736d7dfcf34cf93c9: Status 404 returned error can't find the container with id 1559c8d52bc066bd66c6e99fda08a9501f98e03f4428aa6736d7dfcf34cf93c9
Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.321255 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb7b48dc-fv895"]
Mar 20 08:52:39 crc kubenswrapper[5136]: W0320 08:52:39.331526 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda53d28b6_bc47_4aa3_a413_3716651dc331.slice/crio-3a00b31f5b938544931b8ec8a179f9b8845385e169e4d748a404beb81b702299 WatchSource:0}: Error finding container 3a00b31f5b938544931b8ec8a179f9b8845385e169e4d748a404beb81b702299: Status 404 returned error can't find the container with id 3a00b31f5b938544931b8ec8a179f9b8845385e169e4d748a404beb81b702299
Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.363145 5136 scope.go:117] "RemoveContainer" containerID="ee7fc0aa7d70c450967fddf706c56fe4af54a2ede94af9ae1aa1f75f2c772efc"
Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.469982 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bknwr"
Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.843627 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mdczc" event={"ID":"0869b44d-0a1b-47ae-9836-8940a31bfcf3","Type":"ContainerStarted","Data":"ed9ea6d3f8369f00e748cdbd6f737c4f4b838eb8db7325e29aaf558dc66f2d6f"}
Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.843956 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mdczc" event={"ID":"0869b44d-0a1b-47ae-9836-8940a31bfcf3","Type":"ContainerStarted","Data":"5c9a7d2b6bd777c84e35c9869b44075a2797d9ec171923d54abd4043df22cad2"}
Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.850565 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09516972-60d9-4cd7-96c6-adf48041a2bb","Type":"ContainerStarted","Data":"c13ae57bbf638b75ff104c2d84bc60b50a61972caea54b71ddd8d960d7e36e25"}
Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.852862 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b609af52-e8bb-4279-b472-39d6e572932e","Type":"ContainerStarted","Data":"53b67fe2d8796bd5fc35565ee671e2c0b2d50b59df54ce42cd5329a782fb1ab6"}
Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.854794 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"55055431-47a0-4022-a32a-5b2b1ef303ac","Type":"ContainerStarted","Data":"62d977f140d52d185ad8e335d0e34478d3fe4528e116c9595298eb659df62cab"}
Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.856947 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ceb13df-eb0b-4512-aabe-6be6a1ee8631","Type":"ContainerStarted","Data":"1559c8d52bc066bd66c6e99fda08a9501f98e03f4428aa6736d7dfcf34cf93c9"}
Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.859300 5136 generic.go:334] "Generic (PLEG): container finished" podID="a53d28b6-bc47-4aa3-a413-3716651dc331" containerID="e990be7626e99b384aaa45d32667345fde2ab271f0835cd098e333ff70311fa2" exitCode=0
Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.859352 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb7b48dc-fv895" event={"ID":"a53d28b6-bc47-4aa3-a413-3716651dc331","Type":"ContainerDied","Data":"e990be7626e99b384aaa45d32667345fde2ab271f0835cd098e333ff70311fa2"}
Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.859380 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb7b48dc-fv895" event={"ID":"a53d28b6-bc47-4aa3-a413-3716651dc331","Type":"ContainerStarted","Data":"3a00b31f5b938544931b8ec8a179f9b8845385e169e4d748a404beb81b702299"}
Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.865929 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-mdczc" podStartSLOduration=2.865906832 podStartE2EDuration="2.865906832s" podCreationTimestamp="2026-03-20 08:52:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:52:39.861705651 +0000 UTC m=+7392.121016822" watchObservedRunningTime="2026-03-20 08:52:39.865906832 +0000 UTC m=+7392.125217983"
Mar 20 08:52:40 crc kubenswrapper[5136]: I0320 08:52:40.039882 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bknwr"]
Mar 20 08:52:40 crc kubenswrapper[5136]: W0320 08:52:40.052512 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc10383e2_004c_458c_922b_dd13574f12ff.slice/crio-4b7b62f31742d312679da735171fe1e4fe4477875a89de95b7eecb334658b649 WatchSource:0}: Error finding container 4b7b62f31742d312679da735171fe1e4fe4477875a89de95b7eecb334658b649: Status 404 returned error can't find the container with id 4b7b62f31742d312679da735171fe1e4fe4477875a89de95b7eecb334658b649
Mar 20 08:52:40 crc kubenswrapper[5136]: I0320 08:52:40.869406 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb7b48dc-fv895" event={"ID":"a53d28b6-bc47-4aa3-a413-3716651dc331","Type":"ContainerStarted","Data":"d132f78c826ee9737416e6e3fda2794793e3627aa028783f261201774977171d"}
Mar 20 08:52:40 crc kubenswrapper[5136]: I0320 08:52:40.872417 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cb7b48dc-fv895"
Mar 20 08:52:40 crc kubenswrapper[5136]: I0320 08:52:40.877684 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bknwr" event={"ID":"c10383e2-004c-458c-922b-dd13574f12ff","Type":"ContainerStarted","Data":"036a56dc8be2b1448ecd4eaee7ae6cfc9fce54b35893d14784a5e2a194d245a2"}
Mar 20 08:52:40 crc kubenswrapper[5136]: I0320 08:52:40.877720 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bknwr" event={"ID":"c10383e2-004c-458c-922b-dd13574f12ff","Type":"ContainerStarted","Data":"4b7b62f31742d312679da735171fe1e4fe4477875a89de95b7eecb334658b649"}
Mar 20 08:52:40 crc kubenswrapper[5136]: I0320 08:52:40.908504 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cb7b48dc-fv895" podStartSLOduration=2.9084849200000003 podStartE2EDuration="2.90848492s" podCreationTimestamp="2026-03-20 08:52:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:52:40.894979362 +0000 UTC m=+7393.154290513" watchObservedRunningTime="2026-03-20 08:52:40.90848492 +0000 UTC m=+7393.167796071"
Mar 20 08:52:40 crc kubenswrapper[5136]: I0320 08:52:40.917193 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-bknwr" podStartSLOduration=1.917172329 podStartE2EDuration="1.917172329s" podCreationTimestamp="2026-03-20 08:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:52:40.908911163 +0000 UTC m=+7393.168222314" watchObservedRunningTime="2026-03-20 08:52:40.917172329 +0000 UTC m=+7393.176483500"
Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.287667 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.324675 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.893956 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ceb13df-eb0b-4512-aabe-6be6a1ee8631","Type":"ContainerStarted","Data":"f82be34aae682e8c29705602a5f53ed7c575b686a407aea8e3a4986b123ff8de"}
Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.894000 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ceb13df-eb0b-4512-aabe-6be6a1ee8631","Type":"ContainerStarted","Data":"429dc8682418b5dd369adad65e15254f5660e5ac47e728d920acf519227996a5"}
Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.894069 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3ceb13df-eb0b-4512-aabe-6be6a1ee8631" containerName="nova-metadata-metadata" containerID="cri-o://f82be34aae682e8c29705602a5f53ed7c575b686a407aea8e3a4986b123ff8de" gracePeriod=30
Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.894045 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3ceb13df-eb0b-4512-aabe-6be6a1ee8631" containerName="nova-metadata-log" containerID="cri-o://429dc8682418b5dd369adad65e15254f5660e5ac47e728d920acf519227996a5" gracePeriod=30
Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.897123 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09516972-60d9-4cd7-96c6-adf48041a2bb","Type":"ContainerStarted","Data":"1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279"}
Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.897172 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09516972-60d9-4cd7-96c6-adf48041a2bb","Type":"ContainerStarted","Data":"42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82"}
Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.900446 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b609af52-e8bb-4279-b472-39d6e572932e","Type":"ContainerStarted","Data":"997cb1e45862f8242bc1ed8efcbcfbaf8f8a8c5b47fc4938e5bd8fa980e12e82"}
Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.900601 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="b609af52-e8bb-4279-b472-39d6e572932e" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://997cb1e45862f8242bc1ed8efcbcfbaf8f8a8c5b47fc4938e5bd8fa980e12e82" gracePeriod=30
Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.904156 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"55055431-47a0-4022-a32a-5b2b1ef303ac","Type":"ContainerStarted","Data":"332a221e0ff579954272eaf6146f26f2dac553c0d82d235c294584854974af51"}
Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.917423 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.27445702 podStartE2EDuration="4.917406417s" podCreationTimestamp="2026-03-20 08:52:38 +0000 UTC" firstStartedPulling="2026-03-20 08:52:39.31612679 +0000 UTC m=+7391.575437941" lastFinishedPulling="2026-03-20 08:52:41.959076187 +0000 UTC m=+7394.218387338" observedRunningTime="2026-03-20 08:52:42.910368939 +0000 UTC m=+7395.169680110" watchObservedRunningTime="2026-03-20 08:52:42.917406417 +0000 UTC m=+7395.176717568"
Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.932727 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.035846693 podStartE2EDuration="5.932707481s" podCreationTimestamp="2026-03-20 08:52:37 +0000 UTC" firstStartedPulling="2026-03-20 08:52:39.051209108 +0000 UTC m=+7391.310520259" lastFinishedPulling="2026-03-20 08:52:41.948069906 +0000 UTC m=+7394.207381047" observedRunningTime="2026-03-20 08:52:42.928269484 +0000 UTC m=+7395.187580645" watchObservedRunningTime="2026-03-20 08:52:42.932707481 +0000 UTC m=+7395.192018642"
Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.956435 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.199406427 podStartE2EDuration="5.956417125s" podCreationTimestamp="2026-03-20 08:52:37 +0000 UTC" firstStartedPulling="2026-03-20 08:52:39.191313716 +0000 UTC m=+7391.450624867" lastFinishedPulling="2026-03-20 08:52:41.948324414 +0000 UTC m=+7394.207635565" observedRunningTime="2026-03-20 08:52:42.945083214 +0000 UTC m=+7395.204394365" watchObservedRunningTime="2026-03-20 08:52:42.956417125 +0000 UTC m=+7395.215728276"
Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.965396 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.070733054 podStartE2EDuration="5.965375873s" podCreationTimestamp="2026-03-20 08:52:37 +0000 UTC" firstStartedPulling="2026-03-20 08:52:39.052600401 +0000 UTC m=+7391.311911552" lastFinishedPulling="2026-03-20 08:52:41.94724322 +0000 UTC m=+7394.206554371" observedRunningTime="2026-03-20 08:52:42.963339009 +0000 UTC m=+7395.222650160" watchObservedRunningTime="2026-03-20 08:52:42.965375873 +0000 UTC m=+7395.224687024"
Mar 20 08:52:43 crc kubenswrapper[5136]: I0320 08:52:43.189431 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Mar 20 08:52:43 crc kubenswrapper[5136]: I0320 08:52:43.221561 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Mar 20 08:52:43 crc kubenswrapper[5136]: I0320 08:52:43.913681 5136 generic.go:334] "Generic (PLEG): container finished" podID="3ceb13df-eb0b-4512-aabe-6be6a1ee8631" containerID="f82be34aae682e8c29705602a5f53ed7c575b686a407aea8e3a4986b123ff8de" exitCode=0
Mar 20 08:52:43 crc kubenswrapper[5136]: I0320 08:52:43.913714 5136 generic.go:334] "Generic (PLEG): container finished" podID="3ceb13df-eb0b-4512-aabe-6be6a1ee8631" containerID="429dc8682418b5dd369adad65e15254f5660e5ac47e728d920acf519227996a5" exitCode=143
Mar 20 08:52:43 crc kubenswrapper[5136]: I0320 08:52:43.913781 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ceb13df-eb0b-4512-aabe-6be6a1ee8631","Type":"ContainerDied","Data":"f82be34aae682e8c29705602a5f53ed7c575b686a407aea8e3a4986b123ff8de"}
Mar 20 08:52:43 crc kubenswrapper[5136]: I0320 08:52:43.913878 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ceb13df-eb0b-4512-aabe-6be6a1ee8631","Type":"ContainerDied","Data":"429dc8682418b5dd369adad65e15254f5660e5ac47e728d920acf519227996a5"}
Mar 20 08:52:43 crc kubenswrapper[5136]: I0320 08:52:43.913889 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ceb13df-eb0b-4512-aabe-6be6a1ee8631","Type":"ContainerDied","Data":"1559c8d52bc066bd66c6e99fda08a9501f98e03f4428aa6736d7dfcf34cf93c9"}
Mar 20 08:52:43 crc kubenswrapper[5136]: I0320 08:52:43.913925 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1559c8d52bc066bd66c6e99fda08a9501f98e03f4428aa6736d7dfcf34cf93c9"
Mar 20 08:52:43 crc kubenswrapper[5136]: I0320 08:52:43.915337 5136 generic.go:334] "Generic (PLEG): container finished" podID="c10383e2-004c-458c-922b-dd13574f12ff" containerID="036a56dc8be2b1448ecd4eaee7ae6cfc9fce54b35893d14784a5e2a194d245a2" exitCode=0
Mar 20 08:52:43 crc kubenswrapper[5136]: I0320 08:52:43.915366 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bknwr" event={"ID":"c10383e2-004c-458c-922b-dd13574f12ff","Type":"ContainerDied","Data":"036a56dc8be2b1448ecd4eaee7ae6cfc9fce54b35893d14784a5e2a194d245a2"}
Mar 20 08:52:43 crc kubenswrapper[5136]: I0320 08:52:43.947514 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.071134 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dls8d\" (UniqueName: \"kubernetes.io/projected/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-kube-api-access-dls8d\") pod \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") "
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.071549 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-config-data\") pod \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") "
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.071625 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-logs\") pod \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") "
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.071703 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-combined-ca-bundle\") pod \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") "
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.072322 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-logs" (OuterVolumeSpecName: "logs") pod "3ceb13df-eb0b-4512-aabe-6be6a1ee8631" (UID: "3ceb13df-eb0b-4512-aabe-6be6a1ee8631"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.092240 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-kube-api-access-dls8d" (OuterVolumeSpecName: "kube-api-access-dls8d") pod "3ceb13df-eb0b-4512-aabe-6be6a1ee8631" (UID: "3ceb13df-eb0b-4512-aabe-6be6a1ee8631"). InnerVolumeSpecName "kube-api-access-dls8d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.095852 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-config-data" (OuterVolumeSpecName: "config-data") pod "3ceb13df-eb0b-4512-aabe-6be6a1ee8631" (UID: "3ceb13df-eb0b-4512-aabe-6be6a1ee8631"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.106558 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ceb13df-eb0b-4512-aabe-6be6a1ee8631" (UID: "3ceb13df-eb0b-4512-aabe-6be6a1ee8631"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.173548 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.173754 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dls8d\" (UniqueName: \"kubernetes.io/projected/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-kube-api-access-dls8d\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.173769 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.173779 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-logs\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.927921 5136 generic.go:334] "Generic (PLEG): container finished" podID="0869b44d-0a1b-47ae-9836-8940a31bfcf3" containerID="ed9ea6d3f8369f00e748cdbd6f737c4f4b838eb8db7325e29aaf558dc66f2d6f" exitCode=0
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.928022 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.927975 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mdczc" event={"ID":"0869b44d-0a1b-47ae-9836-8940a31bfcf3","Type":"ContainerDied","Data":"ed9ea6d3f8369f00e748cdbd6f737c4f4b838eb8db7325e29aaf558dc66f2d6f"}
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.990278 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.001444 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.012115 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 20 08:52:45 crc kubenswrapper[5136]: E0320 08:52:45.012775 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ceb13df-eb0b-4512-aabe-6be6a1ee8631" containerName="nova-metadata-log"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.012806 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ceb13df-eb0b-4512-aabe-6be6a1ee8631" containerName="nova-metadata-log"
Mar 20 08:52:45 crc kubenswrapper[5136]: E0320 08:52:45.013044 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ceb13df-eb0b-4512-aabe-6be6a1ee8631" containerName="nova-metadata-metadata"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.013065 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ceb13df-eb0b-4512-aabe-6be6a1ee8631" containerName="nova-metadata-metadata"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.013330 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ceb13df-eb0b-4512-aabe-6be6a1ee8631" containerName="nova-metadata-metadata"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.013375 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ceb13df-eb0b-4512-aabe-6be6a1ee8631" containerName="nova-metadata-log"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.015006 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.024806 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.025090 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.035943 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.193545 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.193947 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-config-data\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.194131 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swm2w\" (UniqueName: \"kubernetes.io/projected/6713f83b-29eb-4f81-a24c-fbc604bce554-kube-api-access-swm2w\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.194177 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6713f83b-29eb-4f81-a24c-fbc604bce554-logs\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.194212 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.286285 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bknwr"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.295331 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.295631 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.296193 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-config-data\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.296344 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swm2w\" (UniqueName: \"kubernetes.io/projected/6713f83b-29eb-4f81-a24c-fbc604bce554-kube-api-access-swm2w\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.296432 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6713f83b-29eb-4f81-a24c-fbc604bce554-logs\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.296775 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6713f83b-29eb-4f81-a24c-fbc604bce554-logs\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.317735 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-config-data\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.318537 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.319752 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swm2w\" (UniqueName: \"kubernetes.io/projected/6713f83b-29eb-4f81-a24c-fbc604bce554-kube-api-access-swm2w\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.329007 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.350074 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.402914 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsjd8\" (UniqueName: \"kubernetes.io/projected/c10383e2-004c-458c-922b-dd13574f12ff-kube-api-access-nsjd8\") pod \"c10383e2-004c-458c-922b-dd13574f12ff\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") "
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.403069 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-combined-ca-bundle\") pod \"c10383e2-004c-458c-922b-dd13574f12ff\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") "
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.403110 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-config-data\") pod \"c10383e2-004c-458c-922b-dd13574f12ff\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") "
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.403185 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-scripts\") pod \"c10383e2-004c-458c-922b-dd13574f12ff\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") "
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.407610 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-scripts" (OuterVolumeSpecName: "scripts") pod "c10383e2-004c-458c-922b-dd13574f12ff" (UID: "c10383e2-004c-458c-922b-dd13574f12ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.408306 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c10383e2-004c-458c-922b-dd13574f12ff-kube-api-access-nsjd8" (OuterVolumeSpecName: "kube-api-access-nsjd8") pod "c10383e2-004c-458c-922b-dd13574f12ff" (UID: "c10383e2-004c-458c-922b-dd13574f12ff"). InnerVolumeSpecName "kube-api-access-nsjd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.428355 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c10383e2-004c-458c-922b-dd13574f12ff" (UID: "c10383e2-004c-458c-922b-dd13574f12ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.432080 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-config-data" (OuterVolumeSpecName: "config-data") pod "c10383e2-004c-458c-922b-dd13574f12ff" (UID: "c10383e2-004c-458c-922b-dd13574f12ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.505292 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsjd8\" (UniqueName: \"kubernetes.io/projected/c10383e2-004c-458c-922b-dd13574f12ff-kube-api-access-nsjd8\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.505639 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.505652 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.505663 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:45 crc kubenswrapper[5136]: W0320 08:52:45.777832 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6713f83b_29eb_4f81_a24c_fbc604bce554.slice/crio-7c5086e883b6c4d88baa490a424515f21d222c4b5586728c1daed8a11d7670ee WatchSource:0}: Error finding container 7c5086e883b6c4d88baa490a424515f21d222c4b5586728c1daed8a11d7670ee: Status 404 returned error can't find the container with id 7c5086e883b6c4d88baa490a424515f21d222c4b5586728c1daed8a11d7670ee
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.782359 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.951522 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6713f83b-29eb-4f81-a24c-fbc604bce554","Type":"ContainerStarted","Data":"7c5086e883b6c4d88baa490a424515f21d222c4b5586728c1daed8a11d7670ee"}
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.959863 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bknwr"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.962673 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bknwr" event={"ID":"c10383e2-004c-458c-922b-dd13574f12ff","Type":"ContainerDied","Data":"4b7b62f31742d312679da735171fe1e4fe4477875a89de95b7eecb334658b649"}
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.962725 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b7b62f31742d312679da735171fe1e4fe4477875a89de95b7eecb334658b649"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.030527 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 20 08:52:46 crc kubenswrapper[5136]: E0320 08:52:46.031027 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10383e2-004c-458c-922b-dd13574f12ff" containerName="nova-cell1-conductor-db-sync"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.031050 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10383e2-004c-458c-922b-dd13574f12ff" containerName="nova-cell1-conductor-db-sync"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.031268 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c10383e2-004c-458c-922b-dd13574f12ff" containerName="nova-cell1-conductor-db-sync"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.032324 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.036459 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.053355 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.139479 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea7881c5-b719-41b0-8046-249f7fdb6f61-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ea7881c5-b719-41b0-8046-249f7fdb6f61\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.139554 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92bl4\" (UniqueName: \"kubernetes.io/projected/ea7881c5-b719-41b0-8046-249f7fdb6f61-kube-api-access-92bl4\") pod \"nova-cell1-conductor-0\" (UID: \"ea7881c5-b719-41b0-8046-249f7fdb6f61\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.139622 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea7881c5-b719-41b0-8046-249f7fdb6f61-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ea7881c5-b719-41b0-8046-249f7fdb6f61\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.241210 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea7881c5-b719-41b0-8046-249f7fdb6f61-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ea7881c5-b719-41b0-8046-249f7fdb6f61\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.241312 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea7881c5-b719-41b0-8046-249f7fdb6f61-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ea7881c5-b719-41b0-8046-249f7fdb6f61\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.241374 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92bl4\" (UniqueName: \"kubernetes.io/projected/ea7881c5-b719-41b0-8046-249f7fdb6f61-kube-api-access-92bl4\") pod \"nova-cell1-conductor-0\" (UID: \"ea7881c5-b719-41b0-8046-249f7fdb6f61\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.244898 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea7881c5-b719-41b0-8046-249f7fdb6f61-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ea7881c5-b719-41b0-8046-249f7fdb6f61\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.251534 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea7881c5-b719-41b0-8046-249f7fdb6f61-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ea7881c5-b719-41b0-8046-249f7fdb6f61\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.256207 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92bl4\" (UniqueName: \"kubernetes.io/projected/ea7881c5-b719-41b0-8046-249f7fdb6f61-kube-api-access-92bl4\") pod \"nova-cell1-conductor-0\" (UID: \"ea7881c5-b719-41b0-8046-249f7fdb6f61\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.344037 5136 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mdczc" Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.360032 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.409884 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ceb13df-eb0b-4512-aabe-6be6a1ee8631" path="/var/lib/kubelet/pods/3ceb13df-eb0b-4512-aabe-6be6a1ee8631/volumes" Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.443285 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-config-data\") pod \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.443649 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-combined-ca-bundle\") pod \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.443708 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5phn\" (UniqueName: \"kubernetes.io/projected/0869b44d-0a1b-47ae-9836-8940a31bfcf3-kube-api-access-f5phn\") pod \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.443797 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-scripts\") pod \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.448322 5136 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0869b44d-0a1b-47ae-9836-8940a31bfcf3-kube-api-access-f5phn" (OuterVolumeSpecName: "kube-api-access-f5phn") pod "0869b44d-0a1b-47ae-9836-8940a31bfcf3" (UID: "0869b44d-0a1b-47ae-9836-8940a31bfcf3"). InnerVolumeSpecName "kube-api-access-f5phn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.452025 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-scripts" (OuterVolumeSpecName: "scripts") pod "0869b44d-0a1b-47ae-9836-8940a31bfcf3" (UID: "0869b44d-0a1b-47ae-9836-8940a31bfcf3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.476387 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-config-data" (OuterVolumeSpecName: "config-data") pod "0869b44d-0a1b-47ae-9836-8940a31bfcf3" (UID: "0869b44d-0a1b-47ae-9836-8940a31bfcf3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.481968 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0869b44d-0a1b-47ae-9836-8940a31bfcf3" (UID: "0869b44d-0a1b-47ae-9836-8940a31bfcf3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.546369 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.546406 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.546419 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.546433 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5phn\" (UniqueName: \"kubernetes.io/projected/0869b44d-0a1b-47ae-9836-8940a31bfcf3-kube-api-access-f5phn\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.823491 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 20 08:52:46 crc kubenswrapper[5136]: W0320 08:52:46.825654 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea7881c5_b719_41b0_8046_249f7fdb6f61.slice/crio-0d3c98cd3c1e5c913df6b6870b3ffb39782f9b6156d6f724a7d48a2b4387a567 WatchSource:0}: Error finding container 0d3c98cd3c1e5c913df6b6870b3ffb39782f9b6156d6f724a7d48a2b4387a567: Status 404 returned error can't find the container with id 0d3c98cd3c1e5c913df6b6870b3ffb39782f9b6156d6f724a7d48a2b4387a567 Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.979065 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mdczc" Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.979564 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mdczc" event={"ID":"0869b44d-0a1b-47ae-9836-8940a31bfcf3","Type":"ContainerDied","Data":"5c9a7d2b6bd777c84e35c9869b44075a2797d9ec171923d54abd4043df22cad2"} Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.980574 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c9a7d2b6bd777c84e35c9869b44075a2797d9ec171923d54abd4043df22cad2" Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.990965 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ea7881c5-b719-41b0-8046-249f7fdb6f61","Type":"ContainerStarted","Data":"0d3c98cd3c1e5c913df6b6870b3ffb39782f9b6156d6f724a7d48a2b4387a567"} Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.998546 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6713f83b-29eb-4f81-a24c-fbc604bce554","Type":"ContainerStarted","Data":"d2b7b8267a2421e84ce96b443fe40959edd3068e2febff192b19ffb01873d453"} Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.998591 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6713f83b-29eb-4f81-a24c-fbc604bce554","Type":"ContainerStarted","Data":"96278179e56c5d2348b8c914cce3f120d3445bc4fb5c442e6054e1df59c3b3ea"} Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.029839 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.029804968 podStartE2EDuration="3.029804968s" podCreationTimestamp="2026-03-20 08:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:52:47.018468297 +0000 UTC m=+7399.277779468" 
watchObservedRunningTime="2026-03-20 08:52:47.029804968 +0000 UTC m=+7399.289116119" Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.155783 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.156008 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="09516972-60d9-4cd7-96c6-adf48041a2bb" containerName="nova-api-log" containerID="cri-o://42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82" gracePeriod=30 Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.156197 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="09516972-60d9-4cd7-96c6-adf48041a2bb" containerName="nova-api-api" containerID="cri-o://1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279" gracePeriod=30 Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.169618 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.169869 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="55055431-47a0-4022-a32a-5b2b1ef303ac" containerName="nova-scheduler-scheduler" containerID="cri-o://332a221e0ff579954272eaf6146f26f2dac553c0d82d235c294584854974af51" gracePeriod=30 Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.181450 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.742808 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.776600 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09516972-60d9-4cd7-96c6-adf48041a2bb-combined-ca-bundle\") pod \"09516972-60d9-4cd7-96c6-adf48041a2bb\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.776715 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09516972-60d9-4cd7-96c6-adf48041a2bb-config-data\") pod \"09516972-60d9-4cd7-96c6-adf48041a2bb\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.776828 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w25hx\" (UniqueName: \"kubernetes.io/projected/09516972-60d9-4cd7-96c6-adf48041a2bb-kube-api-access-w25hx\") pod \"09516972-60d9-4cd7-96c6-adf48041a2bb\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.776905 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09516972-60d9-4cd7-96c6-adf48041a2bb-logs\") pod \"09516972-60d9-4cd7-96c6-adf48041a2bb\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.777437 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09516972-60d9-4cd7-96c6-adf48041a2bb-logs" (OuterVolumeSpecName: "logs") pod "09516972-60d9-4cd7-96c6-adf48041a2bb" (UID: "09516972-60d9-4cd7-96c6-adf48041a2bb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.782759 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09516972-60d9-4cd7-96c6-adf48041a2bb-kube-api-access-w25hx" (OuterVolumeSpecName: "kube-api-access-w25hx") pod "09516972-60d9-4cd7-96c6-adf48041a2bb" (UID: "09516972-60d9-4cd7-96c6-adf48041a2bb"). InnerVolumeSpecName "kube-api-access-w25hx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.801366 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09516972-60d9-4cd7-96c6-adf48041a2bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09516972-60d9-4cd7-96c6-adf48041a2bb" (UID: "09516972-60d9-4cd7-96c6-adf48041a2bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.802103 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09516972-60d9-4cd7-96c6-adf48041a2bb-config-data" (OuterVolumeSpecName: "config-data") pod "09516972-60d9-4cd7-96c6-adf48041a2bb" (UID: "09516972-60d9-4cd7-96c6-adf48041a2bb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.879206 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09516972-60d9-4cd7-96c6-adf48041a2bb-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.879894 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w25hx\" (UniqueName: \"kubernetes.io/projected/09516972-60d9-4cd7-96c6-adf48041a2bb-kube-api-access-w25hx\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.879937 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09516972-60d9-4cd7-96c6-adf48041a2bb-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.879949 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09516972-60d9-4cd7-96c6-adf48041a2bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.008174 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ea7881c5-b719-41b0-8046-249f7fdb6f61","Type":"ContainerStarted","Data":"5df9d903ae57ec8baad2fe6c51be0e13f0c8a558bfc5471ea6ef07feb8e164f7"} Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.008304 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.010301 5136 generic.go:334] "Generic (PLEG): container finished" podID="09516972-60d9-4cd7-96c6-adf48041a2bb" containerID="1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279" exitCode=0 Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.010349 5136 generic.go:334] "Generic (PLEG): container finished" 
podID="09516972-60d9-4cd7-96c6-adf48041a2bb" containerID="42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82" exitCode=143 Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.010882 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.010942 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09516972-60d9-4cd7-96c6-adf48041a2bb","Type":"ContainerDied","Data":"1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279"} Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.010972 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09516972-60d9-4cd7-96c6-adf48041a2bb","Type":"ContainerDied","Data":"42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82"} Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.010987 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09516972-60d9-4cd7-96c6-adf48041a2bb","Type":"ContainerDied","Data":"c13ae57bbf638b75ff104c2d84bc60b50a61972caea54b71ddd8d960d7e36e25"} Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.011011 5136 scope.go:117] "RemoveContainer" containerID="1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.026173 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.026153225 podStartE2EDuration="3.026153225s" podCreationTimestamp="2026-03-20 08:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:52:48.022735049 +0000 UTC m=+7400.282046200" watchObservedRunningTime="2026-03-20 08:52:48.026153225 +0000 UTC m=+7400.285464376" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.040306 
5136 scope.go:117] "RemoveContainer" containerID="42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.060802 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.064499 5136 scope.go:117] "RemoveContainer" containerID="1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279" Mar 20 08:52:48 crc kubenswrapper[5136]: E0320 08:52:48.065748 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279\": container with ID starting with 1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279 not found: ID does not exist" containerID="1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.065784 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279"} err="failed to get container status \"1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279\": rpc error: code = NotFound desc = could not find container \"1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279\": container with ID starting with 1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279 not found: ID does not exist" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.065825 5136 scope.go:117] "RemoveContainer" containerID="42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82" Mar 20 08:52:48 crc kubenswrapper[5136]: E0320 08:52:48.066034 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82\": container with ID starting with 
42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82 not found: ID does not exist" containerID="42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.066057 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82"} err="failed to get container status \"42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82\": rpc error: code = NotFound desc = could not find container \"42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82\": container with ID starting with 42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82 not found: ID does not exist" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.066075 5136 scope.go:117] "RemoveContainer" containerID="1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.066261 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279"} err="failed to get container status \"1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279\": rpc error: code = NotFound desc = could not find container \"1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279\": container with ID starting with 1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279 not found: ID does not exist" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.066285 5136 scope.go:117] "RemoveContainer" containerID="42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.066469 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82"} err="failed to get container status 
\"42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82\": rpc error: code = NotFound desc = could not find container \"42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82\": container with ID starting with 42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82 not found: ID does not exist" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.066822 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.086138 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 20 08:52:48 crc kubenswrapper[5136]: E0320 08:52:48.087567 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09516972-60d9-4cd7-96c6-adf48041a2bb" containerName="nova-api-log" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.087735 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="09516972-60d9-4cd7-96c6-adf48041a2bb" containerName="nova-api-log" Mar 20 08:52:48 crc kubenswrapper[5136]: E0320 08:52:48.087861 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09516972-60d9-4cd7-96c6-adf48041a2bb" containerName="nova-api-api" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.087992 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="09516972-60d9-4cd7-96c6-adf48041a2bb" containerName="nova-api-api" Mar 20 08:52:48 crc kubenswrapper[5136]: E0320 08:52:48.088142 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0869b44d-0a1b-47ae-9836-8940a31bfcf3" containerName="nova-manage" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.088234 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0869b44d-0a1b-47ae-9836-8940a31bfcf3" containerName="nova-manage" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.088998 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="09516972-60d9-4cd7-96c6-adf48041a2bb" containerName="nova-api-api" Mar 20 
08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.089112 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="09516972-60d9-4cd7-96c6-adf48041a2bb" containerName="nova-api-log" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.089214 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0869b44d-0a1b-47ae-9836-8940a31bfcf3" containerName="nova-manage" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.091274 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.104230 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.132625 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.187596 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rb5h\" (UniqueName: \"kubernetes.io/projected/579d7134-2752-49f9-b511-ec4c1c43e855-kube-api-access-8rb5h\") pod \"nova-api-0\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.187910 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/579d7134-2752-49f9-b511-ec4c1c43e855-logs\") pod \"nova-api-0\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.187933 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/579d7134-2752-49f9-b511-ec4c1c43e855-config-data\") pod \"nova-api-0\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " pod="openstack/nova-api-0" Mar 20 08:52:48 crc 
kubenswrapper[5136]: I0320 08:52:48.187968 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579d7134-2752-49f9-b511-ec4c1c43e855-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.290284 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/579d7134-2752-49f9-b511-ec4c1c43e855-logs\") pod \"nova-api-0\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.290337 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/579d7134-2752-49f9-b511-ec4c1c43e855-config-data\") pod \"nova-api-0\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.290397 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579d7134-2752-49f9-b511-ec4c1c43e855-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.290487 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rb5h\" (UniqueName: \"kubernetes.io/projected/579d7134-2752-49f9-b511-ec4c1c43e855-kube-api-access-8rb5h\") pod \"nova-api-0\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.291236 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/579d7134-2752-49f9-b511-ec4c1c43e855-logs\") pod 
\"nova-api-0\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.297239 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/579d7134-2752-49f9-b511-ec4c1c43e855-config-data\") pod \"nova-api-0\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.298785 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579d7134-2752-49f9-b511-ec4c1c43e855-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.307281 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rb5h\" (UniqueName: \"kubernetes.io/projected/579d7134-2752-49f9-b511-ec4c1c43e855-kube-api-access-8rb5h\") pod \"nova-api-0\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.412335 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09516972-60d9-4cd7-96c6-adf48041a2bb" path="/var/lib/kubelet/pods/09516972-60d9-4cd7-96c6-adf48041a2bb/volumes" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.438967 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.664931 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.735356 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2"] Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.735646 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" podUID="2902cdfa-3695-49ec-a36d-73082b9aa5a5" containerName="dnsmasq-dns" containerID="cri-o://11cce7a508814881b536262c59bb79c78ef540e64c7ff86205cc1f7942262b6d" gracePeriod=10 Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.883785 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:52:48 crc kubenswrapper[5136]: W0320 08:52:48.884367 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod579d7134_2752_49f9_b511_ec4c1c43e855.slice/crio-cf32985f1285ef1aad7b8fc54ab7da07f587890e27cb5380195236bfc72ef06e WatchSource:0}: Error finding container cf32985f1285ef1aad7b8fc54ab7da07f587890e27cb5380195236bfc72ef06e: Status 404 returned error can't find the container with id cf32985f1285ef1aad7b8fc54ab7da07f587890e27cb5380195236bfc72ef06e Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.025062 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"579d7134-2752-49f9-b511-ec4c1c43e855","Type":"ContainerStarted","Data":"cf32985f1285ef1aad7b8fc54ab7da07f587890e27cb5380195236bfc72ef06e"} Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.030269 5136 generic.go:334] "Generic (PLEG): container finished" podID="2902cdfa-3695-49ec-a36d-73082b9aa5a5" containerID="11cce7a508814881b536262c59bb79c78ef540e64c7ff86205cc1f7942262b6d" exitCode=0 Mar 
20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.030352 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" event={"ID":"2902cdfa-3695-49ec-a36d-73082b9aa5a5","Type":"ContainerDied","Data":"11cce7a508814881b536262c59bb79c78ef540e64c7ff86205cc1f7942262b6d"} Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.030483 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6713f83b-29eb-4f81-a24c-fbc604bce554" containerName="nova-metadata-log" containerID="cri-o://d2b7b8267a2421e84ce96b443fe40959edd3068e2febff192b19ffb01873d453" gracePeriod=30 Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.030585 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6713f83b-29eb-4f81-a24c-fbc604bce554" containerName="nova-metadata-metadata" containerID="cri-o://96278179e56c5d2348b8c914cce3f120d3445bc4fb5c442e6054e1df59c3b3ea" gracePeriod=30 Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.335776 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.528190 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-ovsdbserver-sb\") pod \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.528254 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-dns-svc\") pod \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.528316 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmftd\" (UniqueName: \"kubernetes.io/projected/2902cdfa-3695-49ec-a36d-73082b9aa5a5-kube-api-access-mmftd\") pod \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.528367 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-ovsdbserver-nb\") pod \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.528890 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-config\") pod \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.550114 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2902cdfa-3695-49ec-a36d-73082b9aa5a5-kube-api-access-mmftd" (OuterVolumeSpecName: "kube-api-access-mmftd") pod "2902cdfa-3695-49ec-a36d-73082b9aa5a5" (UID: "2902cdfa-3695-49ec-a36d-73082b9aa5a5"). InnerVolumeSpecName "kube-api-access-mmftd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.631707 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmftd\" (UniqueName: \"kubernetes.io/projected/2902cdfa-3695-49ec-a36d-73082b9aa5a5-kube-api-access-mmftd\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.682758 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2902cdfa-3695-49ec-a36d-73082b9aa5a5" (UID: "2902cdfa-3695-49ec-a36d-73082b9aa5a5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.687069 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2902cdfa-3695-49ec-a36d-73082b9aa5a5" (UID: "2902cdfa-3695-49ec-a36d-73082b9aa5a5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.704630 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2902cdfa-3695-49ec-a36d-73082b9aa5a5" (UID: "2902cdfa-3695-49ec-a36d-73082b9aa5a5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.709190 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-config" (OuterVolumeSpecName: "config") pod "2902cdfa-3695-49ec-a36d-73082b9aa5a5" (UID: "2902cdfa-3695-49ec-a36d-73082b9aa5a5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.733016 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.733045 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.733055 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.733065 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.040628 5136 generic.go:334] "Generic (PLEG): container finished" podID="6713f83b-29eb-4f81-a24c-fbc604bce554" containerID="96278179e56c5d2348b8c914cce3f120d3445bc4fb5c442e6054e1df59c3b3ea" exitCode=0 Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.040662 5136 generic.go:334] "Generic (PLEG): container finished" podID="6713f83b-29eb-4f81-a24c-fbc604bce554" 
containerID="d2b7b8267a2421e84ce96b443fe40959edd3068e2febff192b19ffb01873d453" exitCode=143 Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.040672 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6713f83b-29eb-4f81-a24c-fbc604bce554","Type":"ContainerDied","Data":"96278179e56c5d2348b8c914cce3f120d3445bc4fb5c442e6054e1df59c3b3ea"} Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.040703 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6713f83b-29eb-4f81-a24c-fbc604bce554","Type":"ContainerDied","Data":"d2b7b8267a2421e84ce96b443fe40959edd3068e2febff192b19ffb01873d453"} Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.042292 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"579d7134-2752-49f9-b511-ec4c1c43e855","Type":"ContainerStarted","Data":"32962fd8461e45f8def763e9f82cc8aede01d4f696222a24a4de4aaaec3639ed"} Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.042324 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"579d7134-2752-49f9-b511-ec4c1c43e855","Type":"ContainerStarted","Data":"96734219fb7514c4b99e79b7d0fa025cf134e8f0dbe5153409e89a642d0232e0"} Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.043599 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" event={"ID":"2902cdfa-3695-49ec-a36d-73082b9aa5a5","Type":"ContainerDied","Data":"3724ba25b3c3d5b60071b8d78e6fb6e8e43e3c7f75f11f016def345af42800c4"} Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.043634 5136 scope.go:117] "RemoveContainer" containerID="11cce7a508814881b536262c59bb79c78ef540e64c7ff86205cc1f7942262b6d" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.043749 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.059170 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.065405 5136 scope.go:117] "RemoveContainer" containerID="174d06d5a4cd8a3ee5fe8c3756254a01a6a8554baf9bae2be57775301d65bd05" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.073301 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.073283275 podStartE2EDuration="2.073283275s" podCreationTimestamp="2026-03-20 08:52:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:52:50.068237388 +0000 UTC m=+7402.327548549" watchObservedRunningTime="2026-03-20 08:52:50.073283275 +0000 UTC m=+7402.332594426" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.123710 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2"] Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.131885 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2"] Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.150491 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-config-data\") pod \"6713f83b-29eb-4f81-a24c-fbc604bce554\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.150561 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6713f83b-29eb-4f81-a24c-fbc604bce554-logs\") pod \"6713f83b-29eb-4f81-a24c-fbc604bce554\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " Mar 20 
08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.150644 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swm2w\" (UniqueName: \"kubernetes.io/projected/6713f83b-29eb-4f81-a24c-fbc604bce554-kube-api-access-swm2w\") pod \"6713f83b-29eb-4f81-a24c-fbc604bce554\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.150668 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-combined-ca-bundle\") pod \"6713f83b-29eb-4f81-a24c-fbc604bce554\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.150729 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-nova-metadata-tls-certs\") pod \"6713f83b-29eb-4f81-a24c-fbc604bce554\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.150791 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6713f83b-29eb-4f81-a24c-fbc604bce554-logs" (OuterVolumeSpecName: "logs") pod "6713f83b-29eb-4f81-a24c-fbc604bce554" (UID: "6713f83b-29eb-4f81-a24c-fbc604bce554"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.151165 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6713f83b-29eb-4f81-a24c-fbc604bce554-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.156376 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6713f83b-29eb-4f81-a24c-fbc604bce554-kube-api-access-swm2w" (OuterVolumeSpecName: "kube-api-access-swm2w") pod "6713f83b-29eb-4f81-a24c-fbc604bce554" (UID: "6713f83b-29eb-4f81-a24c-fbc604bce554"). InnerVolumeSpecName "kube-api-access-swm2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.174583 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-config-data" (OuterVolumeSpecName: "config-data") pod "6713f83b-29eb-4f81-a24c-fbc604bce554" (UID: "6713f83b-29eb-4f81-a24c-fbc604bce554"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.183324 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6713f83b-29eb-4f81-a24c-fbc604bce554" (UID: "6713f83b-29eb-4f81-a24c-fbc604bce554"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.217024 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "6713f83b-29eb-4f81-a24c-fbc604bce554" (UID: "6713f83b-29eb-4f81-a24c-fbc604bce554"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.251919 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.251955 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swm2w\" (UniqueName: \"kubernetes.io/projected/6713f83b-29eb-4f81-a24c-fbc604bce554-kube-api-access-swm2w\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.251966 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.251975 5136 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.407126 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2902cdfa-3695-49ec-a36d-73082b9aa5a5" path="/var/lib/kubelet/pods/2902cdfa-3695-49ec-a36d-73082b9aa5a5/volumes" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.053369 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"6713f83b-29eb-4f81-a24c-fbc604bce554","Type":"ContainerDied","Data":"7c5086e883b6c4d88baa490a424515f21d222c4b5586728c1daed8a11d7670ee"} Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.053396 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.053436 5136 scope.go:117] "RemoveContainer" containerID="96278179e56c5d2348b8c914cce3f120d3445bc4fb5c442e6054e1df59c3b3ea" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.153603 5136 scope.go:117] "RemoveContainer" containerID="d2b7b8267a2421e84ce96b443fe40959edd3068e2febff192b19ffb01873d453" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.157587 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.177300 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.188432 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 20 08:52:51 crc kubenswrapper[5136]: E0320 08:52:51.188804 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2902cdfa-3695-49ec-a36d-73082b9aa5a5" containerName="dnsmasq-dns" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.188832 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2902cdfa-3695-49ec-a36d-73082b9aa5a5" containerName="dnsmasq-dns" Mar 20 08:52:51 crc kubenswrapper[5136]: E0320 08:52:51.188846 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6713f83b-29eb-4f81-a24c-fbc604bce554" containerName="nova-metadata-log" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.188852 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6713f83b-29eb-4f81-a24c-fbc604bce554" containerName="nova-metadata-log" Mar 20 08:52:51 crc kubenswrapper[5136]: 
E0320 08:52:51.188866 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6713f83b-29eb-4f81-a24c-fbc604bce554" containerName="nova-metadata-metadata" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.188872 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6713f83b-29eb-4f81-a24c-fbc604bce554" containerName="nova-metadata-metadata" Mar 20 08:52:51 crc kubenswrapper[5136]: E0320 08:52:51.188886 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2902cdfa-3695-49ec-a36d-73082b9aa5a5" containerName="init" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.188892 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2902cdfa-3695-49ec-a36d-73082b9aa5a5" containerName="init" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.189046 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="2902cdfa-3695-49ec-a36d-73082b9aa5a5" containerName="dnsmasq-dns" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.189055 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6713f83b-29eb-4f81-a24c-fbc604bce554" containerName="nova-metadata-log" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.189077 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6713f83b-29eb-4f81-a24c-fbc604bce554" containerName="nova-metadata-metadata" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.189962 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.203320 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.210684 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.223483 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.371653 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.372448 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-config-data\") pod \"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.372501 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b168d83c-bd4d-4187-915f-59b00d213a23-logs\") pod \"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.372552 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.372610 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls6gj\" (UniqueName: \"kubernetes.io/projected/b168d83c-bd4d-4187-915f-59b00d213a23-kube-api-access-ls6gj\") pod \"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.387772 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.475669 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-config-data\") pod \"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.475840 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b168d83c-bd4d-4187-915f-59b00d213a23-logs\") pod \"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.475912 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.475951 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls6gj\" (UniqueName: \"kubernetes.io/projected/b168d83c-bd4d-4187-915f-59b00d213a23-kube-api-access-ls6gj\") pod 
\"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.476029 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.477502 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b168d83c-bd4d-4187-915f-59b00d213a23-logs\") pod \"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.480462 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-config-data\") pod \"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.481411 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.491730 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.494484 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ls6gj\" (UniqueName: \"kubernetes.io/projected/b168d83c-bd4d-4187-915f-59b00d213a23-kube-api-access-ls6gj\") pod \"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.512741 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.946791 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 08:52:52 crc kubenswrapper[5136]: I0320 08:52:52.064613 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b168d83c-bd4d-4187-915f-59b00d213a23","Type":"ContainerStarted","Data":"7d0527f6839d134920c7511c9424a689979b1e2fa3991597ecf809ce71d7c929"} Mar 20 08:52:52 crc kubenswrapper[5136]: I0320 08:52:52.407474 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6713f83b-29eb-4f81-a24c-fbc604bce554" path="/var/lib/kubelet/pods/6713f83b-29eb-4f81-a24c-fbc604bce554/volumes" Mar 20 08:52:53 crc kubenswrapper[5136]: I0320 08:52:53.077765 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b168d83c-bd4d-4187-915f-59b00d213a23","Type":"ContainerStarted","Data":"7d3830628da3a3cae106b1557fd6bb1ab85eca213d98f75642a61a34ff4dc19f"} Mar 20 08:52:53 crc kubenswrapper[5136]: I0320 08:52:53.078284 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b168d83c-bd4d-4187-915f-59b00d213a23","Type":"ContainerStarted","Data":"f98edba9123fda937d7ad4a60357772d3d424d59ab9f581f62837bcf3c5fd229"} Mar 20 08:52:53 crc kubenswrapper[5136]: I0320 08:52:53.102282 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.102266773 podStartE2EDuration="2.102266773s" 
podCreationTimestamp="2026-03-20 08:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:52:53.101743717 +0000 UTC m=+7405.361054878" watchObservedRunningTime="2026-03-20 08:52:53.102266773 +0000 UTC m=+7405.361577924" Mar 20 08:52:58 crc kubenswrapper[5136]: I0320 08:52:58.440386 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 08:52:58 crc kubenswrapper[5136]: I0320 08:52:58.441026 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 08:52:59 crc kubenswrapper[5136]: I0320 08:52:59.523055 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.138:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 08:52:59 crc kubenswrapper[5136]: I0320 08:52:59.523059 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.138:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:01 crc kubenswrapper[5136]: I0320 08:53:01.513465 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 20 08:53:01 crc kubenswrapper[5136]: I0320 08:53:01.513840 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 20 08:53:02 crc kubenswrapper[5136]: I0320 08:53:02.528028 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-metadata" probeResult="failure" output="Get 
\"https://10.217.1.139:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:02 crc kubenswrapper[5136]: I0320 08:53:02.528076 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.139:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:06 crc kubenswrapper[5136]: I0320 08:53:06.439656 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 20 08:53:06 crc kubenswrapper[5136]: I0320 08:53:06.440120 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 20 08:53:09 crc kubenswrapper[5136]: I0320 08:53:09.513753 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 20 08:53:09 crc kubenswrapper[5136]: I0320 08:53:09.514039 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 20 08:53:09 crc kubenswrapper[5136]: I0320 08:53:09.522990 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.138:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:09 crc kubenswrapper[5136]: I0320 08:53:09.523038 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.138:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:12 crc kubenswrapper[5136]: I0320 08:53:12.522939 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.139:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:12 crc kubenswrapper[5136]: I0320 08:53:12.522939 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.139:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:13 crc kubenswrapper[5136]: E0320 08:53:13.031973 5136 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb609af52_e8bb_4279_b472_39d6e572932e.slice/crio-conmon-997cb1e45862f8242bc1ed8efcbcfbaf8f8a8c5b47fc4938e5bd8fa980e12e82.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb609af52_e8bb_4279_b472_39d6e572932e.slice/crio-997cb1e45862f8242bc1ed8efcbcfbaf8f8a8c5b47fc4938e5bd8fa980e12e82.scope\": RecentStats: unable to find data in memory cache]" Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.271273 5136 generic.go:334] "Generic (PLEG): container finished" podID="b609af52-e8bb-4279-b472-39d6e572932e" containerID="997cb1e45862f8242bc1ed8efcbcfbaf8f8a8c5b47fc4938e5bd8fa980e12e82" exitCode=137 Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.271476 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b609af52-e8bb-4279-b472-39d6e572932e","Type":"ContainerDied","Data":"997cb1e45862f8242bc1ed8efcbcfbaf8f8a8c5b47fc4938e5bd8fa980e12e82"} Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.271627 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"b609af52-e8bb-4279-b472-39d6e572932e","Type":"ContainerDied","Data":"53b67fe2d8796bd5fc35565ee671e2c0b2d50b59df54ce42cd5329a782fb1ab6"} Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.271642 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53b67fe2d8796bd5fc35565ee671e2c0b2d50b59df54ce42cd5329a782fb1ab6" Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.309189 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.433544 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b609af52-e8bb-4279-b472-39d6e572932e-combined-ca-bundle\") pod \"b609af52-e8bb-4279-b472-39d6e572932e\" (UID: \"b609af52-e8bb-4279-b472-39d6e572932e\") " Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.433619 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw4wt\" (UniqueName: \"kubernetes.io/projected/b609af52-e8bb-4279-b472-39d6e572932e-kube-api-access-bw4wt\") pod \"b609af52-e8bb-4279-b472-39d6e572932e\" (UID: \"b609af52-e8bb-4279-b472-39d6e572932e\") " Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.433787 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b609af52-e8bb-4279-b472-39d6e572932e-config-data\") pod \"b609af52-e8bb-4279-b472-39d6e572932e\" (UID: \"b609af52-e8bb-4279-b472-39d6e572932e\") " Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.441023 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b609af52-e8bb-4279-b472-39d6e572932e-kube-api-access-bw4wt" (OuterVolumeSpecName: "kube-api-access-bw4wt") pod "b609af52-e8bb-4279-b472-39d6e572932e" (UID: "b609af52-e8bb-4279-b472-39d6e572932e"). 
InnerVolumeSpecName "kube-api-access-bw4wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.459621 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b609af52-e8bb-4279-b472-39d6e572932e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b609af52-e8bb-4279-b472-39d6e572932e" (UID: "b609af52-e8bb-4279-b472-39d6e572932e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.464362 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b609af52-e8bb-4279-b472-39d6e572932e-config-data" (OuterVolumeSpecName: "config-data") pod "b609af52-e8bb-4279-b472-39d6e572932e" (UID: "b609af52-e8bb-4279-b472-39d6e572932e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.539275 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b609af52-e8bb-4279-b472-39d6e572932e-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.539312 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b609af52-e8bb-4279-b472-39d6e572932e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.539324 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw4wt\" (UniqueName: \"kubernetes.io/projected/b609af52-e8bb-4279-b472-39d6e572932e-kube-api-access-bw4wt\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.279477 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.322457 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.331161 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.356752 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 08:53:14 crc kubenswrapper[5136]: E0320 08:53:14.357214 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b609af52-e8bb-4279-b472-39d6e572932e" containerName="nova-cell1-novncproxy-novncproxy" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.357239 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b609af52-e8bb-4279-b472-39d6e572932e" containerName="nova-cell1-novncproxy-novncproxy" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.358804 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="b609af52-e8bb-4279-b472-39d6e572932e" containerName="nova-cell1-novncproxy-novncproxy" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.359591 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.364996 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.367594 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.368291 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.381673 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.412766 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b609af52-e8bb-4279-b472-39d6e572932e" path="/var/lib/kubelet/pods/b609af52-e8bb-4279-b472-39d6e572932e/volumes" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.455031 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6j8r\" (UniqueName: \"kubernetes.io/projected/65b4b8da-0eda-4a77-aeed-0a6f9350a942-kube-api-access-z6j8r\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.455083 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.455791 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.455970 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.456082 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.557969 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.558025 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.558057 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.558148 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6j8r\" (UniqueName: \"kubernetes.io/projected/65b4b8da-0eda-4a77-aeed-0a6f9350a942-kube-api-access-z6j8r\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.558183 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.563242 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.564323 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.565181 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-combined-ca-bundle\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.565963 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.577762 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6j8r\" (UniqueName: \"kubernetes.io/projected/65b4b8da-0eda-4a77-aeed-0a6f9350a942-kube-api-access-z6j8r\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.686018 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:15 crc kubenswrapper[5136]: I0320 08:53:15.156351 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 08:53:15 crc kubenswrapper[5136]: I0320 08:53:15.287300 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"65b4b8da-0eda-4a77-aeed-0a6f9350a942","Type":"ContainerStarted","Data":"80db3f58b1ebfbb9a5e2a7946e85f8a9a484a8c2e28fb3c3b16dbcc6876113ea"} Mar 20 08:53:16 crc kubenswrapper[5136]: I0320 08:53:16.296035 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"65b4b8da-0eda-4a77-aeed-0a6f9350a942","Type":"ContainerStarted","Data":"bf5e034bd14558262e03fbe126fcd810c8c7101bbdef0bc2012ee0b90cfbc961"} Mar 20 08:53:16 crc kubenswrapper[5136]: I0320 08:53:16.316873 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.316848834 podStartE2EDuration="2.316848834s" podCreationTimestamp="2026-03-20 08:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:53:16.310703114 +0000 UTC m=+7428.570014255" watchObservedRunningTime="2026-03-20 08:53:16.316848834 +0000 UTC m=+7428.576159985" Mar 20 08:53:17 crc kubenswrapper[5136]: I0320 08:53:17.310743 5136 generic.go:334] "Generic (PLEG): container finished" podID="55055431-47a0-4022-a32a-5b2b1ef303ac" containerID="332a221e0ff579954272eaf6146f26f2dac553c0d82d235c294584854974af51" exitCode=137 Mar 20 08:53:17 crc kubenswrapper[5136]: I0320 08:53:17.310857 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"55055431-47a0-4022-a32a-5b2b1ef303ac","Type":"ContainerDied","Data":"332a221e0ff579954272eaf6146f26f2dac553c0d82d235c294584854974af51"} Mar 20 08:53:17 crc kubenswrapper[5136]: I0320 08:53:17.601774 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 08:53:17 crc kubenswrapper[5136]: I0320 08:53:17.707189 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9f4s\" (UniqueName: \"kubernetes.io/projected/55055431-47a0-4022-a32a-5b2b1ef303ac-kube-api-access-t9f4s\") pod \"55055431-47a0-4022-a32a-5b2b1ef303ac\" (UID: \"55055431-47a0-4022-a32a-5b2b1ef303ac\") " Mar 20 08:53:17 crc kubenswrapper[5136]: I0320 08:53:17.707467 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55055431-47a0-4022-a32a-5b2b1ef303ac-combined-ca-bundle\") pod \"55055431-47a0-4022-a32a-5b2b1ef303ac\" (UID: \"55055431-47a0-4022-a32a-5b2b1ef303ac\") " Mar 20 08:53:17 crc kubenswrapper[5136]: I0320 08:53:17.707578 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55055431-47a0-4022-a32a-5b2b1ef303ac-config-data\") pod \"55055431-47a0-4022-a32a-5b2b1ef303ac\" (UID: \"55055431-47a0-4022-a32a-5b2b1ef303ac\") " Mar 20 08:53:17 crc kubenswrapper[5136]: I0320 08:53:17.712031 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55055431-47a0-4022-a32a-5b2b1ef303ac-kube-api-access-t9f4s" (OuterVolumeSpecName: "kube-api-access-t9f4s") pod "55055431-47a0-4022-a32a-5b2b1ef303ac" (UID: "55055431-47a0-4022-a32a-5b2b1ef303ac"). InnerVolumeSpecName "kube-api-access-t9f4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:53:17 crc kubenswrapper[5136]: I0320 08:53:17.751233 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55055431-47a0-4022-a32a-5b2b1ef303ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55055431-47a0-4022-a32a-5b2b1ef303ac" (UID: "55055431-47a0-4022-a32a-5b2b1ef303ac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:53:17 crc kubenswrapper[5136]: I0320 08:53:17.759644 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55055431-47a0-4022-a32a-5b2b1ef303ac-config-data" (OuterVolumeSpecName: "config-data") pod "55055431-47a0-4022-a32a-5b2b1ef303ac" (UID: "55055431-47a0-4022-a32a-5b2b1ef303ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:53:17 crc kubenswrapper[5136]: I0320 08:53:17.809886 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55055431-47a0-4022-a32a-5b2b1ef303ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:17 crc kubenswrapper[5136]: I0320 08:53:17.809938 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55055431-47a0-4022-a32a-5b2b1ef303ac-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:17 crc kubenswrapper[5136]: I0320 08:53:17.809956 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9f4s\" (UniqueName: \"kubernetes.io/projected/55055431-47a0-4022-a32a-5b2b1ef303ac-kube-api-access-t9f4s\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.329878 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"55055431-47a0-4022-a32a-5b2b1ef303ac","Type":"ContainerDied","Data":"62d977f140d52d185ad8e335d0e34478d3fe4528e116c9595298eb659df62cab"} Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.330129 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.330440 5136 scope.go:117] "RemoveContainer" containerID="332a221e0ff579954272eaf6146f26f2dac553c0d82d235c294584854974af51" Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.368896 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.391205 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.416503 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55055431-47a0-4022-a32a-5b2b1ef303ac" path="/var/lib/kubelet/pods/55055431-47a0-4022-a32a-5b2b1ef303ac/volumes" Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.417265 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 08:53:18 crc kubenswrapper[5136]: E0320 08:53:18.417618 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55055431-47a0-4022-a32a-5b2b1ef303ac" containerName="nova-scheduler-scheduler" Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.417637 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="55055431-47a0-4022-a32a-5b2b1ef303ac" containerName="nova-scheduler-scheduler" Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.417861 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="55055431-47a0-4022-a32a-5b2b1ef303ac" containerName="nova-scheduler-scheduler" Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.418547 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.418640 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.423126 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.429629 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fbe2855-6fbb-40f0-bea7-43b853e673ba-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\") " pod="openstack/nova-scheduler-0" Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.429725 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fbe2855-6fbb-40f0-bea7-43b853e673ba-config-data\") pod \"nova-scheduler-0\" (UID: \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\") " pod="openstack/nova-scheduler-0" Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.531302 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fbe2855-6fbb-40f0-bea7-43b853e673ba-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\") " pod="openstack/nova-scheduler-0" Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.532078 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fbe2855-6fbb-40f0-bea7-43b853e673ba-config-data\") pod \"nova-scheduler-0\" (UID: \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\") " pod="openstack/nova-scheduler-0" Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.532271 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjjhz\" (UniqueName: 
\"kubernetes.io/projected/8fbe2855-6fbb-40f0-bea7-43b853e673ba-kube-api-access-cjjhz\") pod \"nova-scheduler-0\" (UID: \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\") " pod="openstack/nova-scheduler-0" Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.535191 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fbe2855-6fbb-40f0-bea7-43b853e673ba-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\") " pod="openstack/nova-scheduler-0" Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.543353 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fbe2855-6fbb-40f0-bea7-43b853e673ba-config-data\") pod \"nova-scheduler-0\" (UID: \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\") " pod="openstack/nova-scheduler-0" Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.635140 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjjhz\" (UniqueName: \"kubernetes.io/projected/8fbe2855-6fbb-40f0-bea7-43b853e673ba-kube-api-access-cjjhz\") pod \"nova-scheduler-0\" (UID: \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\") " pod="openstack/nova-scheduler-0" Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.651250 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjjhz\" (UniqueName: \"kubernetes.io/projected/8fbe2855-6fbb-40f0-bea7-43b853e673ba-kube-api-access-cjjhz\") pod \"nova-scheduler-0\" (UID: \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\") " pod="openstack/nova-scheduler-0" Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.744476 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 08:53:19 crc kubenswrapper[5136]: I0320 08:53:19.164241 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 08:53:19 crc kubenswrapper[5136]: W0320 08:53:19.167412 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fbe2855_6fbb_40f0_bea7_43b853e673ba.slice/crio-06eb0e932ffea6d92e18230014df407dbddf53d2394ca0952871e976aa85a7c5 WatchSource:0}: Error finding container 06eb0e932ffea6d92e18230014df407dbddf53d2394ca0952871e976aa85a7c5: Status 404 returned error can't find the container with id 06eb0e932ffea6d92e18230014df407dbddf53d2394ca0952871e976aa85a7c5 Mar 20 08:53:19 crc kubenswrapper[5136]: I0320 08:53:19.338032 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8fbe2855-6fbb-40f0-bea7-43b853e673ba","Type":"ContainerStarted","Data":"06eb0e932ffea6d92e18230014df407dbddf53d2394ca0952871e976aa85a7c5"} Mar 20 08:53:19 crc kubenswrapper[5136]: I0320 08:53:19.485030 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.138:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:19 crc kubenswrapper[5136]: I0320 08:53:19.485378 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.138:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:19 crc kubenswrapper[5136]: I0320 08:53:19.686185 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:20 crc kubenswrapper[5136]: I0320 
08:53:20.350255 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8fbe2855-6fbb-40f0-bea7-43b853e673ba","Type":"ContainerStarted","Data":"49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4"} Mar 20 08:53:20 crc kubenswrapper[5136]: I0320 08:53:20.372718 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.372695622 podStartE2EDuration="2.372695622s" podCreationTimestamp="2026-03-20 08:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:53:20.363115725 +0000 UTC m=+7432.622426876" watchObservedRunningTime="2026-03-20 08:53:20.372695622 +0000 UTC m=+7432.632006783" Mar 20 08:53:22 crc kubenswrapper[5136]: I0320 08:53:22.522960 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.139:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:22 crc kubenswrapper[5136]: I0320 08:53:22.522990 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.139:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:23 crc kubenswrapper[5136]: I0320 08:53:23.745491 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 20 08:53:24 crc kubenswrapper[5136]: I0320 08:53:24.687317 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:24 crc kubenswrapper[5136]: I0320 08:53:24.705683 5136 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.416284 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.563966 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-dqx9d"] Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.565069 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dqx9d" Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.567277 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.567346 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.577788 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-dqx9d"] Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.582295 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-scripts\") pod \"nova-cell1-cell-mapping-dqx9d\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " pod="openstack/nova-cell1-cell-mapping-dqx9d" Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.582344 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-config-data\") pod \"nova-cell1-cell-mapping-dqx9d\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " pod="openstack/nova-cell1-cell-mapping-dqx9d" Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.582699 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ngb7\" (UniqueName: \"kubernetes.io/projected/db04162b-4913-4acc-b387-d7324202a05b-kube-api-access-6ngb7\") pod \"nova-cell1-cell-mapping-dqx9d\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " pod="openstack/nova-cell1-cell-mapping-dqx9d" Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.583062 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dqx9d\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " pod="openstack/nova-cell1-cell-mapping-dqx9d" Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.684652 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ngb7\" (UniqueName: \"kubernetes.io/projected/db04162b-4913-4acc-b387-d7324202a05b-kube-api-access-6ngb7\") pod \"nova-cell1-cell-mapping-dqx9d\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " pod="openstack/nova-cell1-cell-mapping-dqx9d" Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.685158 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dqx9d\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " pod="openstack/nova-cell1-cell-mapping-dqx9d" Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.685243 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-scripts\") pod \"nova-cell1-cell-mapping-dqx9d\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " pod="openstack/nova-cell1-cell-mapping-dqx9d" Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 
08:53:25.685270 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-config-data\") pod \"nova-cell1-cell-mapping-dqx9d\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " pod="openstack/nova-cell1-cell-mapping-dqx9d" Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.691388 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dqx9d\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " pod="openstack/nova-cell1-cell-mapping-dqx9d" Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.704801 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-config-data\") pod \"nova-cell1-cell-mapping-dqx9d\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " pod="openstack/nova-cell1-cell-mapping-dqx9d" Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.704932 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ngb7\" (UniqueName: \"kubernetes.io/projected/db04162b-4913-4acc-b387-d7324202a05b-kube-api-access-6ngb7\") pod \"nova-cell1-cell-mapping-dqx9d\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " pod="openstack/nova-cell1-cell-mapping-dqx9d" Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.706353 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-scripts\") pod \"nova-cell1-cell-mapping-dqx9d\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " pod="openstack/nova-cell1-cell-mapping-dqx9d" Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.883394 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dqx9d" Mar 20 08:53:26 crc kubenswrapper[5136]: I0320 08:53:26.361334 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-dqx9d"] Mar 20 08:53:26 crc kubenswrapper[5136]: I0320 08:53:26.410210 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dqx9d" event={"ID":"db04162b-4913-4acc-b387-d7324202a05b","Type":"ContainerStarted","Data":"d2ae94894fbea7420f6354e5d1a88eec38b5ad81604fa2329e2b4a94c19c8eee"} Mar 20 08:53:27 crc kubenswrapper[5136]: I0320 08:53:27.414570 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dqx9d" event={"ID":"db04162b-4913-4acc-b387-d7324202a05b","Type":"ContainerStarted","Data":"be01a18339108a324f38d8991f30133c5afcdbbf8536a6fc62d20def93a4fe70"} Mar 20 08:53:27 crc kubenswrapper[5136]: I0320 08:53:27.434413 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-dqx9d" podStartSLOduration=2.434392233 podStartE2EDuration="2.434392233s" podCreationTimestamp="2026-03-20 08:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:53:27.428611194 +0000 UTC m=+7439.687922335" watchObservedRunningTime="2026-03-20 08:53:27.434392233 +0000 UTC m=+7439.693703384" Mar 20 08:53:28 crc kubenswrapper[5136]: I0320 08:53:28.745095 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 20 08:53:28 crc kubenswrapper[5136]: I0320 08:53:28.775280 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 20 08:53:29 crc kubenswrapper[5136]: I0320 08:53:29.457470 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 20 08:53:29 crc kubenswrapper[5136]: I0320 
08:53:29.522841 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.138:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:29 crc kubenswrapper[5136]: I0320 08:53:29.522900 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.138:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:31 crc kubenswrapper[5136]: I0320 08:53:31.450271 5136 generic.go:334] "Generic (PLEG): container finished" podID="db04162b-4913-4acc-b387-d7324202a05b" containerID="be01a18339108a324f38d8991f30133c5afcdbbf8536a6fc62d20def93a4fe70" exitCode=0 Mar 20 08:53:31 crc kubenswrapper[5136]: I0320 08:53:31.450364 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dqx9d" event={"ID":"db04162b-4913-4acc-b387-d7324202a05b","Type":"ContainerDied","Data":"be01a18339108a324f38d8991f30133c5afcdbbf8536a6fc62d20def93a4fe70"} Mar 20 08:53:32 crc kubenswrapper[5136]: I0320 08:53:32.522000 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.139:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:32 crc kubenswrapper[5136]: I0320 08:53:32.522764 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.139:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 
20 08:53:32 crc kubenswrapper[5136]: I0320 08:53:32.783038 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dqx9d" Mar 20 08:53:32 crc kubenswrapper[5136]: I0320 08:53:32.916508 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ngb7\" (UniqueName: \"kubernetes.io/projected/db04162b-4913-4acc-b387-d7324202a05b-kube-api-access-6ngb7\") pod \"db04162b-4913-4acc-b387-d7324202a05b\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " Mar 20 08:53:32 crc kubenswrapper[5136]: I0320 08:53:32.916591 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-config-data\") pod \"db04162b-4913-4acc-b387-d7324202a05b\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " Mar 20 08:53:32 crc kubenswrapper[5136]: I0320 08:53:32.916649 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-scripts\") pod \"db04162b-4913-4acc-b387-d7324202a05b\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " Mar 20 08:53:32 crc kubenswrapper[5136]: I0320 08:53:32.916719 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-combined-ca-bundle\") pod \"db04162b-4913-4acc-b387-d7324202a05b\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " Mar 20 08:53:32 crc kubenswrapper[5136]: I0320 08:53:32.921639 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-scripts" (OuterVolumeSpecName: "scripts") pod "db04162b-4913-4acc-b387-d7324202a05b" (UID: "db04162b-4913-4acc-b387-d7324202a05b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:53:32 crc kubenswrapper[5136]: I0320 08:53:32.921746 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db04162b-4913-4acc-b387-d7324202a05b-kube-api-access-6ngb7" (OuterVolumeSpecName: "kube-api-access-6ngb7") pod "db04162b-4913-4acc-b387-d7324202a05b" (UID: "db04162b-4913-4acc-b387-d7324202a05b"). InnerVolumeSpecName "kube-api-access-6ngb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:53:32 crc kubenswrapper[5136]: I0320 08:53:32.941937 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db04162b-4913-4acc-b387-d7324202a05b" (UID: "db04162b-4913-4acc-b387-d7324202a05b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:53:32 crc kubenswrapper[5136]: I0320 08:53:32.945746 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-config-data" (OuterVolumeSpecName: "config-data") pod "db04162b-4913-4acc-b387-d7324202a05b" (UID: "db04162b-4913-4acc-b387-d7324202a05b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.018462 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ngb7\" (UniqueName: \"kubernetes.io/projected/db04162b-4913-4acc-b387-d7324202a05b-kube-api-access-6ngb7\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.018496 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.018507 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.018516 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.468797 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dqx9d" event={"ID":"db04162b-4913-4acc-b387-d7324202a05b","Type":"ContainerDied","Data":"d2ae94894fbea7420f6354e5d1a88eec38b5ad81604fa2329e2b4a94c19c8eee"} Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.469129 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2ae94894fbea7420f6354e5d1a88eec38b5ad81604fa2329e2b4a94c19c8eee" Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.468889 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dqx9d" Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.650754 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.651646 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-log" containerID="cri-o://96734219fb7514c4b99e79b7d0fa025cf134e8f0dbe5153409e89a642d0232e0" gracePeriod=30 Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.652176 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-api" containerID="cri-o://32962fd8461e45f8def763e9f82cc8aede01d4f696222a24a4de4aaaec3639ed" gracePeriod=30 Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.666616 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.666882 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8fbe2855-6fbb-40f0-bea7-43b853e673ba" containerName="nova-scheduler-scheduler" containerID="cri-o://49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" gracePeriod=30 Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.734685 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.734979 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-log" containerID="cri-o://f98edba9123fda937d7ad4a60357772d3d424d59ab9f581f62837bcf3c5fd229" gracePeriod=30 Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.735118 5136 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-metadata" containerID="cri-o://7d3830628da3a3cae106b1557fd6bb1ab85eca213d98f75642a61a34ff4dc19f" gracePeriod=30 Mar 20 08:53:33 crc kubenswrapper[5136]: E0320 08:53:33.747112 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:33 crc kubenswrapper[5136]: E0320 08:53:33.748607 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:33 crc kubenswrapper[5136]: E0320 08:53:33.749950 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:33 crc kubenswrapper[5136]: E0320 08:53:33.750010 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="8fbe2855-6fbb-40f0-bea7-43b853e673ba" containerName="nova-scheduler-scheduler" Mar 20 08:53:34 crc kubenswrapper[5136]: I0320 08:53:34.477333 5136 generic.go:334] "Generic (PLEG): container finished" 
podID="579d7134-2752-49f9-b511-ec4c1c43e855" containerID="96734219fb7514c4b99e79b7d0fa025cf134e8f0dbe5153409e89a642d0232e0" exitCode=143 Mar 20 08:53:34 crc kubenswrapper[5136]: I0320 08:53:34.477402 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"579d7134-2752-49f9-b511-ec4c1c43e855","Type":"ContainerDied","Data":"96734219fb7514c4b99e79b7d0fa025cf134e8f0dbe5153409e89a642d0232e0"} Mar 20 08:53:34 crc kubenswrapper[5136]: I0320 08:53:34.479245 5136 generic.go:334] "Generic (PLEG): container finished" podID="b168d83c-bd4d-4187-915f-59b00d213a23" containerID="f98edba9123fda937d7ad4a60357772d3d424d59ab9f581f62837bcf3c5fd229" exitCode=143 Mar 20 08:53:34 crc kubenswrapper[5136]: I0320 08:53:34.479270 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b168d83c-bd4d-4187-915f-59b00d213a23","Type":"ContainerDied","Data":"f98edba9123fda937d7ad4a60357772d3d424d59ab9f581f62837bcf3c5fd229"} Mar 20 08:53:38 crc kubenswrapper[5136]: E0320 08:53:38.747240 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:38 crc kubenswrapper[5136]: E0320 08:53:38.749230 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:38 crc kubenswrapper[5136]: E0320 08:53:38.750380 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, 
stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:38 crc kubenswrapper[5136]: E0320 08:53:38.750408 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="8fbe2855-6fbb-40f0-bea7-43b853e673ba" containerName="nova-scheduler-scheduler" Mar 20 08:53:43 crc kubenswrapper[5136]: E0320 08:53:43.755978 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:43 crc kubenswrapper[5136]: E0320 08:53:43.757874 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:43 crc kubenswrapper[5136]: E0320 08:53:43.758992 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:43 crc kubenswrapper[5136]: E0320 08:53:43.759035 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/nova-scheduler-0" podUID="8fbe2855-6fbb-40f0-bea7-43b853e673ba" containerName="nova-scheduler-scheduler" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.414104 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.502669 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/579d7134-2752-49f9-b511-ec4c1c43e855-logs\") pod \"579d7134-2752-49f9-b511-ec4c1c43e855\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.502722 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/579d7134-2752-49f9-b511-ec4c1c43e855-config-data\") pod \"579d7134-2752-49f9-b511-ec4c1c43e855\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.502772 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rb5h\" (UniqueName: \"kubernetes.io/projected/579d7134-2752-49f9-b511-ec4c1c43e855-kube-api-access-8rb5h\") pod \"579d7134-2752-49f9-b511-ec4c1c43e855\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.502809 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579d7134-2752-49f9-b511-ec4c1c43e855-combined-ca-bundle\") pod \"579d7134-2752-49f9-b511-ec4c1c43e855\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.503459 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/579d7134-2752-49f9-b511-ec4c1c43e855-logs" (OuterVolumeSpecName: "logs") pod "579d7134-2752-49f9-b511-ec4c1c43e855" (UID: 
"579d7134-2752-49f9-b511-ec4c1c43e855"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.508333 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/579d7134-2752-49f9-b511-ec4c1c43e855-kube-api-access-8rb5h" (OuterVolumeSpecName: "kube-api-access-8rb5h") pod "579d7134-2752-49f9-b511-ec4c1c43e855" (UID: "579d7134-2752-49f9-b511-ec4c1c43e855"). InnerVolumeSpecName "kube-api-access-8rb5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.526853 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/579d7134-2752-49f9-b511-ec4c1c43e855-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "579d7134-2752-49f9-b511-ec4c1c43e855" (UID: "579d7134-2752-49f9-b511-ec4c1c43e855"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.533046 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.544185 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/579d7134-2752-49f9-b511-ec4c1c43e855-config-data" (OuterVolumeSpecName: "config-data") pod "579d7134-2752-49f9-b511-ec4c1c43e855" (UID: "579d7134-2752-49f9-b511-ec4c1c43e855"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.595330 5136 generic.go:334] "Generic (PLEG): container finished" podID="b168d83c-bd4d-4187-915f-59b00d213a23" containerID="7d3830628da3a3cae106b1557fd6bb1ab85eca213d98f75642a61a34ff4dc19f" exitCode=0 Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.595404 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.595422 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b168d83c-bd4d-4187-915f-59b00d213a23","Type":"ContainerDied","Data":"7d3830628da3a3cae106b1557fd6bb1ab85eca213d98f75642a61a34ff4dc19f"} Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.595473 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b168d83c-bd4d-4187-915f-59b00d213a23","Type":"ContainerDied","Data":"7d0527f6839d134920c7511c9424a689979b1e2fa3991597ecf809ce71d7c929"} Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.595515 5136 scope.go:117] "RemoveContainer" containerID="7d3830628da3a3cae106b1557fd6bb1ab85eca213d98f75642a61a34ff4dc19f" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.598317 5136 generic.go:334] "Generic (PLEG): container finished" podID="579d7134-2752-49f9-b511-ec4c1c43e855" containerID="32962fd8461e45f8def763e9f82cc8aede01d4f696222a24a4de4aaaec3639ed" exitCode=0 Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.598360 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"579d7134-2752-49f9-b511-ec4c1c43e855","Type":"ContainerDied","Data":"32962fd8461e45f8def763e9f82cc8aede01d4f696222a24a4de4aaaec3639ed"} Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.598392 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.598399 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"579d7134-2752-49f9-b511-ec4c1c43e855","Type":"ContainerDied","Data":"cf32985f1285ef1aad7b8fc54ab7da07f587890e27cb5380195236bfc72ef06e"} Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.604557 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579d7134-2752-49f9-b511-ec4c1c43e855-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.604585 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/579d7134-2752-49f9-b511-ec4c1c43e855-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.604597 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/579d7134-2752-49f9-b511-ec4c1c43e855-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.604607 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rb5h\" (UniqueName: \"kubernetes.io/projected/579d7134-2752-49f9-b511-ec4c1c43e855-kube-api-access-8rb5h\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.617098 5136 scope.go:117] "RemoveContainer" containerID="f98edba9123fda937d7ad4a60357772d3d424d59ab9f581f62837bcf3c5fd229" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.635046 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.659474 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.660596 5136 scope.go:117] "RemoveContainer" 
containerID="7d3830628da3a3cae106b1557fd6bb1ab85eca213d98f75642a61a34ff4dc19f" Mar 20 08:53:47 crc kubenswrapper[5136]: E0320 08:53:47.662775 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d3830628da3a3cae106b1557fd6bb1ab85eca213d98f75642a61a34ff4dc19f\": container with ID starting with 7d3830628da3a3cae106b1557fd6bb1ab85eca213d98f75642a61a34ff4dc19f not found: ID does not exist" containerID="7d3830628da3a3cae106b1557fd6bb1ab85eca213d98f75642a61a34ff4dc19f" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.662826 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d3830628da3a3cae106b1557fd6bb1ab85eca213d98f75642a61a34ff4dc19f"} err="failed to get container status \"7d3830628da3a3cae106b1557fd6bb1ab85eca213d98f75642a61a34ff4dc19f\": rpc error: code = NotFound desc = could not find container \"7d3830628da3a3cae106b1557fd6bb1ab85eca213d98f75642a61a34ff4dc19f\": container with ID starting with 7d3830628da3a3cae106b1557fd6bb1ab85eca213d98f75642a61a34ff4dc19f not found: ID does not exist" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.662849 5136 scope.go:117] "RemoveContainer" containerID="f98edba9123fda937d7ad4a60357772d3d424d59ab9f581f62837bcf3c5fd229" Mar 20 08:53:47 crc kubenswrapper[5136]: E0320 08:53:47.663185 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f98edba9123fda937d7ad4a60357772d3d424d59ab9f581f62837bcf3c5fd229\": container with ID starting with f98edba9123fda937d7ad4a60357772d3d424d59ab9f581f62837bcf3c5fd229 not found: ID does not exist" containerID="f98edba9123fda937d7ad4a60357772d3d424d59ab9f581f62837bcf3c5fd229" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.663216 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f98edba9123fda937d7ad4a60357772d3d424d59ab9f581f62837bcf3c5fd229"} err="failed to get container status \"f98edba9123fda937d7ad4a60357772d3d424d59ab9f581f62837bcf3c5fd229\": rpc error: code = NotFound desc = could not find container \"f98edba9123fda937d7ad4a60357772d3d424d59ab9f581f62837bcf3c5fd229\": container with ID starting with f98edba9123fda937d7ad4a60357772d3d424d59ab9f581f62837bcf3c5fd229 not found: ID does not exist" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.663234 5136 scope.go:117] "RemoveContainer" containerID="32962fd8461e45f8def763e9f82cc8aede01d4f696222a24a4de4aaaec3639ed" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.669833 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 20 08:53:47 crc kubenswrapper[5136]: E0320 08:53:47.670289 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db04162b-4913-4acc-b387-d7324202a05b" containerName="nova-manage" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.670308 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="db04162b-4913-4acc-b387-d7324202a05b" containerName="nova-manage" Mar 20 08:53:47 crc kubenswrapper[5136]: E0320 08:53:47.670322 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-api" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.670329 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-api" Mar 20 08:53:47 crc kubenswrapper[5136]: E0320 08:53:47.670344 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-log" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.670349 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-log" Mar 20 08:53:47 crc kubenswrapper[5136]: 
E0320 08:53:47.670360 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-metadata" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.670366 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-metadata" Mar 20 08:53:47 crc kubenswrapper[5136]: E0320 08:53:47.670392 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-log" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.670398 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-log" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.670556 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-metadata" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.670571 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-log" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.670581 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="db04162b-4913-4acc-b387-d7324202a05b" containerName="nova-manage" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.670593 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-log" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.670602 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-api" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.671566 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.678263 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.682539 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.694025 5136 scope.go:117] "RemoveContainer" containerID="96734219fb7514c4b99e79b7d0fa025cf134e8f0dbe5153409e89a642d0232e0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.705230 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-config-data\") pod \"b168d83c-bd4d-4187-915f-59b00d213a23\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.705274 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls6gj\" (UniqueName: \"kubernetes.io/projected/b168d83c-bd4d-4187-915f-59b00d213a23-kube-api-access-ls6gj\") pod \"b168d83c-bd4d-4187-915f-59b00d213a23\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.705316 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-nova-metadata-tls-certs\") pod \"b168d83c-bd4d-4187-915f-59b00d213a23\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.705394 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-combined-ca-bundle\") pod \"b168d83c-bd4d-4187-915f-59b00d213a23\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " Mar 20 08:53:47 
crc kubenswrapper[5136]: I0320 08:53:47.705432 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b168d83c-bd4d-4187-915f-59b00d213a23-logs\") pod \"b168d83c-bd4d-4187-915f-59b00d213a23\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.706570 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b168d83c-bd4d-4187-915f-59b00d213a23-logs" (OuterVolumeSpecName: "logs") pod "b168d83c-bd4d-4187-915f-59b00d213a23" (UID: "b168d83c-bd4d-4187-915f-59b00d213a23"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.707460 5136 scope.go:117] "RemoveContainer" containerID="32962fd8461e45f8def763e9f82cc8aede01d4f696222a24a4de4aaaec3639ed" Mar 20 08:53:47 crc kubenswrapper[5136]: E0320 08:53:47.707968 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32962fd8461e45f8def763e9f82cc8aede01d4f696222a24a4de4aaaec3639ed\": container with ID starting with 32962fd8461e45f8def763e9f82cc8aede01d4f696222a24a4de4aaaec3639ed not found: ID does not exist" containerID="32962fd8461e45f8def763e9f82cc8aede01d4f696222a24a4de4aaaec3639ed" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.708004 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32962fd8461e45f8def763e9f82cc8aede01d4f696222a24a4de4aaaec3639ed"} err="failed to get container status \"32962fd8461e45f8def763e9f82cc8aede01d4f696222a24a4de4aaaec3639ed\": rpc error: code = NotFound desc = could not find container \"32962fd8461e45f8def763e9f82cc8aede01d4f696222a24a4de4aaaec3639ed\": container with ID starting with 32962fd8461e45f8def763e9f82cc8aede01d4f696222a24a4de4aaaec3639ed not found: ID does not exist" Mar 20 08:53:47 crc 
kubenswrapper[5136]: I0320 08:53:47.708028 5136 scope.go:117] "RemoveContainer" containerID="96734219fb7514c4b99e79b7d0fa025cf134e8f0dbe5153409e89a642d0232e0" Mar 20 08:53:47 crc kubenswrapper[5136]: E0320 08:53:47.708455 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96734219fb7514c4b99e79b7d0fa025cf134e8f0dbe5153409e89a642d0232e0\": container with ID starting with 96734219fb7514c4b99e79b7d0fa025cf134e8f0dbe5153409e89a642d0232e0 not found: ID does not exist" containerID="96734219fb7514c4b99e79b7d0fa025cf134e8f0dbe5153409e89a642d0232e0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.708475 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96734219fb7514c4b99e79b7d0fa025cf134e8f0dbe5153409e89a642d0232e0"} err="failed to get container status \"96734219fb7514c4b99e79b7d0fa025cf134e8f0dbe5153409e89a642d0232e0\": rpc error: code = NotFound desc = could not find container \"96734219fb7514c4b99e79b7d0fa025cf134e8f0dbe5153409e89a642d0232e0\": container with ID starting with 96734219fb7514c4b99e79b7d0fa025cf134e8f0dbe5153409e89a642d0232e0 not found: ID does not exist" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.709172 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b168d83c-bd4d-4187-915f-59b00d213a23-kube-api-access-ls6gj" (OuterVolumeSpecName: "kube-api-access-ls6gj") pod "b168d83c-bd4d-4187-915f-59b00d213a23" (UID: "b168d83c-bd4d-4187-915f-59b00d213a23"). InnerVolumeSpecName "kube-api-access-ls6gj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.726653 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b168d83c-bd4d-4187-915f-59b00d213a23" (UID: "b168d83c-bd4d-4187-915f-59b00d213a23"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.730187 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-config-data" (OuterVolumeSpecName: "config-data") pod "b168d83c-bd4d-4187-915f-59b00d213a23" (UID: "b168d83c-bd4d-4187-915f-59b00d213a23"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.807397 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d40dc0-780a-4792-bbbe-d8867e1b2749-config-data\") pod \"nova-api-0\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " pod="openstack/nova-api-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.807519 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45d40dc0-780a-4792-bbbe-d8867e1b2749-logs\") pod \"nova-api-0\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " pod="openstack/nova-api-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.807566 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d40dc0-780a-4792-bbbe-d8867e1b2749-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " 
pod="openstack/nova-api-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.807592 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x56v2\" (UniqueName: \"kubernetes.io/projected/45d40dc0-780a-4792-bbbe-d8867e1b2749-kube-api-access-x56v2\") pod \"nova-api-0\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " pod="openstack/nova-api-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.807946 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.807985 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls6gj\" (UniqueName: \"kubernetes.io/projected/b168d83c-bd4d-4187-915f-59b00d213a23-kube-api-access-ls6gj\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.807995 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.808005 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b168d83c-bd4d-4187-915f-59b00d213a23-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.909597 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d40dc0-780a-4792-bbbe-d8867e1b2749-config-data\") pod \"nova-api-0\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " pod="openstack/nova-api-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.909761 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/45d40dc0-780a-4792-bbbe-d8867e1b2749-logs\") pod \"nova-api-0\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " pod="openstack/nova-api-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.909842 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d40dc0-780a-4792-bbbe-d8867e1b2749-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " pod="openstack/nova-api-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.909878 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x56v2\" (UniqueName: \"kubernetes.io/projected/45d40dc0-780a-4792-bbbe-d8867e1b2749-kube-api-access-x56v2\") pod \"nova-api-0\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " pod="openstack/nova-api-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.910311 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45d40dc0-780a-4792-bbbe-d8867e1b2749-logs\") pod \"nova-api-0\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " pod="openstack/nova-api-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.912798 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d40dc0-780a-4792-bbbe-d8867e1b2749-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " pod="openstack/nova-api-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.913009 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d40dc0-780a-4792-bbbe-d8867e1b2749-config-data\") pod \"nova-api-0\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " pod="openstack/nova-api-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.021150 5136 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b168d83c-bd4d-4187-915f-59b00d213a23" (UID: "b168d83c-bd4d-4187-915f-59b00d213a23"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.024139 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x56v2\" (UniqueName: \"kubernetes.io/projected/45d40dc0-780a-4792-bbbe-d8867e1b2749-kube-api-access-x56v2\") pod \"nova-api-0\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " pod="openstack/nova-api-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.114522 5136 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.233862 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.244183 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.257015 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.258764 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.261763 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.262086 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.273630 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.297388 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.328582 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fba581c3-e77a-4db7-ac50-bdb17291b2c7-logs\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.328688 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-config-data\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.328990 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.329258 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-jvh5j\" (UniqueName: \"kubernetes.io/projected/fba581c3-e77a-4db7-ac50-bdb17291b2c7-kube-api-access-jvh5j\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.329311 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.411484 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" path="/var/lib/kubelet/pods/579d7134-2752-49f9-b511-ec4c1c43e855/volumes" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.412761 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" path="/var/lib/kubelet/pods/b168d83c-bd4d-4187-915f-59b00d213a23/volumes" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.430572 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fba581c3-e77a-4db7-ac50-bdb17291b2c7-logs\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.430623 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-config-data\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.430686 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.430763 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvh5j\" (UniqueName: \"kubernetes.io/projected/fba581c3-e77a-4db7-ac50-bdb17291b2c7-kube-api-access-jvh5j\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.430780 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.442576 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.443447 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-config-data\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.443482 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fba581c3-e77a-4db7-ac50-bdb17291b2c7-logs\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " 
pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.444823 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.453455 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvh5j\" (UniqueName: \"kubernetes.io/projected/fba581c3-e77a-4db7-ac50-bdb17291b2c7-kube-api-access-jvh5j\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.577074 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: E0320 08:53:48.746233 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:48 crc kubenswrapper[5136]: E0320 08:53:48.747831 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:48 crc kubenswrapper[5136]: E0320 08:53:48.749484 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:48 crc kubenswrapper[5136]: E0320 08:53:48.749522 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="8fbe2855-6fbb-40f0-bea7-43b853e673ba" containerName="nova-scheduler-scheduler" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.804366 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:53:49 crc kubenswrapper[5136]: I0320 08:53:49.028950 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 08:53:49 crc kubenswrapper[5136]: W0320 08:53:49.035843 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfba581c3_e77a_4db7_ac50_bdb17291b2c7.slice/crio-a030a1b8ce6d7dd841b09834af231537b00b2434ef87a4f05635fba547adb80f WatchSource:0}: Error finding container a030a1b8ce6d7dd841b09834af231537b00b2434ef87a4f05635fba547adb80f: Status 404 returned error can't find the container with id a030a1b8ce6d7dd841b09834af231537b00b2434ef87a4f05635fba547adb80f Mar 20 08:53:49 crc kubenswrapper[5136]: I0320 08:53:49.617960 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45d40dc0-780a-4792-bbbe-d8867e1b2749","Type":"ContainerStarted","Data":"d96cc1cad235578f9a36cbb8074bac1ad91922ab580810c458bfd971b92cdfcb"} Mar 20 08:53:49 crc kubenswrapper[5136]: I0320 08:53:49.618273 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45d40dc0-780a-4792-bbbe-d8867e1b2749","Type":"ContainerStarted","Data":"0d63faabf20df3828f5eaa82104cf523eed9c8efd7963ca916b2fdfc33a5b34b"} Mar 20 08:53:49 crc kubenswrapper[5136]: I0320 
08:53:49.618292 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45d40dc0-780a-4792-bbbe-d8867e1b2749","Type":"ContainerStarted","Data":"68f8bcef42fc4d6d03adc36790caeff5f36c1bbf1af4d1021355e12bffa62849"} Mar 20 08:53:49 crc kubenswrapper[5136]: I0320 08:53:49.623385 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fba581c3-e77a-4db7-ac50-bdb17291b2c7","Type":"ContainerStarted","Data":"1e2a347a5b7fd1a421ed6a7c665114567c818ec00d533ffab87b5e587c7ecf89"} Mar 20 08:53:49 crc kubenswrapper[5136]: I0320 08:53:49.623416 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fba581c3-e77a-4db7-ac50-bdb17291b2c7","Type":"ContainerStarted","Data":"cd5663a9b617be114b64e32a8582baab8d6015f76d7bc3afb172624a4c98b3c7"} Mar 20 08:53:49 crc kubenswrapper[5136]: I0320 08:53:49.623428 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fba581c3-e77a-4db7-ac50-bdb17291b2c7","Type":"ContainerStarted","Data":"a030a1b8ce6d7dd841b09834af231537b00b2434ef87a4f05635fba547adb80f"} Mar 20 08:53:49 crc kubenswrapper[5136]: I0320 08:53:49.645590 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.645543489 podStartE2EDuration="2.645543489s" podCreationTimestamp="2026-03-20 08:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:53:49.634474006 +0000 UTC m=+7461.893785157" watchObservedRunningTime="2026-03-20 08:53:49.645543489 +0000 UTC m=+7461.904854640" Mar 20 08:53:49 crc kubenswrapper[5136]: I0320 08:53:49.688177 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.688154907 podStartE2EDuration="1.688154907s" podCreationTimestamp="2026-03-20 08:53:48 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:53:49.668694645 +0000 UTC m=+7461.928005816" watchObservedRunningTime="2026-03-20 08:53:49.688154907 +0000 UTC m=+7461.947466058" Mar 20 08:53:53 crc kubenswrapper[5136]: E0320 08:53:53.746907 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:53 crc kubenswrapper[5136]: E0320 08:53:53.749236 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:53 crc kubenswrapper[5136]: E0320 08:53:53.750540 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:53 crc kubenswrapper[5136]: E0320 08:53:53.750573 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="8fbe2855-6fbb-40f0-bea7-43b853e673ba" containerName="nova-scheduler-scheduler" Mar 20 08:53:58 crc kubenswrapper[5136]: I0320 08:53:58.297997 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 
08:53:58 crc kubenswrapper[5136]: I0320 08:53:58.299504 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 08:53:58 crc kubenswrapper[5136]: I0320 08:53:58.578116 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 20 08:53:58 crc kubenswrapper[5136]: I0320 08:53:58.578705 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 20 08:53:58 crc kubenswrapper[5136]: E0320 08:53:58.746584 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:58 crc kubenswrapper[5136]: E0320 08:53:58.747916 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:58 crc kubenswrapper[5136]: E0320 08:53:58.749280 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:58 crc kubenswrapper[5136]: E0320 08:53:58.749346 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" 
podUID="8fbe2855-6fbb-40f0-bea7-43b853e673ba" containerName="nova-scheduler-scheduler" Mar 20 08:53:59 crc kubenswrapper[5136]: I0320 08:53:59.381100 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="45d40dc0-780a-4792-bbbe-d8867e1b2749" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.143:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:59 crc kubenswrapper[5136]: I0320 08:53:59.381122 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="45d40dc0-780a-4792-bbbe-d8867e1b2749" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.143:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:59 crc kubenswrapper[5136]: I0320 08:53:59.590962 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fba581c3-e77a-4db7-ac50-bdb17291b2c7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.144:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:59 crc kubenswrapper[5136]: I0320 08:53:59.590973 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fba581c3-e77a-4db7-ac50-bdb17291b2c7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.144:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 08:54:00 crc kubenswrapper[5136]: I0320 08:54:00.163416 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566614-wvfxr"] Mar 20 08:54:00 crc kubenswrapper[5136]: I0320 08:54:00.164483 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566614-wvfxr" Mar 20 08:54:00 crc kubenswrapper[5136]: I0320 08:54:00.168960 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:54:00 crc kubenswrapper[5136]: I0320 08:54:00.169109 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:54:00 crc kubenswrapper[5136]: I0320 08:54:00.174510 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566614-wvfxr"] Mar 20 08:54:00 crc kubenswrapper[5136]: I0320 08:54:00.175183 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:54:00 crc kubenswrapper[5136]: I0320 08:54:00.264481 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjvkh\" (UniqueName: \"kubernetes.io/projected/746f2ae5-dabf-431a-b344-011a75049862-kube-api-access-hjvkh\") pod \"auto-csr-approver-29566614-wvfxr\" (UID: \"746f2ae5-dabf-431a-b344-011a75049862\") " pod="openshift-infra/auto-csr-approver-29566614-wvfxr" Mar 20 08:54:00 crc kubenswrapper[5136]: I0320 08:54:00.366262 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjvkh\" (UniqueName: \"kubernetes.io/projected/746f2ae5-dabf-431a-b344-011a75049862-kube-api-access-hjvkh\") pod \"auto-csr-approver-29566614-wvfxr\" (UID: \"746f2ae5-dabf-431a-b344-011a75049862\") " pod="openshift-infra/auto-csr-approver-29566614-wvfxr" Mar 20 08:54:00 crc kubenswrapper[5136]: I0320 08:54:00.391116 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjvkh\" (UniqueName: \"kubernetes.io/projected/746f2ae5-dabf-431a-b344-011a75049862-kube-api-access-hjvkh\") pod \"auto-csr-approver-29566614-wvfxr\" (UID: \"746f2ae5-dabf-431a-b344-011a75049862\") " 
pod="openshift-infra/auto-csr-approver-29566614-wvfxr" Mar 20 08:54:00 crc kubenswrapper[5136]: I0320 08:54:00.507503 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566614-wvfxr" Mar 20 08:54:00 crc kubenswrapper[5136]: I0320 08:54:00.972989 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566614-wvfxr"] Mar 20 08:54:01 crc kubenswrapper[5136]: I0320 08:54:01.747020 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566614-wvfxr" event={"ID":"746f2ae5-dabf-431a-b344-011a75049862","Type":"ContainerStarted","Data":"ae22a92da0932b82609be837ba8aac293ea5e9383babbe89937642d7e6fa4ab1"} Mar 20 08:54:02 crc kubenswrapper[5136]: I0320 08:54:02.770160 5136 generic.go:334] "Generic (PLEG): container finished" podID="746f2ae5-dabf-431a-b344-011a75049862" containerID="b8341630a66939232813fa3ca2eab063f076fc3ac4ee1803ba6693cd8bb7a98d" exitCode=0 Mar 20 08:54:02 crc kubenswrapper[5136]: I0320 08:54:02.770243 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566614-wvfxr" event={"ID":"746f2ae5-dabf-431a-b344-011a75049862","Type":"ContainerDied","Data":"b8341630a66939232813fa3ca2eab063f076fc3ac4ee1803ba6693cd8bb7a98d"} Mar 20 08:54:03 crc kubenswrapper[5136]: E0320 08:54:03.745965 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4 is running failed: container process not found" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:54:03 crc kubenswrapper[5136]: E0320 08:54:03.746301 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if 
PID of 49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4 is running failed: container process not found" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:54:03 crc kubenswrapper[5136]: E0320 08:54:03.746649 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4 is running failed: container process not found" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:54:03 crc kubenswrapper[5136]: E0320 08:54:03.746684 5136 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="8fbe2855-6fbb-40f0-bea7-43b853e673ba" containerName="nova-scheduler-scheduler" Mar 20 08:54:03 crc kubenswrapper[5136]: I0320 08:54:03.809017 5136 generic.go:334] "Generic (PLEG): container finished" podID="8fbe2855-6fbb-40f0-bea7-43b853e673ba" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" exitCode=137 Mar 20 08:54:03 crc kubenswrapper[5136]: I0320 08:54:03.811021 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8fbe2855-6fbb-40f0-bea7-43b853e673ba","Type":"ContainerDied","Data":"49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4"} Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.047731 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.127286 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566614-wvfxr" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.141332 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fbe2855-6fbb-40f0-bea7-43b853e673ba-config-data\") pod \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\" (UID: \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\") " Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.141429 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjjhz\" (UniqueName: \"kubernetes.io/projected/8fbe2855-6fbb-40f0-bea7-43b853e673ba-kube-api-access-cjjhz\") pod \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\" (UID: \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\") " Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.141546 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fbe2855-6fbb-40f0-bea7-43b853e673ba-combined-ca-bundle\") pod \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\" (UID: \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\") " Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.148218 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fbe2855-6fbb-40f0-bea7-43b853e673ba-kube-api-access-cjjhz" (OuterVolumeSpecName: "kube-api-access-cjjhz") pod "8fbe2855-6fbb-40f0-bea7-43b853e673ba" (UID: "8fbe2855-6fbb-40f0-bea7-43b853e673ba"). InnerVolumeSpecName "kube-api-access-cjjhz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.182036 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fbe2855-6fbb-40f0-bea7-43b853e673ba-config-data" (OuterVolumeSpecName: "config-data") pod "8fbe2855-6fbb-40f0-bea7-43b853e673ba" (UID: "8fbe2855-6fbb-40f0-bea7-43b853e673ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.188658 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fbe2855-6fbb-40f0-bea7-43b853e673ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fbe2855-6fbb-40f0-bea7-43b853e673ba" (UID: "8fbe2855-6fbb-40f0-bea7-43b853e673ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.242946 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjvkh\" (UniqueName: \"kubernetes.io/projected/746f2ae5-dabf-431a-b344-011a75049862-kube-api-access-hjvkh\") pod \"746f2ae5-dabf-431a-b344-011a75049862\" (UID: \"746f2ae5-dabf-431a-b344-011a75049862\") " Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.243418 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjjhz\" (UniqueName: \"kubernetes.io/projected/8fbe2855-6fbb-40f0-bea7-43b853e673ba-kube-api-access-cjjhz\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.243441 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fbe2855-6fbb-40f0-bea7-43b853e673ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.243454 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/8fbe2855-6fbb-40f0-bea7-43b853e673ba-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.245966 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/746f2ae5-dabf-431a-b344-011a75049862-kube-api-access-hjvkh" (OuterVolumeSpecName: "kube-api-access-hjvkh") pod "746f2ae5-dabf-431a-b344-011a75049862" (UID: "746f2ae5-dabf-431a-b344-011a75049862"). InnerVolumeSpecName "kube-api-access-hjvkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.345507 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjvkh\" (UniqueName: \"kubernetes.io/projected/746f2ae5-dabf-431a-b344-011a75049862-kube-api-access-hjvkh\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.822444 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.822512 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8fbe2855-6fbb-40f0-bea7-43b853e673ba","Type":"ContainerDied","Data":"06eb0e932ffea6d92e18230014df407dbddf53d2394ca0952871e976aa85a7c5"} Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.822871 5136 scope.go:117] "RemoveContainer" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.827008 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566614-wvfxr" event={"ID":"746f2ae5-dabf-431a-b344-011a75049862","Type":"ContainerDied","Data":"ae22a92da0932b82609be837ba8aac293ea5e9383babbe89937642d7e6fa4ab1"} Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.827039 5136 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ae22a92da0932b82609be837ba8aac293ea5e9383babbe89937642d7e6fa4ab1" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.827075 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566614-wvfxr" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.844790 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.853311 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.871960 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 08:54:04 crc kubenswrapper[5136]: E0320 08:54:04.872351 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fbe2855-6fbb-40f0-bea7-43b853e673ba" containerName="nova-scheduler-scheduler" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.872367 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fbe2855-6fbb-40f0-bea7-43b853e673ba" containerName="nova-scheduler-scheduler" Mar 20 08:54:04 crc kubenswrapper[5136]: E0320 08:54:04.872395 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="746f2ae5-dabf-431a-b344-011a75049862" containerName="oc" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.872402 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="746f2ae5-dabf-431a-b344-011a75049862" containerName="oc" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.872563 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fbe2855-6fbb-40f0-bea7-43b853e673ba" containerName="nova-scheduler-scheduler" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.872584 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="746f2ae5-dabf-431a-b344-011a75049862" containerName="oc" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.873261 5136 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.877144 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.883764 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.961206 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\") " pod="openstack/nova-scheduler-0" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.961510 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pchhm\" (UniqueName: \"kubernetes.io/projected/41ed7c59-18ee-44ec-8068-ccc9e82485a6-kube-api-access-pchhm\") pod \"nova-scheduler-0\" (UID: \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\") " pod="openstack/nova-scheduler-0" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.961751 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-config-data\") pod \"nova-scheduler-0\" (UID: \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\") " pod="openstack/nova-scheduler-0" Mar 20 08:54:05 crc kubenswrapper[5136]: I0320 08:54:05.063421 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-config-data\") pod \"nova-scheduler-0\" (UID: \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\") " pod="openstack/nova-scheduler-0" Mar 20 08:54:05 crc kubenswrapper[5136]: I0320 08:54:05.063527 
5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\") " pod="openstack/nova-scheduler-0" Mar 20 08:54:05 crc kubenswrapper[5136]: I0320 08:54:05.063631 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pchhm\" (UniqueName: \"kubernetes.io/projected/41ed7c59-18ee-44ec-8068-ccc9e82485a6-kube-api-access-pchhm\") pod \"nova-scheduler-0\" (UID: \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\") " pod="openstack/nova-scheduler-0" Mar 20 08:54:05 crc kubenswrapper[5136]: I0320 08:54:05.076677 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\") " pod="openstack/nova-scheduler-0" Mar 20 08:54:05 crc kubenswrapper[5136]: I0320 08:54:05.078585 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-config-data\") pod \"nova-scheduler-0\" (UID: \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\") " pod="openstack/nova-scheduler-0" Mar 20 08:54:05 crc kubenswrapper[5136]: I0320 08:54:05.083859 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pchhm\" (UniqueName: \"kubernetes.io/projected/41ed7c59-18ee-44ec-8068-ccc9e82485a6-kube-api-access-pchhm\") pod \"nova-scheduler-0\" (UID: \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\") " pod="openstack/nova-scheduler-0" Mar 20 08:54:05 crc kubenswrapper[5136]: I0320 08:54:05.195731 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566608-tdwn4"] Mar 20 08:54:05 crc kubenswrapper[5136]: I0320 
08:54:05.202303 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 08:54:05 crc kubenswrapper[5136]: I0320 08:54:05.204030 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566608-tdwn4"] Mar 20 08:54:05 crc kubenswrapper[5136]: I0320 08:54:05.457321 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 08:54:05 crc kubenswrapper[5136]: W0320 08:54:05.462954 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41ed7c59_18ee_44ec_8068_ccc9e82485a6.slice/crio-78428f788f49ec304b60513d248e5c1585ac2ca613eb54e675058865189a70f5 WatchSource:0}: Error finding container 78428f788f49ec304b60513d248e5c1585ac2ca613eb54e675058865189a70f5: Status 404 returned error can't find the container with id 78428f788f49ec304b60513d248e5c1585ac2ca613eb54e675058865189a70f5 Mar 20 08:54:05 crc kubenswrapper[5136]: I0320 08:54:05.836565 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"41ed7c59-18ee-44ec-8068-ccc9e82485a6","Type":"ContainerStarted","Data":"fd2177fb853527e451d4514e1f42c3c8bc13b2a57df6321d454b27c1bc0cfe79"} Mar 20 08:54:05 crc kubenswrapper[5136]: I0320 08:54:05.836946 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"41ed7c59-18ee-44ec-8068-ccc9e82485a6","Type":"ContainerStarted","Data":"78428f788f49ec304b60513d248e5c1585ac2ca613eb54e675058865189a70f5"} Mar 20 08:54:05 crc kubenswrapper[5136]: I0320 08:54:05.858507 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.858488904 podStartE2EDuration="1.858488904s" podCreationTimestamp="2026-03-20 08:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-20 08:54:05.852073616 +0000 UTC m=+7478.111384787" watchObservedRunningTime="2026-03-20 08:54:05.858488904 +0000 UTC m=+7478.117800055" Mar 20 08:54:06 crc kubenswrapper[5136]: I0320 08:54:06.298067 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 20 08:54:06 crc kubenswrapper[5136]: I0320 08:54:06.298142 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 20 08:54:06 crc kubenswrapper[5136]: I0320 08:54:06.427024 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fbe2855-6fbb-40f0-bea7-43b853e673ba" path="/var/lib/kubelet/pods/8fbe2855-6fbb-40f0-bea7-43b853e673ba/volumes" Mar 20 08:54:06 crc kubenswrapper[5136]: I0320 08:54:06.427850 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef82e0a5-a043-48d9-82d6-132dbf0e9b74" path="/var/lib/kubelet/pods/ef82e0a5-a043-48d9-82d6-132dbf0e9b74/volumes" Mar 20 08:54:06 crc kubenswrapper[5136]: I0320 08:54:06.577906 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 20 08:54:06 crc kubenswrapper[5136]: I0320 08:54:06.579064 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 20 08:54:08 crc kubenswrapper[5136]: I0320 08:54:08.335300 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 20 08:54:08 crc kubenswrapper[5136]: I0320 08:54:08.366229 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 20 08:54:08 crc kubenswrapper[5136]: I0320 08:54:08.369686 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 20 08:54:08 crc kubenswrapper[5136]: I0320 08:54:08.583902 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 20 08:54:08 crc 
kubenswrapper[5136]: I0320 08:54:08.584410 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 20 08:54:08 crc kubenswrapper[5136]: I0320 08:54:08.591302 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 20 08:54:08 crc kubenswrapper[5136]: I0320 08:54:08.880384 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 20 08:54:08 crc kubenswrapper[5136]: I0320 08:54:08.881615 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.051089 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65bbbb4567-25rj9"] Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.054616 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.090022 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65bbbb4567-25rj9"] Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.148739 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcrdf\" (UniqueName: \"kubernetes.io/projected/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-kube-api-access-kcrdf\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.148821 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-ovsdbserver-sb\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 
08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.148903 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-ovsdbserver-nb\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.148926 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-config\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.149030 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-dns-svc\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.250879 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcrdf\" (UniqueName: \"kubernetes.io/projected/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-kube-api-access-kcrdf\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.250993 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-ovsdbserver-sb\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc 
kubenswrapper[5136]: I0320 08:54:09.251080 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-ovsdbserver-nb\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.251110 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-config\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.251168 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-dns-svc\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.252716 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-dns-svc\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.253477 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-ovsdbserver-nb\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.253972 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-ovsdbserver-sb\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.254795 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-config\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.287719 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcrdf\" (UniqueName: \"kubernetes.io/projected/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-kube-api-access-kcrdf\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.378956 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.896586 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65bbbb4567-25rj9"] Mar 20 08:54:09 crc kubenswrapper[5136]: W0320 08:54:09.900944 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf93080a1_9819_48ad_a84d_ddc2d6ffe5e6.slice/crio-8c1f27408a6ade394d6e2ab0bd5959f877552185e88f9a7307e4a1f9978accc6 WatchSource:0}: Error finding container 8c1f27408a6ade394d6e2ab0bd5959f877552185e88f9a7307e4a1f9978accc6: Status 404 returned error can't find the container with id 8c1f27408a6ade394d6e2ab0bd5959f877552185e88f9a7307e4a1f9978accc6 Mar 20 08:54:10 crc kubenswrapper[5136]: I0320 08:54:10.202517 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 20 08:54:10 crc kubenswrapper[5136]: I0320 08:54:10.893248 5136 generic.go:334] "Generic (PLEG): container finished" podID="f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" containerID="43fd5cf8b218da2033f5722a025f01d99bffa05bc2ffc6e027121deb4f58ff01" exitCode=0 Mar 20 08:54:10 crc kubenswrapper[5136]: I0320 08:54:10.893405 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" event={"ID":"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6","Type":"ContainerDied","Data":"43fd5cf8b218da2033f5722a025f01d99bffa05bc2ffc6e027121deb4f58ff01"} Mar 20 08:54:10 crc kubenswrapper[5136]: I0320 08:54:10.893682 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" event={"ID":"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6","Type":"ContainerStarted","Data":"8c1f27408a6ade394d6e2ab0bd5959f877552185e88f9a7307e4a1f9978accc6"} Mar 20 08:54:11 crc kubenswrapper[5136]: I0320 08:54:11.617387 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:54:11 crc 
kubenswrapper[5136]: I0320 08:54:11.903184 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" event={"ID":"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6","Type":"ContainerStarted","Data":"dc5c877955a35dec3f0759a463cf453fa6e734d4ecff48bdc9d1ab0f3eab0299"} Mar 20 08:54:11 crc kubenswrapper[5136]: I0320 08:54:11.903375 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="45d40dc0-780a-4792-bbbe-d8867e1b2749" containerName="nova-api-api" containerID="cri-o://d96cc1cad235578f9a36cbb8074bac1ad91922ab580810c458bfd971b92cdfcb" gracePeriod=30 Mar 20 08:54:11 crc kubenswrapper[5136]: I0320 08:54:11.903588 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="45d40dc0-780a-4792-bbbe-d8867e1b2749" containerName="nova-api-log" containerID="cri-o://0d63faabf20df3828f5eaa82104cf523eed9c8efd7963ca916b2fdfc33a5b34b" gracePeriod=30 Mar 20 08:54:11 crc kubenswrapper[5136]: I0320 08:54:11.930590 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" podStartSLOduration=2.930575048 podStartE2EDuration="2.930575048s" podCreationTimestamp="2026-03-20 08:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:54:11.923987194 +0000 UTC m=+7484.183298345" watchObservedRunningTime="2026-03-20 08:54:11.930575048 +0000 UTC m=+7484.189886199" Mar 20 08:54:12 crc kubenswrapper[5136]: I0320 08:54:12.913093 5136 generic.go:334] "Generic (PLEG): container finished" podID="45d40dc0-780a-4792-bbbe-d8867e1b2749" containerID="0d63faabf20df3828f5eaa82104cf523eed9c8efd7963ca916b2fdfc33a5b34b" exitCode=143 Mar 20 08:54:12 crc kubenswrapper[5136]: I0320 08:54:12.913173 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"45d40dc0-780a-4792-bbbe-d8867e1b2749","Type":"ContainerDied","Data":"0d63faabf20df3828f5eaa82104cf523eed9c8efd7963ca916b2fdfc33a5b34b"} Mar 20 08:54:12 crc kubenswrapper[5136]: I0320 08:54:12.913791 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.202498 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.234013 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.822524 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.822579 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.911887 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.938482 5136 generic.go:334] "Generic (PLEG): container finished" podID="45d40dc0-780a-4792-bbbe-d8867e1b2749" containerID="d96cc1cad235578f9a36cbb8074bac1ad91922ab580810c458bfd971b92cdfcb" exitCode=0 Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.938547 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.938562 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45d40dc0-780a-4792-bbbe-d8867e1b2749","Type":"ContainerDied","Data":"d96cc1cad235578f9a36cbb8074bac1ad91922ab580810c458bfd971b92cdfcb"} Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.938623 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45d40dc0-780a-4792-bbbe-d8867e1b2749","Type":"ContainerDied","Data":"68f8bcef42fc4d6d03adc36790caeff5f36c1bbf1af4d1021355e12bffa62849"} Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.938657 5136 scope.go:117] "RemoveContainer" containerID="d96cc1cad235578f9a36cbb8074bac1ad91922ab580810c458bfd971b92cdfcb" Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.978484 5136 scope.go:117] "RemoveContainer" containerID="0d63faabf20df3828f5eaa82104cf523eed9c8efd7963ca916b2fdfc33a5b34b" Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.981785 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.981795 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45d40dc0-780a-4792-bbbe-d8867e1b2749-logs\") pod \"45d40dc0-780a-4792-bbbe-d8867e1b2749\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.982367 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d40dc0-780a-4792-bbbe-d8867e1b2749-combined-ca-bundle\") pod \"45d40dc0-780a-4792-bbbe-d8867e1b2749\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.982445 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-x56v2\" (UniqueName: \"kubernetes.io/projected/45d40dc0-780a-4792-bbbe-d8867e1b2749-kube-api-access-x56v2\") pod \"45d40dc0-780a-4792-bbbe-d8867e1b2749\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.982639 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d40dc0-780a-4792-bbbe-d8867e1b2749-config-data\") pod \"45d40dc0-780a-4792-bbbe-d8867e1b2749\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.983519 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45d40dc0-780a-4792-bbbe-d8867e1b2749-logs" (OuterVolumeSpecName: "logs") pod "45d40dc0-780a-4792-bbbe-d8867e1b2749" (UID: "45d40dc0-780a-4792-bbbe-d8867e1b2749"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.984656 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45d40dc0-780a-4792-bbbe-d8867e1b2749-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.991119 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45d40dc0-780a-4792-bbbe-d8867e1b2749-kube-api-access-x56v2" (OuterVolumeSpecName: "kube-api-access-x56v2") pod "45d40dc0-780a-4792-bbbe-d8867e1b2749" (UID: "45d40dc0-780a-4792-bbbe-d8867e1b2749"). InnerVolumeSpecName "kube-api-access-x56v2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.026760 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d40dc0-780a-4792-bbbe-d8867e1b2749-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45d40dc0-780a-4792-bbbe-d8867e1b2749" (UID: "45d40dc0-780a-4792-bbbe-d8867e1b2749"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.047661 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d40dc0-780a-4792-bbbe-d8867e1b2749-config-data" (OuterVolumeSpecName: "config-data") pod "45d40dc0-780a-4792-bbbe-d8867e1b2749" (UID: "45d40dc0-780a-4792-bbbe-d8867e1b2749"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.086608 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d40dc0-780a-4792-bbbe-d8867e1b2749-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.086646 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d40dc0-780a-4792-bbbe-d8867e1b2749-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.086659 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x56v2\" (UniqueName: \"kubernetes.io/projected/45d40dc0-780a-4792-bbbe-d8867e1b2749-kube-api-access-x56v2\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.118109 5136 scope.go:117] "RemoveContainer" containerID="d96cc1cad235578f9a36cbb8074bac1ad91922ab580810c458bfd971b92cdfcb" Mar 20 08:54:16 crc kubenswrapper[5136]: E0320 08:54:16.118768 5136 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d96cc1cad235578f9a36cbb8074bac1ad91922ab580810c458bfd971b92cdfcb\": container with ID starting with d96cc1cad235578f9a36cbb8074bac1ad91922ab580810c458bfd971b92cdfcb not found: ID does not exist" containerID="d96cc1cad235578f9a36cbb8074bac1ad91922ab580810c458bfd971b92cdfcb" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.118953 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d96cc1cad235578f9a36cbb8074bac1ad91922ab580810c458bfd971b92cdfcb"} err="failed to get container status \"d96cc1cad235578f9a36cbb8074bac1ad91922ab580810c458bfd971b92cdfcb\": rpc error: code = NotFound desc = could not find container \"d96cc1cad235578f9a36cbb8074bac1ad91922ab580810c458bfd971b92cdfcb\": container with ID starting with d96cc1cad235578f9a36cbb8074bac1ad91922ab580810c458bfd971b92cdfcb not found: ID does not exist" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.119045 5136 scope.go:117] "RemoveContainer" containerID="0d63faabf20df3828f5eaa82104cf523eed9c8efd7963ca916b2fdfc33a5b34b" Mar 20 08:54:16 crc kubenswrapper[5136]: E0320 08:54:16.119475 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d63faabf20df3828f5eaa82104cf523eed9c8efd7963ca916b2fdfc33a5b34b\": container with ID starting with 0d63faabf20df3828f5eaa82104cf523eed9c8efd7963ca916b2fdfc33a5b34b not found: ID does not exist" containerID="0d63faabf20df3828f5eaa82104cf523eed9c8efd7963ca916b2fdfc33a5b34b" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.119516 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d63faabf20df3828f5eaa82104cf523eed9c8efd7963ca916b2fdfc33a5b34b"} err="failed to get container status \"0d63faabf20df3828f5eaa82104cf523eed9c8efd7963ca916b2fdfc33a5b34b\": rpc error: code = NotFound 
desc = could not find container \"0d63faabf20df3828f5eaa82104cf523eed9c8efd7963ca916b2fdfc33a5b34b\": container with ID starting with 0d63faabf20df3828f5eaa82104cf523eed9c8efd7963ca916b2fdfc33a5b34b not found: ID does not exist" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.274260 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.290162 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.302313 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 20 08:54:16 crc kubenswrapper[5136]: E0320 08:54:16.302868 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d40dc0-780a-4792-bbbe-d8867e1b2749" containerName="nova-api-log" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.302888 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d40dc0-780a-4792-bbbe-d8867e1b2749" containerName="nova-api-log" Mar 20 08:54:16 crc kubenswrapper[5136]: E0320 08:54:16.302915 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d40dc0-780a-4792-bbbe-d8867e1b2749" containerName="nova-api-api" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.302924 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d40dc0-780a-4792-bbbe-d8867e1b2749" containerName="nova-api-api" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.303168 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="45d40dc0-780a-4792-bbbe-d8867e1b2749" containerName="nova-api-log" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.303199 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="45d40dc0-780a-4792-bbbe-d8867e1b2749" containerName="nova-api-api" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.304251 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.306712 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.307097 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.307142 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.310245 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.392719 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f402e588-3dec-48be-8b5b-5aeaa571b372-logs\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.393650 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-public-tls-certs\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.393712 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.393732 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-config-data\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.393981 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7bfx\" (UniqueName: \"kubernetes.io/projected/f402e588-3dec-48be-8b5b-5aeaa571b372-kube-api-access-v7bfx\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.394093 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.407635 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45d40dc0-780a-4792-bbbe-d8867e1b2749" path="/var/lib/kubelet/pods/45d40dc0-780a-4792-bbbe-d8867e1b2749/volumes" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.496100 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f402e588-3dec-48be-8b5b-5aeaa571b372-logs\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.496361 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-public-tls-certs\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.496500 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.496575 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-config-data\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.496703 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7bfx\" (UniqueName: \"kubernetes.io/projected/f402e588-3dec-48be-8b5b-5aeaa571b372-kube-api-access-v7bfx\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.496790 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.496530 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f402e588-3dec-48be-8b5b-5aeaa571b372-logs\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.501261 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc 
kubenswrapper[5136]: I0320 08:54:16.503028 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-public-tls-certs\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.503235 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-config-data\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.504793 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.516743 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7bfx\" (UniqueName: \"kubernetes.io/projected/f402e588-3dec-48be-8b5b-5aeaa571b372-kube-api-access-v7bfx\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.636997 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 08:54:17 crc kubenswrapper[5136]: I0320 08:54:17.061572 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:54:17 crc kubenswrapper[5136]: I0320 08:54:17.957536 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f402e588-3dec-48be-8b5b-5aeaa571b372","Type":"ContainerStarted","Data":"bd88353eb3bfead6453753b043892b43c76148c22dbdd8749c35d5213cf8d63b"} Mar 20 08:54:17 crc kubenswrapper[5136]: I0320 08:54:17.958172 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f402e588-3dec-48be-8b5b-5aeaa571b372","Type":"ContainerStarted","Data":"69c8be45f764ed420f7bbef558c7c52b3207d932e0c8d1c5e50585f4ba78387d"} Mar 20 08:54:17 crc kubenswrapper[5136]: I0320 08:54:17.958184 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f402e588-3dec-48be-8b5b-5aeaa571b372","Type":"ContainerStarted","Data":"0bd47d70acb181d12064205fc44377458fa88b9280bd44fea4624c5f756f1398"} Mar 20 08:54:17 crc kubenswrapper[5136]: I0320 08:54:17.985546 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.985509481 podStartE2EDuration="1.985509481s" podCreationTimestamp="2026-03-20 08:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:54:17.984579732 +0000 UTC m=+7490.243890913" watchObservedRunningTime="2026-03-20 08:54:17.985509481 +0000 UTC m=+7490.244820672" Mar 20 08:54:19 crc kubenswrapper[5136]: I0320 08:54:19.380758 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:19 crc kubenswrapper[5136]: I0320 08:54:19.444187 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cb7b48dc-fv895"] Mar 20 
08:54:19 crc kubenswrapper[5136]: I0320 08:54:19.444428 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cb7b48dc-fv895" podUID="a53d28b6-bc47-4aa3-a413-3716651dc331" containerName="dnsmasq-dns" containerID="cri-o://d132f78c826ee9737416e6e3fda2794793e3627aa028783f261201774977171d" gracePeriod=10 Mar 20 08:54:19 crc kubenswrapper[5136]: I0320 08:54:19.927453 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:54:19 crc kubenswrapper[5136]: I0320 08:54:19.976160 5136 generic.go:334] "Generic (PLEG): container finished" podID="a53d28b6-bc47-4aa3-a413-3716651dc331" containerID="d132f78c826ee9737416e6e3fda2794793e3627aa028783f261201774977171d" exitCode=0 Mar 20 08:54:19 crc kubenswrapper[5136]: I0320 08:54:19.976221 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:54:19 crc kubenswrapper[5136]: I0320 08:54:19.976218 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb7b48dc-fv895" event={"ID":"a53d28b6-bc47-4aa3-a413-3716651dc331","Type":"ContainerDied","Data":"d132f78c826ee9737416e6e3fda2794793e3627aa028783f261201774977171d"} Mar 20 08:54:19 crc kubenswrapper[5136]: I0320 08:54:19.976313 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb7b48dc-fv895" event={"ID":"a53d28b6-bc47-4aa3-a413-3716651dc331","Type":"ContainerDied","Data":"3a00b31f5b938544931b8ec8a179f9b8845385e169e4d748a404beb81b702299"} Mar 20 08:54:19 crc kubenswrapper[5136]: I0320 08:54:19.976338 5136 scope.go:117] "RemoveContainer" containerID="d132f78c826ee9737416e6e3fda2794793e3627aa028783f261201774977171d" Mar 20 08:54:19 crc kubenswrapper[5136]: I0320 08:54:19.997710 5136 scope.go:117] "RemoveContainer" containerID="e990be7626e99b384aaa45d32667345fde2ab271f0835cd098e333ff70311fa2" Mar 20 08:54:20 crc 
kubenswrapper[5136]: I0320 08:54:20.037028 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-blnd4"] Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.050651 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3614-account-create-update-dp5t6"] Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.055880 5136 scope.go:117] "RemoveContainer" containerID="d132f78c826ee9737416e6e3fda2794793e3627aa028783f261201774977171d" Mar 20 08:54:20 crc kubenswrapper[5136]: E0320 08:54:20.058026 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d132f78c826ee9737416e6e3fda2794793e3627aa028783f261201774977171d\": container with ID starting with d132f78c826ee9737416e6e3fda2794793e3627aa028783f261201774977171d not found: ID does not exist" containerID="d132f78c826ee9737416e6e3fda2794793e3627aa028783f261201774977171d" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.058071 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d132f78c826ee9737416e6e3fda2794793e3627aa028783f261201774977171d"} err="failed to get container status \"d132f78c826ee9737416e6e3fda2794793e3627aa028783f261201774977171d\": rpc error: code = NotFound desc = could not find container \"d132f78c826ee9737416e6e3fda2794793e3627aa028783f261201774977171d\": container with ID starting with d132f78c826ee9737416e6e3fda2794793e3627aa028783f261201774977171d not found: ID does not exist" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.058109 5136 scope.go:117] "RemoveContainer" containerID="e990be7626e99b384aaa45d32667345fde2ab271f0835cd098e333ff70311fa2" Mar 20 08:54:20 crc kubenswrapper[5136]: E0320 08:54:20.058471 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e990be7626e99b384aaa45d32667345fde2ab271f0835cd098e333ff70311fa2\": 
container with ID starting with e990be7626e99b384aaa45d32667345fde2ab271f0835cd098e333ff70311fa2 not found: ID does not exist" containerID="e990be7626e99b384aaa45d32667345fde2ab271f0835cd098e333ff70311fa2" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.058514 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e990be7626e99b384aaa45d32667345fde2ab271f0835cd098e333ff70311fa2"} err="failed to get container status \"e990be7626e99b384aaa45d32667345fde2ab271f0835cd098e333ff70311fa2\": rpc error: code = NotFound desc = could not find container \"e990be7626e99b384aaa45d32667345fde2ab271f0835cd098e333ff70311fa2\": container with ID starting with e990be7626e99b384aaa45d32667345fde2ab271f0835cd098e333ff70311fa2 not found: ID does not exist" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.062083 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-blnd4"] Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.066152 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-config\") pod \"a53d28b6-bc47-4aa3-a413-3716651dc331\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.066284 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-dns-svc\") pod \"a53d28b6-bc47-4aa3-a413-3716651dc331\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.066326 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-ovsdbserver-nb\") pod \"a53d28b6-bc47-4aa3-a413-3716651dc331\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " Mar 20 
08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.066432 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-ovsdbserver-sb\") pod \"a53d28b6-bc47-4aa3-a413-3716651dc331\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.066494 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cn8k\" (UniqueName: \"kubernetes.io/projected/a53d28b6-bc47-4aa3-a413-3716651dc331-kube-api-access-7cn8k\") pod \"a53d28b6-bc47-4aa3-a413-3716651dc331\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.074037 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-3614-account-create-update-dp5t6"] Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.088031 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a53d28b6-bc47-4aa3-a413-3716651dc331-kube-api-access-7cn8k" (OuterVolumeSpecName: "kube-api-access-7cn8k") pod "a53d28b6-bc47-4aa3-a413-3716651dc331" (UID: "a53d28b6-bc47-4aa3-a413-3716651dc331"). InnerVolumeSpecName "kube-api-access-7cn8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.120148 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a53d28b6-bc47-4aa3-a413-3716651dc331" (UID: "a53d28b6-bc47-4aa3-a413-3716651dc331"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.121459 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-config" (OuterVolumeSpecName: "config") pod "a53d28b6-bc47-4aa3-a413-3716651dc331" (UID: "a53d28b6-bc47-4aa3-a413-3716651dc331"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.143352 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a53d28b6-bc47-4aa3-a413-3716651dc331" (UID: "a53d28b6-bc47-4aa3-a413-3716651dc331"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.154541 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a53d28b6-bc47-4aa3-a413-3716651dc331" (UID: "a53d28b6-bc47-4aa3-a413-3716651dc331"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.168411 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.168441 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.168485 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.168496 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cn8k\" (UniqueName: \"kubernetes.io/projected/a53d28b6-bc47-4aa3-a413-3716651dc331-kube-api-access-7cn8k\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.168505 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.305354 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cb7b48dc-fv895"] Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.314626 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cb7b48dc-fv895"] Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.406265 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0749652f-3995-4e34-ba17-55eac4c3530c" path="/var/lib/kubelet/pods/0749652f-3995-4e34-ba17-55eac4c3530c/volumes" Mar 20 08:54:20 crc kubenswrapper[5136]: 
I0320 08:54:20.406773 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fb13f3a-3785-4650-8381-e4d5e6fa7f73" path="/var/lib/kubelet/pods/0fb13f3a-3785-4650-8381-e4d5e6fa7f73/volumes" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.407448 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a53d28b6-bc47-4aa3-a413-3716651dc331" path="/var/lib/kubelet/pods/a53d28b6-bc47-4aa3-a413-3716651dc331/volumes" Mar 20 08:54:26 crc kubenswrapper[5136]: I0320 08:54:26.638064 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 08:54:26 crc kubenswrapper[5136]: I0320 08:54:26.638631 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 08:54:27 crc kubenswrapper[5136]: I0320 08:54:27.659015 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f402e588-3dec-48be-8b5b-5aeaa571b372" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.148:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:54:27 crc kubenswrapper[5136]: I0320 08:54:27.659092 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f402e588-3dec-48be-8b5b-5aeaa571b372" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.148:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:54:32 crc kubenswrapper[5136]: I0320 08:54:32.032479 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-62shw"] Mar 20 08:54:32 crc kubenswrapper[5136]: I0320 08:54:32.041701 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-62shw"] Mar 20 08:54:32 crc kubenswrapper[5136]: I0320 08:54:32.413427 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="21e9b60d-f307-406d-9085-fbd9d8b67cf5" path="/var/lib/kubelet/pods/21e9b60d-f307-406d-9085-fbd9d8b67cf5/volumes" Mar 20 08:54:34 crc kubenswrapper[5136]: I0320 08:54:34.637261 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 20 08:54:34 crc kubenswrapper[5136]: I0320 08:54:34.637547 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 20 08:54:36 crc kubenswrapper[5136]: I0320 08:54:36.648485 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 20 08:54:36 crc kubenswrapper[5136]: I0320 08:54:36.649174 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 20 08:54:36 crc kubenswrapper[5136]: I0320 08:54:36.662214 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 20 08:54:37 crc kubenswrapper[5136]: I0320 08:54:37.169699 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 20 08:54:39 crc kubenswrapper[5136]: I0320 08:54:39.552802 5136 scope.go:117] "RemoveContainer" containerID="ae45294b801e93d47563db9ba4054a170a4f53699928ebbc069e3e19b4610e4f" Mar 20 08:54:39 crc kubenswrapper[5136]: I0320 08:54:39.572702 5136 scope.go:117] "RemoveContainer" containerID="943e6011fb2bb8f85aa7e1232523d7da6d707090421691ca85ab0e7998c29b98" Mar 20 08:54:39 crc kubenswrapper[5136]: I0320 08:54:39.628029 5136 scope.go:117] "RemoveContainer" containerID="fa17c24ddb45b337fff5f936348cd486ca94ec7240798c49bc135753e4d62ff4" Mar 20 08:54:39 crc kubenswrapper[5136]: I0320 08:54:39.671912 5136 scope.go:117] "RemoveContainer" containerID="ea20f727b60adf5691fc1981831b3690eb78cdb64d09999efea953786d4a4eb5" Mar 20 08:54:45 crc kubenswrapper[5136]: I0320 08:54:45.822366 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:54:45 crc kubenswrapper[5136]: I0320 08:54:45.822900 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:54:47 crc kubenswrapper[5136]: I0320 08:54:47.059164 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-645md"] Mar 20 08:54:47 crc kubenswrapper[5136]: I0320 08:54:47.069693 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-645md"] Mar 20 08:54:48 crc kubenswrapper[5136]: I0320 08:54:48.410780 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e023c878-7ddf-478a-9069-85d32b1d5bf9" path="/var/lib/kubelet/pods/e023c878-7ddf-478a-9069-85d32b1d5bf9/volumes" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.010929 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-645f4b9fd9-z58jz"] Mar 20 08:54:49 crc kubenswrapper[5136]: E0320 08:54:49.011376 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a53d28b6-bc47-4aa3-a413-3716651dc331" containerName="init" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.011397 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a53d28b6-bc47-4aa3-a413-3716651dc331" containerName="init" Mar 20 08:54:49 crc kubenswrapper[5136]: E0320 08:54:49.011421 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a53d28b6-bc47-4aa3-a413-3716651dc331" containerName="dnsmasq-dns" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.011430 5136 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a53d28b6-bc47-4aa3-a413-3716651dc331" containerName="dnsmasq-dns" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.011645 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a53d28b6-bc47-4aa3-a413-3716651dc331" containerName="dnsmasq-dns" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.012759 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.016268 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.016509 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.016636 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.016783 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-cz4mq" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.032261 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6a4482e-e21a-4e56-af69-e824ef4708da-scripts\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.032417 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6a4482e-e21a-4e56-af69-e824ef4708da-logs\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.032445 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6a4482e-e21a-4e56-af69-e824ef4708da-config-data\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.032497 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vqxv\" (UniqueName: \"kubernetes.io/projected/d6a4482e-e21a-4e56-af69-e824ef4708da-kube-api-access-7vqxv\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.032547 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6a4482e-e21a-4e56-af69-e824ef4708da-horizon-secret-key\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.045866 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-645f4b9fd9-z58jz"] Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.061589 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.061992 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="416c7b2f-db10-4191-821f-19c79bf4a3b6" containerName="glance-log" containerID="cri-o://de0681385b751e6fbc3db0d7c1a647f36f508772ed5c4a96334e28fe70febb26" gracePeriod=30 Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.062182 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="416c7b2f-db10-4191-821f-19c79bf4a3b6" containerName="glance-httpd" containerID="cri-o://10a9b30b7e2523cc2c1ff704e0f7c5ae56f024de6f82bf322b7b1d7e0001c420" gracePeriod=30 Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.133469 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.133770 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c50cd831-27ab-475b-a608-0558c610394d" containerName="glance-log" containerID="cri-o://ab66d88d3293a40338b7fddfe88ed7191c405a72189f7c0ee894a68ec1597fc3" gracePeriod=30 Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.133901 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c50cd831-27ab-475b-a608-0558c610394d" containerName="glance-httpd" containerID="cri-o://50c452b18075758f0cf7f71c5dd923d9961f9b1f4ea4ea5b12f828970fcbc61c" gracePeriod=30 Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.135538 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6a4482e-e21a-4e56-af69-e824ef4708da-logs\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.135590 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6a4482e-e21a-4e56-af69-e824ef4708da-config-data\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.135652 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vqxv\" (UniqueName: 
\"kubernetes.io/projected/d6a4482e-e21a-4e56-af69-e824ef4708da-kube-api-access-7vqxv\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.135705 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6a4482e-e21a-4e56-af69-e824ef4708da-horizon-secret-key\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.135864 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6a4482e-e21a-4e56-af69-e824ef4708da-scripts\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.136196 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6a4482e-e21a-4e56-af69-e824ef4708da-logs\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.137187 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6a4482e-e21a-4e56-af69-e824ef4708da-config-data\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.137588 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6a4482e-e21a-4e56-af69-e824ef4708da-scripts\") pod \"horizon-645f4b9fd9-z58jz\" (UID: 
\"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.150435 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6a4482e-e21a-4e56-af69-e824ef4708da-horizon-secret-key\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.157196 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vqxv\" (UniqueName: \"kubernetes.io/projected/d6a4482e-e21a-4e56-af69-e824ef4708da-kube-api-access-7vqxv\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.166628 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66d59c77bf-fzn52"] Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.168326 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.195101 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66d59c77bf-fzn52"] Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.237659 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e26843c-d392-464f-9f00-df9da3231a43-logs\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.237729 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e26843c-d392-464f-9f00-df9da3231a43-config-data\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.237802 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9e26843c-d392-464f-9f00-df9da3231a43-horizon-secret-key\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.237849 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jxbz\" (UniqueName: \"kubernetes.io/projected/9e26843c-d392-464f-9f00-df9da3231a43-kube-api-access-4jxbz\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.237886 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/9e26843c-d392-464f-9f00-df9da3231a43-scripts\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.282535 5136 generic.go:334] "Generic (PLEG): container finished" podID="c50cd831-27ab-475b-a608-0558c610394d" containerID="ab66d88d3293a40338b7fddfe88ed7191c405a72189f7c0ee894a68ec1597fc3" exitCode=143 Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.282613 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c50cd831-27ab-475b-a608-0558c610394d","Type":"ContainerDied","Data":"ab66d88d3293a40338b7fddfe88ed7191c405a72189f7c0ee894a68ec1597fc3"} Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.286620 5136 generic.go:334] "Generic (PLEG): container finished" podID="416c7b2f-db10-4191-821f-19c79bf4a3b6" containerID="de0681385b751e6fbc3db0d7c1a647f36f508772ed5c4a96334e28fe70febb26" exitCode=143 Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.286665 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"416c7b2f-db10-4191-821f-19c79bf4a3b6","Type":"ContainerDied","Data":"de0681385b751e6fbc3db0d7c1a647f36f508772ed5c4a96334e28fe70febb26"} Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.339282 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9e26843c-d392-464f-9f00-df9da3231a43-horizon-secret-key\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.339321 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jxbz\" (UniqueName: 
\"kubernetes.io/projected/9e26843c-d392-464f-9f00-df9da3231a43-kube-api-access-4jxbz\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.339362 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e26843c-d392-464f-9f00-df9da3231a43-scripts\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.339432 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e26843c-d392-464f-9f00-df9da3231a43-logs\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.339475 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e26843c-d392-464f-9f00-df9da3231a43-config-data\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.339717 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.340138 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e26843c-d392-464f-9f00-df9da3231a43-logs\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.340393 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e26843c-d392-464f-9f00-df9da3231a43-scripts\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.340726 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e26843c-d392-464f-9f00-df9da3231a43-config-data\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.342841 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9e26843c-d392-464f-9f00-df9da3231a43-horizon-secret-key\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.361847 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jxbz\" (UniqueName: \"kubernetes.io/projected/9e26843c-d392-464f-9f00-df9da3231a43-kube-api-access-4jxbz\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.586504 5136 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.813587 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-645f4b9fd9-z58jz"] Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.824635 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.010153 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66d59c77bf-fzn52"] Mar 20 08:54:50 crc kubenswrapper[5136]: W0320 08:54:50.017621 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e26843c_d392_464f_9f00_df9da3231a43.slice/crio-c9ee7ff456ee288905bd3817b56fde5fab98d219327f5c2603f495ba149fd86f WatchSource:0}: Error finding container c9ee7ff456ee288905bd3817b56fde5fab98d219327f5c2603f495ba149fd86f: Status 404 returned error can't find the container with id c9ee7ff456ee288905bd3817b56fde5fab98d219327f5c2603f495ba149fd86f Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.298157 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-645f4b9fd9-z58jz" event={"ID":"d6a4482e-e21a-4e56-af69-e824ef4708da","Type":"ContainerStarted","Data":"5c552b108729deb67449c97043a73b66ec936eb1de7cf4e53f9755a54666ffef"} Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.299623 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66d59c77bf-fzn52" event={"ID":"9e26843c-d392-464f-9f00-df9da3231a43","Type":"ContainerStarted","Data":"c9ee7ff456ee288905bd3817b56fde5fab98d219327f5c2603f495ba149fd86f"} Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.817041 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-645f4b9fd9-z58jz"] Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.851773 5136 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/horizon-cc6c6d576-wrwl5"] Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.853366 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.864146 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.890311 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cc6c6d576-wrwl5"] Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.908304 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-combined-ca-bundle\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.908368 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/170b9fcc-77b0-41b5-8e99-cd95411287e9-logs\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.908409 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-horizon-tls-certs\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.908431 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/170b9fcc-77b0-41b5-8e99-cd95411287e9-scripts\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.908473 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f297l\" (UniqueName: \"kubernetes.io/projected/170b9fcc-77b0-41b5-8e99-cd95411287e9-kube-api-access-f297l\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.908491 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-horizon-secret-key\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.908534 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/170b9fcc-77b0-41b5-8e99-cd95411287e9-config-data\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.956913 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66d59c77bf-fzn52"] Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.995312 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-96f64bfb8-g7cfv"] Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.997201 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.010877 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f297l\" (UniqueName: \"kubernetes.io/projected/170b9fcc-77b0-41b5-8e99-cd95411287e9-kube-api-access-f297l\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.010914 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-horizon-secret-key\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.010970 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/170b9fcc-77b0-41b5-8e99-cd95411287e9-config-data\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.011016 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-combined-ca-bundle\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.011060 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/170b9fcc-77b0-41b5-8e99-cd95411287e9-logs\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: 
I0320 08:54:51.011100 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-horizon-tls-certs\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.011124 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/170b9fcc-77b0-41b5-8e99-cd95411287e9-scripts\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.011894 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/170b9fcc-77b0-41b5-8e99-cd95411287e9-scripts\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.014918 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/170b9fcc-77b0-41b5-8e99-cd95411287e9-logs\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.017781 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-horizon-tls-certs\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.018242 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-combined-ca-bundle\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.020582 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-horizon-secret-key\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.033422 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/170b9fcc-77b0-41b5-8e99-cd95411287e9-config-data\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.034120 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f297l\" (UniqueName: \"kubernetes.io/projected/170b9fcc-77b0-41b5-8e99-cd95411287e9-kube-api-access-f297l\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.038226 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-96f64bfb8-g7cfv"] Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.112468 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-combined-ca-bundle\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.112532 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07e0c938-d0f6-43dc-8864-68149aedc96c-scripts\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.112649 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07e0c938-d0f6-43dc-8864-68149aedc96c-config-data\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.112680 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj6q7\" (UniqueName: \"kubernetes.io/projected/07e0c938-d0f6-43dc-8864-68149aedc96c-kube-api-access-pj6q7\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.112720 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-horizon-tls-certs\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.113082 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-horizon-secret-key\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.113229 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07e0c938-d0f6-43dc-8864-68149aedc96c-logs\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.183103 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.215118 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-horizon-secret-key\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.215206 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07e0c938-d0f6-43dc-8864-68149aedc96c-logs\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.215263 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-combined-ca-bundle\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.215288 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07e0c938-d0f6-43dc-8864-68149aedc96c-scripts\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: 
I0320 08:54:51.215364 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07e0c938-d0f6-43dc-8864-68149aedc96c-config-data\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.215386 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj6q7\" (UniqueName: \"kubernetes.io/projected/07e0c938-d0f6-43dc-8864-68149aedc96c-kube-api-access-pj6q7\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.215444 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-horizon-tls-certs\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.215663 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07e0c938-d0f6-43dc-8864-68149aedc96c-logs\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.216160 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07e0c938-d0f6-43dc-8864-68149aedc96c-scripts\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.217379 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/07e0c938-d0f6-43dc-8864-68149aedc96c-config-data\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.219731 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-horizon-tls-certs\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.220320 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-horizon-secret-key\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.220785 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-combined-ca-bundle\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.238642 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj6q7\" (UniqueName: \"kubernetes.io/projected/07e0c938-d0f6-43dc-8864-68149aedc96c-kube-api-access-pj6q7\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.415397 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.685680 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cc6c6d576-wrwl5"] Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.901688 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-96f64bfb8-g7cfv"] Mar 20 08:54:51 crc kubenswrapper[5136]: W0320 08:54:51.912454 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07e0c938_d0f6_43dc_8864_68149aedc96c.slice/crio-b1ee61ebca0b68cd44117cc6cff786c05a69987a3f3427a20fa4db8597ed371f WatchSource:0}: Error finding container b1ee61ebca0b68cd44117cc6cff786c05a69987a3f3427a20fa4db8597ed371f: Status 404 returned error can't find the container with id b1ee61ebca0b68cd44117cc6cff786c05a69987a3f3427a20fa4db8597ed371f Mar 20 08:54:52 crc kubenswrapper[5136]: I0320 08:54:52.365664 5136 generic.go:334] "Generic (PLEG): container finished" podID="416c7b2f-db10-4191-821f-19c79bf4a3b6" containerID="10a9b30b7e2523cc2c1ff704e0f7c5ae56f024de6f82bf322b7b1d7e0001c420" exitCode=0 Mar 20 08:54:52 crc kubenswrapper[5136]: I0320 08:54:52.365707 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"416c7b2f-db10-4191-821f-19c79bf4a3b6","Type":"ContainerDied","Data":"10a9b30b7e2523cc2c1ff704e0f7c5ae56f024de6f82bf322b7b1d7e0001c420"} Mar 20 08:54:52 crc kubenswrapper[5136]: I0320 08:54:52.368154 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-96f64bfb8-g7cfv" event={"ID":"07e0c938-d0f6-43dc-8864-68149aedc96c","Type":"ContainerStarted","Data":"b1ee61ebca0b68cd44117cc6cff786c05a69987a3f3427a20fa4db8597ed371f"} Mar 20 08:54:52 crc kubenswrapper[5136]: I0320 08:54:52.369867 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cc6c6d576-wrwl5" 
event={"ID":"170b9fcc-77b0-41b5-8e99-cd95411287e9","Type":"ContainerStarted","Data":"21c759da75cd0657471e5a123c71e4531a1ea5ab95f97bf42da6363bf11ca95c"} Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.041827 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.054878 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175033 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-scripts\") pod \"416c7b2f-db10-4191-821f-19c79bf4a3b6\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175386 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-config-data\") pod \"c50cd831-27ab-475b-a608-0558c610394d\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175431 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b6rl\" (UniqueName: \"kubernetes.io/projected/416c7b2f-db10-4191-821f-19c79bf4a3b6-kube-api-access-7b6rl\") pod \"416c7b2f-db10-4191-821f-19c79bf4a3b6\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175495 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c50cd831-27ab-475b-a608-0558c610394d-httpd-run\") pod \"c50cd831-27ab-475b-a608-0558c610394d\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175516 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-config-data\") pod \"416c7b2f-db10-4191-821f-19c79bf4a3b6\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175552 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-combined-ca-bundle\") pod \"c50cd831-27ab-475b-a608-0558c610394d\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175613 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c50cd831-27ab-475b-a608-0558c610394d-logs\") pod \"c50cd831-27ab-475b-a608-0558c610394d\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175694 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-combined-ca-bundle\") pod \"416c7b2f-db10-4191-821f-19c79bf4a3b6\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175760 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/416c7b2f-db10-4191-821f-19c79bf4a3b6-httpd-run\") pod \"416c7b2f-db10-4191-821f-19c79bf4a3b6\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175794 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/416c7b2f-db10-4191-821f-19c79bf4a3b6-logs\") pod \"416c7b2f-db10-4191-821f-19c79bf4a3b6\" (UID: 
\"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175844 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-internal-tls-certs\") pod \"c50cd831-27ab-475b-a608-0558c610394d\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175882 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-public-tls-certs\") pod \"416c7b2f-db10-4191-821f-19c79bf4a3b6\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175922 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l26sf\" (UniqueName: \"kubernetes.io/projected/c50cd831-27ab-475b-a608-0558c610394d-kube-api-access-l26sf\") pod \"c50cd831-27ab-475b-a608-0558c610394d\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175947 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-scripts\") pod \"c50cd831-27ab-475b-a608-0558c610394d\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.177348 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/416c7b2f-db10-4191-821f-19c79bf4a3b6-logs" (OuterVolumeSpecName: "logs") pod "416c7b2f-db10-4191-821f-19c79bf4a3b6" (UID: "416c7b2f-db10-4191-821f-19c79bf4a3b6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.177324 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c50cd831-27ab-475b-a608-0558c610394d-logs" (OuterVolumeSpecName: "logs") pod "c50cd831-27ab-475b-a608-0558c610394d" (UID: "c50cd831-27ab-475b-a608-0558c610394d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.177631 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/416c7b2f-db10-4191-821f-19c79bf4a3b6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "416c7b2f-db10-4191-821f-19c79bf4a3b6" (UID: "416c7b2f-db10-4191-821f-19c79bf4a3b6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.178318 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c50cd831-27ab-475b-a608-0558c610394d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c50cd831-27ab-475b-a608-0558c610394d" (UID: "c50cd831-27ab-475b-a608-0558c610394d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.190042 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-scripts" (OuterVolumeSpecName: "scripts") pod "416c7b2f-db10-4191-821f-19c79bf4a3b6" (UID: "416c7b2f-db10-4191-821f-19c79bf4a3b6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.190187 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/416c7b2f-db10-4191-821f-19c79bf4a3b6-kube-api-access-7b6rl" (OuterVolumeSpecName: "kube-api-access-7b6rl") pod "416c7b2f-db10-4191-821f-19c79bf4a3b6" (UID: "416c7b2f-db10-4191-821f-19c79bf4a3b6"). InnerVolumeSpecName "kube-api-access-7b6rl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.190433 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c50cd831-27ab-475b-a608-0558c610394d-kube-api-access-l26sf" (OuterVolumeSpecName: "kube-api-access-l26sf") pod "c50cd831-27ab-475b-a608-0558c610394d" (UID: "c50cd831-27ab-475b-a608-0558c610394d"). InnerVolumeSpecName "kube-api-access-l26sf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.195892 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-scripts" (OuterVolumeSpecName: "scripts") pod "c50cd831-27ab-475b-a608-0558c610394d" (UID: "c50cd831-27ab-475b-a608-0558c610394d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.265076 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-config-data" (OuterVolumeSpecName: "config-data") pod "416c7b2f-db10-4191-821f-19c79bf4a3b6" (UID: "416c7b2f-db10-4191-821f-19c79bf4a3b6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.265159 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "416c7b2f-db10-4191-821f-19c79bf4a3b6" (UID: "416c7b2f-db10-4191-821f-19c79bf4a3b6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.268317 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-config-data" (OuterVolumeSpecName: "config-data") pod "c50cd831-27ab-475b-a608-0558c610394d" (UID: "c50cd831-27ab-475b-a608-0558c610394d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.274024 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c50cd831-27ab-475b-a608-0558c610394d" (UID: "c50cd831-27ab-475b-a608-0558c610394d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.275617 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "416c7b2f-db10-4191-821f-19c79bf4a3b6" (UID: "416c7b2f-db10-4191-821f-19c79bf4a3b6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.276249 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c50cd831-27ab-475b-a608-0558c610394d" (UID: "c50cd831-27ab-475b-a608-0558c610394d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.279021 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b6rl\" (UniqueName: \"kubernetes.io/projected/416c7b2f-db10-4191-821f-19c79bf4a3b6-kube-api-access-7b6rl\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.279058 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c50cd831-27ab-475b-a608-0558c610394d-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.279069 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.279137 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.279157 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c50cd831-27ab-475b-a608-0558c610394d-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.279169 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.279427 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/416c7b2f-db10-4191-821f-19c79bf4a3b6-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.279441 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/416c7b2f-db10-4191-821f-19c79bf4a3b6-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.279449 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.279461 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.279491 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l26sf\" (UniqueName: \"kubernetes.io/projected/c50cd831-27ab-475b-a608-0558c610394d-kube-api-access-l26sf\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.279499 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.279507 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 
08:54:53.279517 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.387052 5136 generic.go:334] "Generic (PLEG): container finished" podID="c50cd831-27ab-475b-a608-0558c610394d" containerID="50c452b18075758f0cf7f71c5dd923d9961f9b1f4ea4ea5b12f828970fcbc61c" exitCode=0 Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.387129 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c50cd831-27ab-475b-a608-0558c610394d","Type":"ContainerDied","Data":"50c452b18075758f0cf7f71c5dd923d9961f9b1f4ea4ea5b12f828970fcbc61c"} Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.387156 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c50cd831-27ab-475b-a608-0558c610394d","Type":"ContainerDied","Data":"0a442b375725a08359ac9c238f48642a4c758f6fef43750c9ef6734e62c274b1"} Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.387175 5136 scope.go:117] "RemoveContainer" containerID="50c452b18075758f0cf7f71c5dd923d9961f9b1f4ea4ea5b12f828970fcbc61c" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.387307 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.391710 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"416c7b2f-db10-4191-821f-19c79bf4a3b6","Type":"ContainerDied","Data":"18bbdbf6b4b7085096b8a4c5650b4a999121b8fffe8ad31c3a29f6c89c1e9ff8"} Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.391763 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.438713 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.451895 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.460566 5136 scope.go:117] "RemoveContainer" containerID="ab66d88d3293a40338b7fddfe88ed7191c405a72189f7c0ee894a68ec1597fc3" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.467880 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 08:54:53 crc kubenswrapper[5136]: E0320 08:54:53.468318 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c50cd831-27ab-475b-a608-0558c610394d" containerName="glance-log" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.468332 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c50cd831-27ab-475b-a608-0558c610394d" containerName="glance-log" Mar 20 08:54:53 crc kubenswrapper[5136]: E0320 08:54:53.468356 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416c7b2f-db10-4191-821f-19c79bf4a3b6" containerName="glance-log" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.468362 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="416c7b2f-db10-4191-821f-19c79bf4a3b6" containerName="glance-log" Mar 20 08:54:53 crc kubenswrapper[5136]: E0320 08:54:53.468367 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416c7b2f-db10-4191-821f-19c79bf4a3b6" containerName="glance-httpd" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.468374 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="416c7b2f-db10-4191-821f-19c79bf4a3b6" containerName="glance-httpd" Mar 20 08:54:53 crc kubenswrapper[5136]: E0320 08:54:53.468390 5136 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c50cd831-27ab-475b-a608-0558c610394d" containerName="glance-httpd" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.468407 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c50cd831-27ab-475b-a608-0558c610394d" containerName="glance-httpd" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.468604 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="416c7b2f-db10-4191-821f-19c79bf4a3b6" containerName="glance-log" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.468619 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c50cd831-27ab-475b-a608-0558c610394d" containerName="glance-httpd" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.468630 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="416c7b2f-db10-4191-821f-19c79bf4a3b6" containerName="glance-httpd" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.468638 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c50cd831-27ab-475b-a608-0558c610394d" containerName="glance-log" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.469603 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.471834 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.472462 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zsfgx" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.472680 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.472806 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.477408 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.488584 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.510539 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.519891 5136 scope.go:117] "RemoveContainer" containerID="50c452b18075758f0cf7f71c5dd923d9961f9b1f4ea4ea5b12f828970fcbc61c" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.520216 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 08:54:53 crc kubenswrapper[5136]: E0320 08:54:53.521327 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50c452b18075758f0cf7f71c5dd923d9961f9b1f4ea4ea5b12f828970fcbc61c\": container with ID starting with 50c452b18075758f0cf7f71c5dd923d9961f9b1f4ea4ea5b12f828970fcbc61c not found: ID does not exist" 
containerID="50c452b18075758f0cf7f71c5dd923d9961f9b1f4ea4ea5b12f828970fcbc61c" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.521393 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50c452b18075758f0cf7f71c5dd923d9961f9b1f4ea4ea5b12f828970fcbc61c"} err="failed to get container status \"50c452b18075758f0cf7f71c5dd923d9961f9b1f4ea4ea5b12f828970fcbc61c\": rpc error: code = NotFound desc = could not find container \"50c452b18075758f0cf7f71c5dd923d9961f9b1f4ea4ea5b12f828970fcbc61c\": container with ID starting with 50c452b18075758f0cf7f71c5dd923d9961f9b1f4ea4ea5b12f828970fcbc61c not found: ID does not exist" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.521421 5136 scope.go:117] "RemoveContainer" containerID="ab66d88d3293a40338b7fddfe88ed7191c405a72189f7c0ee894a68ec1597fc3" Mar 20 08:54:53 crc kubenswrapper[5136]: E0320 08:54:53.521849 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab66d88d3293a40338b7fddfe88ed7191c405a72189f7c0ee894a68ec1597fc3\": container with ID starting with ab66d88d3293a40338b7fddfe88ed7191c405a72189f7c0ee894a68ec1597fc3 not found: ID does not exist" containerID="ab66d88d3293a40338b7fddfe88ed7191c405a72189f7c0ee894a68ec1597fc3" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.521876 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.521874 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab66d88d3293a40338b7fddfe88ed7191c405a72189f7c0ee894a68ec1597fc3"} err="failed to get container status \"ab66d88d3293a40338b7fddfe88ed7191c405a72189f7c0ee894a68ec1597fc3\": rpc error: code = NotFound desc = could not find container \"ab66d88d3293a40338b7fddfe88ed7191c405a72189f7c0ee894a68ec1597fc3\": container with ID starting with ab66d88d3293a40338b7fddfe88ed7191c405a72189f7c0ee894a68ec1597fc3 not found: ID does not exist" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.521939 5136 scope.go:117] "RemoveContainer" containerID="10a9b30b7e2523cc2c1ff704e0f7c5ae56f024de6f82bf322b7b1d7e0001c420" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.525939 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.525994 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.551460 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.571368 5136 scope.go:117] "RemoveContainer" containerID="de0681385b751e6fbc3db0d7c1a647f36f508772ed5c4a96334e28fe70febb26" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.586458 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 
08:54:53.586640 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6345b1ce-d7d2-420d-8631-e42fd662d790-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.586683 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.586731 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.586945 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-scripts\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.587022 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25254bce-daf4-4521-ae48-e6c53e458cb4-logs\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 
08:54:53.587067 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt97q\" (UniqueName: \"kubernetes.io/projected/25254bce-daf4-4521-ae48-e6c53e458cb4-kube-api-access-pt97q\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.587085 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6345b1ce-d7d2-420d-8631-e42fd662d790-logs\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.587134 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxpnx\" (UniqueName: \"kubernetes.io/projected/6345b1ce-d7d2-420d-8631-e42fd662d790-kube-api-access-fxpnx\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.587183 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.587264 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0" 
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.587307 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.587427 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-config-data\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.587470 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25254bce-daf4-4521-ae48-e6c53e458cb4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689391 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689452 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6345b1ce-d7d2-420d-8631-e42fd662d790-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0" Mar 20 
08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689476 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689491 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689541 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-scripts\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689565 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25254bce-daf4-4521-ae48-e6c53e458cb4-logs\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689581 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt97q\" (UniqueName: \"kubernetes.io/projected/25254bce-daf4-4521-ae48-e6c53e458cb4-kube-api-access-pt97q\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689599 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6345b1ce-d7d2-420d-8631-e42fd662d790-logs\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689615 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxpnx\" (UniqueName: \"kubernetes.io/projected/6345b1ce-d7d2-420d-8631-e42fd662d790-kube-api-access-fxpnx\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689639 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689665 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689686 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689729 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-config-data\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689770 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25254bce-daf4-4521-ae48-e6c53e458cb4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.690213 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25254bce-daf4-4521-ae48-e6c53e458cb4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.690773 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25254bce-daf4-4521-ae48-e6c53e458cb4-logs\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.691075 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6345b1ce-d7d2-420d-8631-e42fd662d790-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.691268 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6345b1ce-d7d2-420d-8631-e42fd662d790-logs\") pod 
\"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.697313 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.697344 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.697902 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.702601 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-scripts\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.702876 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-config-data\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0" 
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.706291 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.708369 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.708581 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxpnx\" (UniqueName: \"kubernetes.io/projected/6345b1ce-d7d2-420d-8631-e42fd662d790-kube-api-access-fxpnx\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.710148 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt97q\" (UniqueName: \"kubernetes.io/projected/25254bce-daf4-4521-ae48-e6c53e458cb4-kube-api-access-pt97q\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.710582 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: 
I0320 08:54:53.804391 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.846243 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 08:54:54 crc kubenswrapper[5136]: I0320 08:54:54.406273 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="416c7b2f-db10-4191-821f-19c79bf4a3b6" path="/var/lib/kubelet/pods/416c7b2f-db10-4191-821f-19c79bf4a3b6/volumes" Mar 20 08:54:54 crc kubenswrapper[5136]: I0320 08:54:54.407318 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c50cd831-27ab-475b-a608-0558c610394d" path="/var/lib/kubelet/pods/c50cd831-27ab-475b-a608-0558c610394d/volumes" Mar 20 08:54:58 crc kubenswrapper[5136]: I0320 08:54:58.950516 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 08:54:58 crc kubenswrapper[5136]: W0320 08:54:58.954029 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6345b1ce_d7d2_420d_8631_e42fd662d790.slice/crio-118347a18261bbd9231e7dfffb48c3b8dac9d276aacba0e195348777f97cd490 WatchSource:0}: Error finding container 118347a18261bbd9231e7dfffb48c3b8dac9d276aacba0e195348777f97cd490: Status 404 returned error can't find the container with id 118347a18261bbd9231e7dfffb48c3b8dac9d276aacba0e195348777f97cd490 Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.067449 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 08:54:59 crc kubenswrapper[5136]: W0320 08:54:59.076025 5136 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25254bce_daf4_4521_ae48_e6c53e458cb4.slice/crio-b4041a772e07ae38dd21f6daf35e3b02ea073600ec0a68c9ba11fe62a374af18 WatchSource:0}: Error finding container b4041a772e07ae38dd21f6daf35e3b02ea073600ec0a68c9ba11fe62a374af18: Status 404 returned error can't find the container with id b4041a772e07ae38dd21f6daf35e3b02ea073600ec0a68c9ba11fe62a374af18 Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.449078 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cc6c6d576-wrwl5" event={"ID":"170b9fcc-77b0-41b5-8e99-cd95411287e9","Type":"ContainerStarted","Data":"87994f2c1ec896b001d8142fc7518d6c3a1657b0c71eeaf3fd0c68ba00f35041"} Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.449117 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cc6c6d576-wrwl5" event={"ID":"170b9fcc-77b0-41b5-8e99-cd95411287e9","Type":"ContainerStarted","Data":"83314d37ae1ddbd13462dd59b1b33c0dbc22e1e06ec277b452f3f36173acb14c"} Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.450789 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66d59c77bf-fzn52" event={"ID":"9e26843c-d392-464f-9f00-df9da3231a43","Type":"ContainerStarted","Data":"47d50bc4004ffaf61df2e3967c92d24ab0f3d7825ac3b5d458bf8fcc8f180000"} Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.450858 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66d59c77bf-fzn52" event={"ID":"9e26843c-d392-464f-9f00-df9da3231a43","Type":"ContainerStarted","Data":"3c83219cfa1ecf8b240d76c9e51d2ba3d7c3e62fbdf8df4070902cafed1bfe2b"} Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.450966 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66d59c77bf-fzn52" podUID="9e26843c-d392-464f-9f00-df9da3231a43" containerName="horizon-log" containerID="cri-o://3c83219cfa1ecf8b240d76c9e51d2ba3d7c3e62fbdf8df4070902cafed1bfe2b" 
gracePeriod=30 Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.451196 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66d59c77bf-fzn52" podUID="9e26843c-d392-464f-9f00-df9da3231a43" containerName="horizon" containerID="cri-o://47d50bc4004ffaf61df2e3967c92d24ab0f3d7825ac3b5d458bf8fcc8f180000" gracePeriod=30 Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.452276 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6345b1ce-d7d2-420d-8631-e42fd662d790","Type":"ContainerStarted","Data":"118347a18261bbd9231e7dfffb48c3b8dac9d276aacba0e195348777f97cd490"} Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.459276 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-96f64bfb8-g7cfv" event={"ID":"07e0c938-d0f6-43dc-8864-68149aedc96c","Type":"ContainerStarted","Data":"60f749df76d18a10141d2062a40ea962edc0b52ceb6a36857635111c1d44eaf0"} Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.459325 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-96f64bfb8-g7cfv" event={"ID":"07e0c938-d0f6-43dc-8864-68149aedc96c","Type":"ContainerStarted","Data":"2ea3b6de68a43a824c78c0e4f561d092ba769891c4d2b0ed7160300307750733"} Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.462187 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"25254bce-daf4-4521-ae48-e6c53e458cb4","Type":"ContainerStarted","Data":"b4041a772e07ae38dd21f6daf35e3b02ea073600ec0a68c9ba11fe62a374af18"} Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.467757 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-645f4b9fd9-z58jz" event={"ID":"d6a4482e-e21a-4e56-af69-e824ef4708da","Type":"ContainerStarted","Data":"eca3fb3cc4f1e7c4ed22ce895d18b9a727e745be0196ad15ebd13d381ede98bd"} Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.467807 5136 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-645f4b9fd9-z58jz" event={"ID":"d6a4482e-e21a-4e56-af69-e824ef4708da","Type":"ContainerStarted","Data":"cb14bd3149719e9313b803ec07b4e48ef12a8424fc422a0ea3648d52b2537bf1"} Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.467922 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-645f4b9fd9-z58jz" podUID="d6a4482e-e21a-4e56-af69-e824ef4708da" containerName="horizon-log" containerID="cri-o://cb14bd3149719e9313b803ec07b4e48ef12a8424fc422a0ea3648d52b2537bf1" gracePeriod=30 Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.467944 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-645f4b9fd9-z58jz" podUID="d6a4482e-e21a-4e56-af69-e824ef4708da" containerName="horizon" containerID="cri-o://eca3fb3cc4f1e7c4ed22ce895d18b9a727e745be0196ad15ebd13d381ede98bd" gracePeriod=30 Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.475531 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-cc6c6d576-wrwl5" podStartSLOduration=2.688245107 podStartE2EDuration="9.475512489s" podCreationTimestamp="2026-03-20 08:54:50 +0000 UTC" firstStartedPulling="2026-03-20 08:54:51.702154228 +0000 UTC m=+7523.961465379" lastFinishedPulling="2026-03-20 08:54:58.48942161 +0000 UTC m=+7530.748732761" observedRunningTime="2026-03-20 08:54:59.471592669 +0000 UTC m=+7531.730903820" watchObservedRunningTime="2026-03-20 08:54:59.475512489 +0000 UTC m=+7531.734823640" Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.497546 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-645f4b9fd9-z58jz" podStartSLOduration=2.765432888 podStartE2EDuration="11.497528112s" podCreationTimestamp="2026-03-20 08:54:48 +0000 UTC" firstStartedPulling="2026-03-20 08:54:49.824373002 +0000 UTC m=+7522.083684173" lastFinishedPulling="2026-03-20 08:54:58.556468246 
+0000 UTC m=+7530.815779397" observedRunningTime="2026-03-20 08:54:59.491512795 +0000 UTC m=+7531.750823946" watchObservedRunningTime="2026-03-20 08:54:59.497528112 +0000 UTC m=+7531.756839263" Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.514775 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-66d59c77bf-fzn52" podStartSLOduration=2.044896158 podStartE2EDuration="10.514759434s" podCreationTimestamp="2026-03-20 08:54:49 +0000 UTC" firstStartedPulling="2026-03-20 08:54:50.019671458 +0000 UTC m=+7522.278982609" lastFinishedPulling="2026-03-20 08:54:58.489534734 +0000 UTC m=+7530.748845885" observedRunningTime="2026-03-20 08:54:59.513991501 +0000 UTC m=+7531.773302652" watchObservedRunningTime="2026-03-20 08:54:59.514759434 +0000 UTC m=+7531.774070585" Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.543233 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-96f64bfb8-g7cfv" podStartSLOduration=2.901387295 podStartE2EDuration="9.543211855s" podCreationTimestamp="2026-03-20 08:54:50 +0000 UTC" firstStartedPulling="2026-03-20 08:54:51.914147861 +0000 UTC m=+7524.173459002" lastFinishedPulling="2026-03-20 08:54:58.555972411 +0000 UTC m=+7530.815283562" observedRunningTime="2026-03-20 08:54:59.531631777 +0000 UTC m=+7531.790942938" watchObservedRunningTime="2026-03-20 08:54:59.543211855 +0000 UTC m=+7531.802523006" Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.594008 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:55:00 crc kubenswrapper[5136]: I0320 08:55:00.479357 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6345b1ce-d7d2-420d-8631-e42fd662d790","Type":"ContainerStarted","Data":"8e6a89b054dab23d4263c5fb97b6aba8bc51276e7bd2c8d9be34c61f68879a63"} Mar 20 08:55:00 crc kubenswrapper[5136]: I0320 08:55:00.480007 5136 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6345b1ce-d7d2-420d-8631-e42fd662d790","Type":"ContainerStarted","Data":"62b81b8fc1d95273635ec6d0f69c524950ac024e8fd9b6ac1d7381fe6f428b6f"} Mar 20 08:55:00 crc kubenswrapper[5136]: I0320 08:55:00.483850 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"25254bce-daf4-4521-ae48-e6c53e458cb4","Type":"ContainerStarted","Data":"7ecfa88277a19c2fc4a9782c7beb9c21c6c1a5a38d56723b3e67fc0044f8bbb4"} Mar 20 08:55:00 crc kubenswrapper[5136]: I0320 08:55:00.483891 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"25254bce-daf4-4521-ae48-e6c53e458cb4","Type":"ContainerStarted","Data":"9e5b58fa90ab6a9a965276a68d4ee135aa252e61fbc159c5d0aa6f6134637333"} Mar 20 08:55:00 crc kubenswrapper[5136]: I0320 08:55:00.508567 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.5085468330000005 podStartE2EDuration="7.508546833s" podCreationTimestamp="2026-03-20 08:54:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:55:00.505461698 +0000 UTC m=+7532.764772849" watchObservedRunningTime="2026-03-20 08:55:00.508546833 +0000 UTC m=+7532.767857984" Mar 20 08:55:00 crc kubenswrapper[5136]: I0320 08:55:00.547983 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.547957383 podStartE2EDuration="7.547957383s" podCreationTimestamp="2026-03-20 08:54:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:55:00.53234352 +0000 UTC m=+7532.791654671" watchObservedRunningTime="2026-03-20 08:55:00.547957383 
+0000 UTC m=+7532.807268534" Mar 20 08:55:01 crc kubenswrapper[5136]: I0320 08:55:01.183691 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:55:01 crc kubenswrapper[5136]: I0320 08:55:01.184109 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:55:01 crc kubenswrapper[5136]: I0320 08:55:01.416086 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:55:01 crc kubenswrapper[5136]: I0320 08:55:01.416160 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:55:03 crc kubenswrapper[5136]: I0320 08:55:03.805007 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 20 08:55:03 crc kubenswrapper[5136]: I0320 08:55:03.805070 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 20 08:55:03 crc kubenswrapper[5136]: I0320 08:55:03.838582 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 20 08:55:03 crc kubenswrapper[5136]: I0320 08:55:03.847235 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 20 08:55:03 crc kubenswrapper[5136]: I0320 08:55:03.847277 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 20 08:55:03 crc kubenswrapper[5136]: I0320 08:55:03.849382 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 20 08:55:03 crc kubenswrapper[5136]: I0320 08:55:03.881529 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-external-api-0" Mar 20 08:55:03 crc kubenswrapper[5136]: I0320 08:55:03.897073 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 20 08:55:04 crc kubenswrapper[5136]: I0320 08:55:04.531389 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 20 08:55:04 crc kubenswrapper[5136]: I0320 08:55:04.531513 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 20 08:55:04 crc kubenswrapper[5136]: I0320 08:55:04.531533 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 20 08:55:04 crc kubenswrapper[5136]: I0320 08:55:04.531548 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 20 08:55:06 crc kubenswrapper[5136]: I0320 08:55:06.741359 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 20 08:55:06 crc kubenswrapper[5136]: I0320 08:55:06.744141 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 20 08:55:07 crc kubenswrapper[5136]: I0320 08:55:07.294371 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 20 08:55:07 crc kubenswrapper[5136]: I0320 08:55:07.516196 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 20 08:55:09 crc kubenswrapper[5136]: I0320 08:55:09.340311 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:55:11 crc kubenswrapper[5136]: I0320 08:55:11.185200 5136 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/horizon-cc6c6d576-wrwl5" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.151:8443: connect: connection refused" Mar 20 08:55:11 crc kubenswrapper[5136]: I0320 08:55:11.418506 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-96f64bfb8-g7cfv" podUID="07e0c938-d0f6-43dc-8864-68149aedc96c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.152:8443: connect: connection refused" Mar 20 08:55:15 crc kubenswrapper[5136]: I0320 08:55:15.821733 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:55:15 crc kubenswrapper[5136]: I0320 08:55:15.822079 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:55:15 crc kubenswrapper[5136]: I0320 08:55:15.822129 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 08:55:15 crc kubenswrapper[5136]: I0320 08:55:15.822871 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Mar 20 08:55:15 crc kubenswrapper[5136]: I0320 08:55:15.822920 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" gracePeriod=600 Mar 20 08:55:15 crc kubenswrapper[5136]: E0320 08:55:15.954034 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:55:16 crc kubenswrapper[5136]: I0320 08:55:16.680193 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" exitCode=0 Mar 20 08:55:16 crc kubenswrapper[5136]: I0320 08:55:16.680457 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2"} Mar 20 08:55:16 crc kubenswrapper[5136]: I0320 08:55:16.680577 5136 scope.go:117] "RemoveContainer" containerID="a43b1feb308763542c53114c5f178c20bc1d59b30b0c579b39a73e99b6e66c62" Mar 20 08:55:16 crc kubenswrapper[5136]: I0320 08:55:16.681612 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:55:16 crc kubenswrapper[5136]: E0320 08:55:16.681920 5136 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:55:23 crc kubenswrapper[5136]: I0320 08:55:23.278966 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:55:23 crc kubenswrapper[5136]: I0320 08:55:23.332711 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:55:24 crc kubenswrapper[5136]: I0320 08:55:24.963098 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:55:25 crc kubenswrapper[5136]: I0320 08:55:25.030355 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cc6c6d576-wrwl5"] Mar 20 08:55:25 crc kubenswrapper[5136]: I0320 08:55:25.031346 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-cc6c6d576-wrwl5" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerName="horizon-log" containerID="cri-o://83314d37ae1ddbd13462dd59b1b33c0dbc22e1e06ec277b452f3f36173acb14c" gracePeriod=30 Mar 20 08:55:25 crc kubenswrapper[5136]: I0320 08:55:25.031550 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-cc6c6d576-wrwl5" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerName="horizon" containerID="cri-o://87994f2c1ec896b001d8142fc7518d6c3a1657b0c71eeaf3fd0c68ba00f35041" gracePeriod=30 Mar 20 08:55:25 crc kubenswrapper[5136]: I0320 08:55:25.042678 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cc6c6d576-wrwl5" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" 
containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.151:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Mar 20 08:55:28 crc kubenswrapper[5136]: I0320 08:55:28.170076 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cc6c6d576-wrwl5" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.151:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:47534->10.217.1.151:8443: read: connection reset by peer" Mar 20 08:55:28 crc kubenswrapper[5136]: I0320 08:55:28.795121 5136 generic.go:334] "Generic (PLEG): container finished" podID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerID="87994f2c1ec896b001d8142fc7518d6c3a1657b0c71eeaf3fd0c68ba00f35041" exitCode=0 Mar 20 08:55:28 crc kubenswrapper[5136]: I0320 08:55:28.795177 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cc6c6d576-wrwl5" event={"ID":"170b9fcc-77b0-41b5-8e99-cd95411287e9","Type":"ContainerDied","Data":"87994f2c1ec896b001d8142fc7518d6c3a1657b0c71eeaf3fd0c68ba00f35041"} Mar 20 08:55:29 crc kubenswrapper[5136]: I0320 08:55:29.808576 5136 generic.go:334] "Generic (PLEG): container finished" podID="d6a4482e-e21a-4e56-af69-e824ef4708da" containerID="eca3fb3cc4f1e7c4ed22ce895d18b9a727e745be0196ad15ebd13d381ede98bd" exitCode=137 Mar 20 08:55:29 crc kubenswrapper[5136]: I0320 08:55:29.808950 5136 generic.go:334] "Generic (PLEG): container finished" podID="d6a4482e-e21a-4e56-af69-e824ef4708da" containerID="cb14bd3149719e9313b803ec07b4e48ef12a8424fc422a0ea3648d52b2537bf1" exitCode=137 Mar 20 08:55:29 crc kubenswrapper[5136]: I0320 08:55:29.808656 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-645f4b9fd9-z58jz" event={"ID":"d6a4482e-e21a-4e56-af69-e824ef4708da","Type":"ContainerDied","Data":"eca3fb3cc4f1e7c4ed22ce895d18b9a727e745be0196ad15ebd13d381ede98bd"} Mar 20 08:55:29 crc kubenswrapper[5136]: I0320 08:55:29.809033 5136 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-645f4b9fd9-z58jz" event={"ID":"d6a4482e-e21a-4e56-af69-e824ef4708da","Type":"ContainerDied","Data":"cb14bd3149719e9313b803ec07b4e48ef12a8424fc422a0ea3648d52b2537bf1"} Mar 20 08:55:29 crc kubenswrapper[5136]: I0320 08:55:29.811023 5136 generic.go:334] "Generic (PLEG): container finished" podID="9e26843c-d392-464f-9f00-df9da3231a43" containerID="47d50bc4004ffaf61df2e3967c92d24ab0f3d7825ac3b5d458bf8fcc8f180000" exitCode=137 Mar 20 08:55:29 crc kubenswrapper[5136]: I0320 08:55:29.811042 5136 generic.go:334] "Generic (PLEG): container finished" podID="9e26843c-d392-464f-9f00-df9da3231a43" containerID="3c83219cfa1ecf8b240d76c9e51d2ba3d7c3e62fbdf8df4070902cafed1bfe2b" exitCode=137 Mar 20 08:55:29 crc kubenswrapper[5136]: I0320 08:55:29.811059 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66d59c77bf-fzn52" event={"ID":"9e26843c-d392-464f-9f00-df9da3231a43","Type":"ContainerDied","Data":"47d50bc4004ffaf61df2e3967c92d24ab0f3d7825ac3b5d458bf8fcc8f180000"} Mar 20 08:55:29 crc kubenswrapper[5136]: I0320 08:55:29.811081 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66d59c77bf-fzn52" event={"ID":"9e26843c-d392-464f-9f00-df9da3231a43","Type":"ContainerDied","Data":"3c83219cfa1ecf8b240d76c9e51d2ba3d7c3e62fbdf8df4070902cafed1bfe2b"} Mar 20 08:55:29 crc kubenswrapper[5136]: I0320 08:55:29.940297 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:55:29 crc kubenswrapper[5136]: I0320 08:55:29.946294 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.089972 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6a4482e-e21a-4e56-af69-e824ef4708da-scripts\") pod \"d6a4482e-e21a-4e56-af69-e824ef4708da\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.090018 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9e26843c-d392-464f-9f00-df9da3231a43-horizon-secret-key\") pod \"9e26843c-d392-464f-9f00-df9da3231a43\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.090089 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e26843c-d392-464f-9f00-df9da3231a43-logs\") pod \"9e26843c-d392-464f-9f00-df9da3231a43\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.090116 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vqxv\" (UniqueName: \"kubernetes.io/projected/d6a4482e-e21a-4e56-af69-e824ef4708da-kube-api-access-7vqxv\") pod \"d6a4482e-e21a-4e56-af69-e824ef4708da\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.090170 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e26843c-d392-464f-9f00-df9da3231a43-scripts\") pod \"9e26843c-d392-464f-9f00-df9da3231a43\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.090190 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d6a4482e-e21a-4e56-af69-e824ef4708da-logs\") pod \"d6a4482e-e21a-4e56-af69-e824ef4708da\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.090242 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6a4482e-e21a-4e56-af69-e824ef4708da-horizon-secret-key\") pod \"d6a4482e-e21a-4e56-af69-e824ef4708da\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.090287 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6a4482e-e21a-4e56-af69-e824ef4708da-config-data\") pod \"d6a4482e-e21a-4e56-af69-e824ef4708da\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.090361 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e26843c-d392-464f-9f00-df9da3231a43-config-data\") pod \"9e26843c-d392-464f-9f00-df9da3231a43\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.090397 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jxbz\" (UniqueName: \"kubernetes.io/projected/9e26843c-d392-464f-9f00-df9da3231a43-kube-api-access-4jxbz\") pod \"9e26843c-d392-464f-9f00-df9da3231a43\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.090775 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e26843c-d392-464f-9f00-df9da3231a43-logs" (OuterVolumeSpecName: "logs") pod "9e26843c-d392-464f-9f00-df9da3231a43" (UID: "9e26843c-d392-464f-9f00-df9da3231a43"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.092991 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6a4482e-e21a-4e56-af69-e824ef4708da-logs" (OuterVolumeSpecName: "logs") pod "d6a4482e-e21a-4e56-af69-e824ef4708da" (UID: "d6a4482e-e21a-4e56-af69-e824ef4708da"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.095455 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e26843c-d392-464f-9f00-df9da3231a43-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9e26843c-d392-464f-9f00-df9da3231a43" (UID: "9e26843c-d392-464f-9f00-df9da3231a43"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.096252 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6a4482e-e21a-4e56-af69-e824ef4708da-kube-api-access-7vqxv" (OuterVolumeSpecName: "kube-api-access-7vqxv") pod "d6a4482e-e21a-4e56-af69-e824ef4708da" (UID: "d6a4482e-e21a-4e56-af69-e824ef4708da"). InnerVolumeSpecName "kube-api-access-7vqxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.097458 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6a4482e-e21a-4e56-af69-e824ef4708da-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d6a4482e-e21a-4e56-af69-e824ef4708da" (UID: "d6a4482e-e21a-4e56-af69-e824ef4708da"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.099237 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e26843c-d392-464f-9f00-df9da3231a43-kube-api-access-4jxbz" (OuterVolumeSpecName: "kube-api-access-4jxbz") pod "9e26843c-d392-464f-9f00-df9da3231a43" (UID: "9e26843c-d392-464f-9f00-df9da3231a43"). InnerVolumeSpecName "kube-api-access-4jxbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.116091 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e26843c-d392-464f-9f00-df9da3231a43-scripts" (OuterVolumeSpecName: "scripts") pod "9e26843c-d392-464f-9f00-df9da3231a43" (UID: "9e26843c-d392-464f-9f00-df9da3231a43"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.116157 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a4482e-e21a-4e56-af69-e824ef4708da-scripts" (OuterVolumeSpecName: "scripts") pod "d6a4482e-e21a-4e56-af69-e824ef4708da" (UID: "d6a4482e-e21a-4e56-af69-e824ef4708da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.119295 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a4482e-e21a-4e56-af69-e824ef4708da-config-data" (OuterVolumeSpecName: "config-data") pod "d6a4482e-e21a-4e56-af69-e824ef4708da" (UID: "d6a4482e-e21a-4e56-af69-e824ef4708da"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.123328 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e26843c-d392-464f-9f00-df9da3231a43-config-data" (OuterVolumeSpecName: "config-data") pod "9e26843c-d392-464f-9f00-df9da3231a43" (UID: "9e26843c-d392-464f-9f00-df9da3231a43"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.192520 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e26843c-d392-464f-9f00-df9da3231a43-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.192565 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jxbz\" (UniqueName: \"kubernetes.io/projected/9e26843c-d392-464f-9f00-df9da3231a43-kube-api-access-4jxbz\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.192578 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6a4482e-e21a-4e56-af69-e824ef4708da-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.192586 5136 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9e26843c-d392-464f-9f00-df9da3231a43-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.192594 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e26843c-d392-464f-9f00-df9da3231a43-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.192602 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vqxv\" (UniqueName: 
\"kubernetes.io/projected/d6a4482e-e21a-4e56-af69-e824ef4708da-kube-api-access-7vqxv\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.192610 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e26843c-d392-464f-9f00-df9da3231a43-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.192618 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6a4482e-e21a-4e56-af69-e824ef4708da-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.192626 5136 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6a4482e-e21a-4e56-af69-e824ef4708da-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.192635 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6a4482e-e21a-4e56-af69-e824ef4708da-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.397349 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:55:30 crc kubenswrapper[5136]: E0320 08:55:30.397578 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.821948 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-645f4b9fd9-z58jz" 
event={"ID":"d6a4482e-e21a-4e56-af69-e824ef4708da","Type":"ContainerDied","Data":"5c552b108729deb67449c97043a73b66ec936eb1de7cf4e53f9755a54666ffef"} Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.822301 5136 scope.go:117] "RemoveContainer" containerID="eca3fb3cc4f1e7c4ed22ce895d18b9a727e745be0196ad15ebd13d381ede98bd" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.822074 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.825214 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66d59c77bf-fzn52" event={"ID":"9e26843c-d392-464f-9f00-df9da3231a43","Type":"ContainerDied","Data":"c9ee7ff456ee288905bd3817b56fde5fab98d219327f5c2603f495ba149fd86f"} Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.825382 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.872153 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-645f4b9fd9-z58jz"] Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.885209 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-645f4b9fd9-z58jz"] Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.894872 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66d59c77bf-fzn52"] Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.909517 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-66d59c77bf-fzn52"] Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.992833 5136 scope.go:117] "RemoveContainer" containerID="cb14bd3149719e9313b803ec07b4e48ef12a8424fc422a0ea3648d52b2537bf1" Mar 20 08:55:31 crc kubenswrapper[5136]: I0320 08:55:31.019514 5136 scope.go:117] "RemoveContainer" 
containerID="47d50bc4004ffaf61df2e3967c92d24ab0f3d7825ac3b5d458bf8fcc8f180000" Mar 20 08:55:31 crc kubenswrapper[5136]: I0320 08:55:31.178275 5136 scope.go:117] "RemoveContainer" containerID="3c83219cfa1ecf8b240d76c9e51d2ba3d7c3e62fbdf8df4070902cafed1bfe2b" Mar 20 08:55:31 crc kubenswrapper[5136]: I0320 08:55:31.184247 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cc6c6d576-wrwl5" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.151:8443: connect: connection refused" Mar 20 08:55:32 crc kubenswrapper[5136]: I0320 08:55:32.406731 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e26843c-d392-464f-9f00-df9da3231a43" path="/var/lib/kubelet/pods/9e26843c-d392-464f-9f00-df9da3231a43/volumes" Mar 20 08:55:32 crc kubenswrapper[5136]: I0320 08:55:32.407786 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6a4482e-e21a-4e56-af69-e824ef4708da" path="/var/lib/kubelet/pods/d6a4482e-e21a-4e56-af69-e824ef4708da/volumes" Mar 20 08:55:39 crc kubenswrapper[5136]: I0320 08:55:39.835872 5136 scope.go:117] "RemoveContainer" containerID="7a4fb348e084d0c108a5953823b245c56381a86254bd1ec8ccef0cb8f458e61f" Mar 20 08:55:41 crc kubenswrapper[5136]: I0320 08:55:41.184100 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cc6c6d576-wrwl5" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.151:8443: connect: connection refused" Mar 20 08:55:45 crc kubenswrapper[5136]: I0320 08:55:45.397590 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:55:45 crc kubenswrapper[5136]: E0320 08:55:45.399181 5136 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:55:51 crc kubenswrapper[5136]: I0320 08:55:51.184291 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cc6c6d576-wrwl5" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.151:8443: connect: connection refused" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.413534 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.469049 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/170b9fcc-77b0-41b5-8e99-cd95411287e9-logs\") pod \"170b9fcc-77b0-41b5-8e99-cd95411287e9\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.469103 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f297l\" (UniqueName: \"kubernetes.io/projected/170b9fcc-77b0-41b5-8e99-cd95411287e9-kube-api-access-f297l\") pod \"170b9fcc-77b0-41b5-8e99-cd95411287e9\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.469191 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/170b9fcc-77b0-41b5-8e99-cd95411287e9-config-data\") pod \"170b9fcc-77b0-41b5-8e99-cd95411287e9\" (UID: 
\"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.469307 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-horizon-tls-certs\") pod \"170b9fcc-77b0-41b5-8e99-cd95411287e9\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.469386 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/170b9fcc-77b0-41b5-8e99-cd95411287e9-scripts\") pod \"170b9fcc-77b0-41b5-8e99-cd95411287e9\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.469462 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-combined-ca-bundle\") pod \"170b9fcc-77b0-41b5-8e99-cd95411287e9\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.470284 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-horizon-secret-key\") pod \"170b9fcc-77b0-41b5-8e99-cd95411287e9\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.470328 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/170b9fcc-77b0-41b5-8e99-cd95411287e9-logs" (OuterVolumeSpecName: "logs") pod "170b9fcc-77b0-41b5-8e99-cd95411287e9" (UID: "170b9fcc-77b0-41b5-8e99-cd95411287e9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.470890 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/170b9fcc-77b0-41b5-8e99-cd95411287e9-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.475004 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/170b9fcc-77b0-41b5-8e99-cd95411287e9-kube-api-access-f297l" (OuterVolumeSpecName: "kube-api-access-f297l") pod "170b9fcc-77b0-41b5-8e99-cd95411287e9" (UID: "170b9fcc-77b0-41b5-8e99-cd95411287e9"). InnerVolumeSpecName "kube-api-access-f297l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.477344 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "170b9fcc-77b0-41b5-8e99-cd95411287e9" (UID: "170b9fcc-77b0-41b5-8e99-cd95411287e9"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.492477 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/170b9fcc-77b0-41b5-8e99-cd95411287e9-config-data" (OuterVolumeSpecName: "config-data") pod "170b9fcc-77b0-41b5-8e99-cd95411287e9" (UID: "170b9fcc-77b0-41b5-8e99-cd95411287e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.507418 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/170b9fcc-77b0-41b5-8e99-cd95411287e9-scripts" (OuterVolumeSpecName: "scripts") pod "170b9fcc-77b0-41b5-8e99-cd95411287e9" (UID: "170b9fcc-77b0-41b5-8e99-cd95411287e9"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.507689 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "170b9fcc-77b0-41b5-8e99-cd95411287e9" (UID: "170b9fcc-77b0-41b5-8e99-cd95411287e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.530624 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "170b9fcc-77b0-41b5-8e99-cd95411287e9" (UID: "170b9fcc-77b0-41b5-8e99-cd95411287e9"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.572374 5136 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.572403 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f297l\" (UniqueName: \"kubernetes.io/projected/170b9fcc-77b0-41b5-8e99-cd95411287e9-kube-api-access-f297l\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.572413 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/170b9fcc-77b0-41b5-8e99-cd95411287e9-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.572424 5136 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.572432 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/170b9fcc-77b0-41b5-8e99-cd95411287e9-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.572440 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:56 crc kubenswrapper[5136]: I0320 08:55:56.050198 5136 generic.go:334] "Generic (PLEG): container finished" podID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerID="83314d37ae1ddbd13462dd59b1b33c0dbc22e1e06ec277b452f3f36173acb14c" exitCode=137 Mar 20 08:55:56 crc kubenswrapper[5136]: I0320 08:55:56.050246 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cc6c6d576-wrwl5" event={"ID":"170b9fcc-77b0-41b5-8e99-cd95411287e9","Type":"ContainerDied","Data":"83314d37ae1ddbd13462dd59b1b33c0dbc22e1e06ec277b452f3f36173acb14c"} Mar 20 08:55:56 crc kubenswrapper[5136]: I0320 08:55:56.050252 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:55:56 crc kubenswrapper[5136]: I0320 08:55:56.050275 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cc6c6d576-wrwl5" event={"ID":"170b9fcc-77b0-41b5-8e99-cd95411287e9","Type":"ContainerDied","Data":"21c759da75cd0657471e5a123c71e4531a1ea5ab95f97bf42da6363bf11ca95c"} Mar 20 08:55:56 crc kubenswrapper[5136]: I0320 08:55:56.050296 5136 scope.go:117] "RemoveContainer" containerID="87994f2c1ec896b001d8142fc7518d6c3a1657b0c71eeaf3fd0c68ba00f35041" Mar 20 08:55:56 crc kubenswrapper[5136]: I0320 08:55:56.097213 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cc6c6d576-wrwl5"] Mar 20 08:55:56 crc kubenswrapper[5136]: I0320 08:55:56.105362 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-cc6c6d576-wrwl5"] Mar 20 08:55:56 crc kubenswrapper[5136]: I0320 08:55:56.239689 5136 scope.go:117] "RemoveContainer" containerID="83314d37ae1ddbd13462dd59b1b33c0dbc22e1e06ec277b452f3f36173acb14c" Mar 20 08:55:56 crc kubenswrapper[5136]: I0320 08:55:56.264111 5136 scope.go:117] "RemoveContainer" containerID="87994f2c1ec896b001d8142fc7518d6c3a1657b0c71eeaf3fd0c68ba00f35041" Mar 20 08:55:56 crc kubenswrapper[5136]: E0320 08:55:56.264606 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87994f2c1ec896b001d8142fc7518d6c3a1657b0c71eeaf3fd0c68ba00f35041\": container with ID starting with 87994f2c1ec896b001d8142fc7518d6c3a1657b0c71eeaf3fd0c68ba00f35041 not found: ID does not exist" containerID="87994f2c1ec896b001d8142fc7518d6c3a1657b0c71eeaf3fd0c68ba00f35041" Mar 20 08:55:56 crc kubenswrapper[5136]: I0320 08:55:56.264643 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87994f2c1ec896b001d8142fc7518d6c3a1657b0c71eeaf3fd0c68ba00f35041"} err="failed to get container status 
\"87994f2c1ec896b001d8142fc7518d6c3a1657b0c71eeaf3fd0c68ba00f35041\": rpc error: code = NotFound desc = could not find container \"87994f2c1ec896b001d8142fc7518d6c3a1657b0c71eeaf3fd0c68ba00f35041\": container with ID starting with 87994f2c1ec896b001d8142fc7518d6c3a1657b0c71eeaf3fd0c68ba00f35041 not found: ID does not exist" Mar 20 08:55:56 crc kubenswrapper[5136]: I0320 08:55:56.264668 5136 scope.go:117] "RemoveContainer" containerID="83314d37ae1ddbd13462dd59b1b33c0dbc22e1e06ec277b452f3f36173acb14c" Mar 20 08:55:56 crc kubenswrapper[5136]: E0320 08:55:56.264902 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83314d37ae1ddbd13462dd59b1b33c0dbc22e1e06ec277b452f3f36173acb14c\": container with ID starting with 83314d37ae1ddbd13462dd59b1b33c0dbc22e1e06ec277b452f3f36173acb14c not found: ID does not exist" containerID="83314d37ae1ddbd13462dd59b1b33c0dbc22e1e06ec277b452f3f36173acb14c" Mar 20 08:55:56 crc kubenswrapper[5136]: I0320 08:55:56.264923 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83314d37ae1ddbd13462dd59b1b33c0dbc22e1e06ec277b452f3f36173acb14c"} err="failed to get container status \"83314d37ae1ddbd13462dd59b1b33c0dbc22e1e06ec277b452f3f36173acb14c\": rpc error: code = NotFound desc = could not find container \"83314d37ae1ddbd13462dd59b1b33c0dbc22e1e06ec277b452f3f36173acb14c\": container with ID starting with 83314d37ae1ddbd13462dd59b1b33c0dbc22e1e06ec277b452f3f36173acb14c not found: ID does not exist" Mar 20 08:55:56 crc kubenswrapper[5136]: I0320 08:55:56.410490 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" path="/var/lib/kubelet/pods/170b9fcc-77b0-41b5-8e99-cd95411287e9/volumes" Mar 20 08:55:59 crc kubenswrapper[5136]: I0320 08:55:59.400000 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 
08:55:59 crc kubenswrapper[5136]: E0320 08:55:59.400711 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.148973 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566616-x4rw6"] Mar 20 08:56:00 crc kubenswrapper[5136]: E0320 08:56:00.149439 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e26843c-d392-464f-9f00-df9da3231a43" containerName="horizon-log" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.149459 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e26843c-d392-464f-9f00-df9da3231a43" containerName="horizon-log" Mar 20 08:56:00 crc kubenswrapper[5136]: E0320 08:56:00.149476 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerName="horizon" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.149485 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerName="horizon" Mar 20 08:56:00 crc kubenswrapper[5136]: E0320 08:56:00.149503 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a4482e-e21a-4e56-af69-e824ef4708da" containerName="horizon-log" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.149512 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a4482e-e21a-4e56-af69-e824ef4708da" containerName="horizon-log" Mar 20 08:56:00 crc kubenswrapper[5136]: E0320 08:56:00.149536 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" 
containerName="horizon-log" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.149544 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerName="horizon-log" Mar 20 08:56:00 crc kubenswrapper[5136]: E0320 08:56:00.149568 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a4482e-e21a-4e56-af69-e824ef4708da" containerName="horizon" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.149576 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a4482e-e21a-4e56-af69-e824ef4708da" containerName="horizon" Mar 20 08:56:00 crc kubenswrapper[5136]: E0320 08:56:00.149599 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e26843c-d392-464f-9f00-df9da3231a43" containerName="horizon" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.149608 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e26843c-d392-464f-9f00-df9da3231a43" containerName="horizon" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.149866 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e26843c-d392-464f-9f00-df9da3231a43" containerName="horizon" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.149890 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerName="horizon" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.149903 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerName="horizon-log" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.149915 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a4482e-e21a-4e56-af69-e824ef4708da" containerName="horizon-log" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.149934 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e26843c-d392-464f-9f00-df9da3231a43" containerName="horizon-log" Mar 20 08:56:00 crc 
kubenswrapper[5136]: I0320 08:56:00.149959 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a4482e-e21a-4e56-af69-e824ef4708da" containerName="horizon" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.150671 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566616-x4rw6" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.152885 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.153150 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.154402 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.161141 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh59x\" (UniqueName: \"kubernetes.io/projected/0c085dee-ef7e-47eb-93aa-6ecf4d45030c-kube-api-access-gh59x\") pod \"auto-csr-approver-29566616-x4rw6\" (UID: \"0c085dee-ef7e-47eb-93aa-6ecf4d45030c\") " pod="openshift-infra/auto-csr-approver-29566616-x4rw6" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.163835 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566616-x4rw6"] Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.262624 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh59x\" (UniqueName: \"kubernetes.io/projected/0c085dee-ef7e-47eb-93aa-6ecf4d45030c-kube-api-access-gh59x\") pod \"auto-csr-approver-29566616-x4rw6\" (UID: \"0c085dee-ef7e-47eb-93aa-6ecf4d45030c\") " pod="openshift-infra/auto-csr-approver-29566616-x4rw6" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.281789 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh59x\" (UniqueName: \"kubernetes.io/projected/0c085dee-ef7e-47eb-93aa-6ecf4d45030c-kube-api-access-gh59x\") pod \"auto-csr-approver-29566616-x4rw6\" (UID: \"0c085dee-ef7e-47eb-93aa-6ecf4d45030c\") " pod="openshift-infra/auto-csr-approver-29566616-x4rw6" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.470716 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566616-x4rw6" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.888153 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566616-x4rw6"] Mar 20 08:56:01 crc kubenswrapper[5136]: I0320 08:56:01.103920 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566616-x4rw6" event={"ID":"0c085dee-ef7e-47eb-93aa-6ecf4d45030c","Type":"ContainerStarted","Data":"ae635ab104571e95e9b835d3a19f15c3dfb79921912d157ec9fb0776109cd976"} Mar 20 08:56:02 crc kubenswrapper[5136]: I0320 08:56:02.113590 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566616-x4rw6" event={"ID":"0c085dee-ef7e-47eb-93aa-6ecf4d45030c","Type":"ContainerStarted","Data":"fd6a9d8c42b14afc4c799021ad9e86afa50559ab2987d12455e53497c38a9c98"} Mar 20 08:56:02 crc kubenswrapper[5136]: I0320 08:56:02.127831 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566616-x4rw6" podStartSLOduration=1.224225847 podStartE2EDuration="2.127802662s" podCreationTimestamp="2026-03-20 08:56:00 +0000 UTC" firstStartedPulling="2026-03-20 08:56:00.889277357 +0000 UTC m=+7593.148588508" lastFinishedPulling="2026-03-20 08:56:01.792854172 +0000 UTC m=+7594.052165323" observedRunningTime="2026-03-20 08:56:02.12514767 +0000 UTC m=+7594.384458821" watchObservedRunningTime="2026-03-20 08:56:02.127802662 +0000 UTC m=+7594.387113813" Mar 
20 08:56:03 crc kubenswrapper[5136]: I0320 08:56:03.124723 5136 generic.go:334] "Generic (PLEG): container finished" podID="0c085dee-ef7e-47eb-93aa-6ecf4d45030c" containerID="fd6a9d8c42b14afc4c799021ad9e86afa50559ab2987d12455e53497c38a9c98" exitCode=0 Mar 20 08:56:03 crc kubenswrapper[5136]: I0320 08:56:03.124801 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566616-x4rw6" event={"ID":"0c085dee-ef7e-47eb-93aa-6ecf4d45030c","Type":"ContainerDied","Data":"fd6a9d8c42b14afc4c799021ad9e86afa50559ab2987d12455e53497c38a9c98"} Mar 20 08:56:04 crc kubenswrapper[5136]: I0320 08:56:04.660783 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566616-x4rw6" Mar 20 08:56:04 crc kubenswrapper[5136]: I0320 08:56:04.848271 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gh59x\" (UniqueName: \"kubernetes.io/projected/0c085dee-ef7e-47eb-93aa-6ecf4d45030c-kube-api-access-gh59x\") pod \"0c085dee-ef7e-47eb-93aa-6ecf4d45030c\" (UID: \"0c085dee-ef7e-47eb-93aa-6ecf4d45030c\") " Mar 20 08:56:04 crc kubenswrapper[5136]: I0320 08:56:04.856432 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c085dee-ef7e-47eb-93aa-6ecf4d45030c-kube-api-access-gh59x" (OuterVolumeSpecName: "kube-api-access-gh59x") pod "0c085dee-ef7e-47eb-93aa-6ecf4d45030c" (UID: "0c085dee-ef7e-47eb-93aa-6ecf4d45030c"). InnerVolumeSpecName "kube-api-access-gh59x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:56:04 crc kubenswrapper[5136]: I0320 08:56:04.950050 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gh59x\" (UniqueName: \"kubernetes.io/projected/0c085dee-ef7e-47eb-93aa-6ecf4d45030c-kube-api-access-gh59x\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:05 crc kubenswrapper[5136]: I0320 08:56:05.142150 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566616-x4rw6" event={"ID":"0c085dee-ef7e-47eb-93aa-6ecf4d45030c","Type":"ContainerDied","Data":"ae635ab104571e95e9b835d3a19f15c3dfb79921912d157ec9fb0776109cd976"} Mar 20 08:56:05 crc kubenswrapper[5136]: I0320 08:56:05.142190 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae635ab104571e95e9b835d3a19f15c3dfb79921912d157ec9fb0776109cd976" Mar 20 08:56:05 crc kubenswrapper[5136]: I0320 08:56:05.142209 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566616-x4rw6" Mar 20 08:56:05 crc kubenswrapper[5136]: I0320 08:56:05.206609 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566610-hrt5r"] Mar 20 08:56:05 crc kubenswrapper[5136]: I0320 08:56:05.218288 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566610-hrt5r"] Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.410799 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e312a5ea-3b15-4c57-8b2d-613840a5d9ca" path="/var/lib/kubelet/pods/e312a5ea-3b15-4c57-8b2d-613840a5d9ca/volumes" Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.454861 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-55ffc4694-d4d2v"] Mar 20 08:56:06 crc kubenswrapper[5136]: E0320 08:56:06.455239 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0c085dee-ef7e-47eb-93aa-6ecf4d45030c" containerName="oc" Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.455253 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c085dee-ef7e-47eb-93aa-6ecf4d45030c" containerName="oc" Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.455440 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c085dee-ef7e-47eb-93aa-6ecf4d45030c" containerName="oc" Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.456315 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.469389 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-55ffc4694-d4d2v"] Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.580084 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-combined-ca-bundle\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.580222 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1da401a4-384d-4911-bf25-0aa4c544fd0d-config-data\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.581237 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slc4h\" (UniqueName: \"kubernetes.io/projected/1da401a4-384d-4911-bf25-0aa4c544fd0d-kube-api-access-slc4h\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:06 crc 
kubenswrapper[5136]: I0320 08:56:06.581290 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da401a4-384d-4911-bf25-0aa4c544fd0d-logs\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.581390 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-horizon-tls-certs\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.581604 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1da401a4-384d-4911-bf25-0aa4c544fd0d-scripts\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.581730 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-horizon-secret-key\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.682999 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slc4h\" (UniqueName: \"kubernetes.io/projected/1da401a4-384d-4911-bf25-0aa4c544fd0d-kube-api-access-slc4h\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 
08:56:06.683051 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da401a4-384d-4911-bf25-0aa4c544fd0d-logs\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.683076 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-horizon-tls-certs\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.683118 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1da401a4-384d-4911-bf25-0aa4c544fd0d-scripts\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.683149 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-horizon-secret-key\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.683190 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-combined-ca-bundle\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.683227 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/1da401a4-384d-4911-bf25-0aa4c544fd0d-config-data\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.684512 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1da401a4-384d-4911-bf25-0aa4c544fd0d-config-data\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.684974 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1da401a4-384d-4911-bf25-0aa4c544fd0d-scripts\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.686756 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da401a4-384d-4911-bf25-0aa4c544fd0d-logs\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.694789 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-combined-ca-bundle\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.697827 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-horizon-secret-key\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " 
pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.704681 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-horizon-tls-certs\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.704878 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slc4h\" (UniqueName: \"kubernetes.io/projected/1da401a4-384d-4911-bf25-0aa4c544fd0d-kube-api-access-slc4h\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.777760 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:07 crc kubenswrapper[5136]: I0320 08:56:07.323210 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-55ffc4694-d4d2v"] Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.168704 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-2bbqx"] Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.170598 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-2bbqx" Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.172519 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55ffc4694-d4d2v" event={"ID":"1da401a4-384d-4911-bf25-0aa4c544fd0d","Type":"ContainerStarted","Data":"5d097a32fad24f683773582349a5bd70dc46ca5a6f4845399b1376ac61baa60d"} Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.172839 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55ffc4694-d4d2v" event={"ID":"1da401a4-384d-4911-bf25-0aa4c544fd0d","Type":"ContainerStarted","Data":"635dca94b151651e4c36b823760043aacfc192a7da55106aa1e62d819abeafaa"} Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.172861 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55ffc4694-d4d2v" event={"ID":"1da401a4-384d-4911-bf25-0aa4c544fd0d","Type":"ContainerStarted","Data":"a94c046fe10c34e65503024040ef0cdbc5574ea11bd41c94fb47af937848986b"} Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.182985 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-3e97-account-create-update-6qvr2"] Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.184550 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-3e97-account-create-update-6qvr2" Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.186607 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.203101 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-2bbqx"] Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.207395 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-3e97-account-create-update-6qvr2"] Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.228794 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-55ffc4694-d4d2v" podStartSLOduration=2.22876094 podStartE2EDuration="2.22876094s" podCreationTimestamp="2026-03-20 08:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:56:08.227745638 +0000 UTC m=+7600.487056789" watchObservedRunningTime="2026-03-20 08:56:08.22876094 +0000 UTC m=+7600.488072091" Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.360540 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnjgv\" (UniqueName: \"kubernetes.io/projected/7f0f0206-8535-4184-ae20-349019be47b2-kube-api-access-cnjgv\") pod \"heat-db-create-2bbqx\" (UID: \"7f0f0206-8535-4184-ae20-349019be47b2\") " pod="openstack/heat-db-create-2bbqx" Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.361236 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8r7n\" (UniqueName: \"kubernetes.io/projected/87521532-0534-4e37-9c80-809877f2a744-kube-api-access-x8r7n\") pod \"heat-3e97-account-create-update-6qvr2\" (UID: \"87521532-0534-4e37-9c80-809877f2a744\") " pod="openstack/heat-3e97-account-create-update-6qvr2" Mar 20 08:56:08 crc 
kubenswrapper[5136]: I0320 08:56:08.361412 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87521532-0534-4e37-9c80-809877f2a744-operator-scripts\") pod \"heat-3e97-account-create-update-6qvr2\" (UID: \"87521532-0534-4e37-9c80-809877f2a744\") " pod="openstack/heat-3e97-account-create-update-6qvr2" Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.361506 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f0f0206-8535-4184-ae20-349019be47b2-operator-scripts\") pod \"heat-db-create-2bbqx\" (UID: \"7f0f0206-8535-4184-ae20-349019be47b2\") " pod="openstack/heat-db-create-2bbqx" Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.463092 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8r7n\" (UniqueName: \"kubernetes.io/projected/87521532-0534-4e37-9c80-809877f2a744-kube-api-access-x8r7n\") pod \"heat-3e97-account-create-update-6qvr2\" (UID: \"87521532-0534-4e37-9c80-809877f2a744\") " pod="openstack/heat-3e97-account-create-update-6qvr2" Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.463251 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87521532-0534-4e37-9c80-809877f2a744-operator-scripts\") pod \"heat-3e97-account-create-update-6qvr2\" (UID: \"87521532-0534-4e37-9c80-809877f2a744\") " pod="openstack/heat-3e97-account-create-update-6qvr2" Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.463304 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f0f0206-8535-4184-ae20-349019be47b2-operator-scripts\") pod \"heat-db-create-2bbqx\" (UID: \"7f0f0206-8535-4184-ae20-349019be47b2\") " 
pod="openstack/heat-db-create-2bbqx"
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.463366 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnjgv\" (UniqueName: \"kubernetes.io/projected/7f0f0206-8535-4184-ae20-349019be47b2-kube-api-access-cnjgv\") pod \"heat-db-create-2bbqx\" (UID: \"7f0f0206-8535-4184-ae20-349019be47b2\") " pod="openstack/heat-db-create-2bbqx"
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.464289 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87521532-0534-4e37-9c80-809877f2a744-operator-scripts\") pod \"heat-3e97-account-create-update-6qvr2\" (UID: \"87521532-0534-4e37-9c80-809877f2a744\") " pod="openstack/heat-3e97-account-create-update-6qvr2"
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.464311 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f0f0206-8535-4184-ae20-349019be47b2-operator-scripts\") pod \"heat-db-create-2bbqx\" (UID: \"7f0f0206-8535-4184-ae20-349019be47b2\") " pod="openstack/heat-db-create-2bbqx"
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.492559 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8r7n\" (UniqueName: \"kubernetes.io/projected/87521532-0534-4e37-9c80-809877f2a744-kube-api-access-x8r7n\") pod \"heat-3e97-account-create-update-6qvr2\" (UID: \"87521532-0534-4e37-9c80-809877f2a744\") " pod="openstack/heat-3e97-account-create-update-6qvr2"
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.492802 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnjgv\" (UniqueName: \"kubernetes.io/projected/7f0f0206-8535-4184-ae20-349019be47b2-kube-api-access-cnjgv\") pod \"heat-db-create-2bbqx\" (UID: \"7f0f0206-8535-4184-ae20-349019be47b2\") " pod="openstack/heat-db-create-2bbqx"
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.493571 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-2bbqx"
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.523769 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3e97-account-create-update-6qvr2"
Mar 20 08:56:09 crc kubenswrapper[5136]: I0320 08:56:09.091766 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-2bbqx"]
Mar 20 08:56:09 crc kubenswrapper[5136]: I0320 08:56:09.152840 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-3e97-account-create-update-6qvr2"]
Mar 20 08:56:09 crc kubenswrapper[5136]: W0320 08:56:09.153127 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87521532_0534_4e37_9c80_809877f2a744.slice/crio-b89b1768c8f3567d82025d4ad6041c39319de885f379a8a39cd58707ffd25dca WatchSource:0}: Error finding container b89b1768c8f3567d82025d4ad6041c39319de885f379a8a39cd58707ffd25dca: Status 404 returned error can't find the container with id b89b1768c8f3567d82025d4ad6041c39319de885f379a8a39cd58707ffd25dca
Mar 20 08:56:09 crc kubenswrapper[5136]: I0320 08:56:09.185717 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-2bbqx" event={"ID":"7f0f0206-8535-4184-ae20-349019be47b2","Type":"ContainerStarted","Data":"f0faad162fed620a9f564868892f2e09bf11b7882145cdf840c0cd841d342c8c"}
Mar 20 08:56:09 crc kubenswrapper[5136]: I0320 08:56:09.187778 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3e97-account-create-update-6qvr2" event={"ID":"87521532-0534-4e37-9c80-809877f2a744","Type":"ContainerStarted","Data":"b89b1768c8f3567d82025d4ad6041c39319de885f379a8a39cd58707ffd25dca"}
Mar 20 08:56:10 crc kubenswrapper[5136]: I0320 08:56:10.257609 5136 generic.go:334] "Generic (PLEG): container finished" podID="87521532-0534-4e37-9c80-809877f2a744" containerID="bd3d02ee4935523ab4eb4492588717b04d2271f1f22be17fbab8ebb01a7e4c49" exitCode=0
Mar 20 08:56:10 crc kubenswrapper[5136]: I0320 08:56:10.258488 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3e97-account-create-update-6qvr2" event={"ID":"87521532-0534-4e37-9c80-809877f2a744","Type":"ContainerDied","Data":"bd3d02ee4935523ab4eb4492588717b04d2271f1f22be17fbab8ebb01a7e4c49"}
Mar 20 08:56:10 crc kubenswrapper[5136]: I0320 08:56:10.268782 5136 generic.go:334] "Generic (PLEG): container finished" podID="7f0f0206-8535-4184-ae20-349019be47b2" containerID="0596189127fdfe0bb4f8c43c9a281f3d0d01a460eb398984e9cddcf692a4beaa" exitCode=0
Mar 20 08:56:10 crc kubenswrapper[5136]: I0320 08:56:10.268839 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-2bbqx" event={"ID":"7f0f0206-8535-4184-ae20-349019be47b2","Type":"ContainerDied","Data":"0596189127fdfe0bb4f8c43c9a281f3d0d01a460eb398984e9cddcf692a4beaa"}
Mar 20 08:56:11 crc kubenswrapper[5136]: I0320 08:56:11.637435 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3e97-account-create-update-6qvr2"
Mar 20 08:56:11 crc kubenswrapper[5136]: I0320 08:56:11.771295 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87521532-0534-4e37-9c80-809877f2a744-operator-scripts\") pod \"87521532-0534-4e37-9c80-809877f2a744\" (UID: \"87521532-0534-4e37-9c80-809877f2a744\") "
Mar 20 08:56:11 crc kubenswrapper[5136]: I0320 08:56:11.771452 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8r7n\" (UniqueName: \"kubernetes.io/projected/87521532-0534-4e37-9c80-809877f2a744-kube-api-access-x8r7n\") pod \"87521532-0534-4e37-9c80-809877f2a744\" (UID: \"87521532-0534-4e37-9c80-809877f2a744\") "
Mar 20 08:56:11 crc kubenswrapper[5136]: I0320 08:56:11.771938 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87521532-0534-4e37-9c80-809877f2a744-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87521532-0534-4e37-9c80-809877f2a744" (UID: "87521532-0534-4e37-9c80-809877f2a744"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:56:11 crc kubenswrapper[5136]: I0320 08:56:11.772176 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87521532-0534-4e37-9c80-809877f2a744-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 08:56:11 crc kubenswrapper[5136]: I0320 08:56:11.777131 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87521532-0534-4e37-9c80-809877f2a744-kube-api-access-x8r7n" (OuterVolumeSpecName: "kube-api-access-x8r7n") pod "87521532-0534-4e37-9c80-809877f2a744" (UID: "87521532-0534-4e37-9c80-809877f2a744"). InnerVolumeSpecName "kube-api-access-x8r7n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:56:11 crc kubenswrapper[5136]: I0320 08:56:11.840690 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-2bbqx"
Mar 20 08:56:11 crc kubenswrapper[5136]: I0320 08:56:11.873498 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8r7n\" (UniqueName: \"kubernetes.io/projected/87521532-0534-4e37-9c80-809877f2a744-kube-api-access-x8r7n\") on node \"crc\" DevicePath \"\""
Mar 20 08:56:11 crc kubenswrapper[5136]: I0320 08:56:11.974471 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnjgv\" (UniqueName: \"kubernetes.io/projected/7f0f0206-8535-4184-ae20-349019be47b2-kube-api-access-cnjgv\") pod \"7f0f0206-8535-4184-ae20-349019be47b2\" (UID: \"7f0f0206-8535-4184-ae20-349019be47b2\") "
Mar 20 08:56:11 crc kubenswrapper[5136]: I0320 08:56:11.974575 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f0f0206-8535-4184-ae20-349019be47b2-operator-scripts\") pod \"7f0f0206-8535-4184-ae20-349019be47b2\" (UID: \"7f0f0206-8535-4184-ae20-349019be47b2\") "
Mar 20 08:56:11 crc kubenswrapper[5136]: I0320 08:56:11.975031 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f0f0206-8535-4184-ae20-349019be47b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7f0f0206-8535-4184-ae20-349019be47b2" (UID: "7f0f0206-8535-4184-ae20-349019be47b2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:56:11 crc kubenswrapper[5136]: I0320 08:56:11.977321 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f0f0206-8535-4184-ae20-349019be47b2-kube-api-access-cnjgv" (OuterVolumeSpecName: "kube-api-access-cnjgv") pod "7f0f0206-8535-4184-ae20-349019be47b2" (UID: "7f0f0206-8535-4184-ae20-349019be47b2"). InnerVolumeSpecName "kube-api-access-cnjgv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:56:12 crc kubenswrapper[5136]: I0320 08:56:12.076617 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnjgv\" (UniqueName: \"kubernetes.io/projected/7f0f0206-8535-4184-ae20-349019be47b2-kube-api-access-cnjgv\") on node \"crc\" DevicePath \"\""
Mar 20 08:56:12 crc kubenswrapper[5136]: I0320 08:56:12.076668 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f0f0206-8535-4184-ae20-349019be47b2-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 08:56:12 crc kubenswrapper[5136]: I0320 08:56:12.296297 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-2bbqx" event={"ID":"7f0f0206-8535-4184-ae20-349019be47b2","Type":"ContainerDied","Data":"f0faad162fed620a9f564868892f2e09bf11b7882145cdf840c0cd841d342c8c"}
Mar 20 08:56:12 crc kubenswrapper[5136]: I0320 08:56:12.296333 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0faad162fed620a9f564868892f2e09bf11b7882145cdf840c0cd841d342c8c"
Mar 20 08:56:12 crc kubenswrapper[5136]: I0320 08:56:12.296394 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-2bbqx"
Mar 20 08:56:12 crc kubenswrapper[5136]: I0320 08:56:12.297882 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3e97-account-create-update-6qvr2" event={"ID":"87521532-0534-4e37-9c80-809877f2a744","Type":"ContainerDied","Data":"b89b1768c8f3567d82025d4ad6041c39319de885f379a8a39cd58707ffd25dca"}
Mar 20 08:56:12 crc kubenswrapper[5136]: I0320 08:56:12.297910 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b89b1768c8f3567d82025d4ad6041c39319de885f379a8a39cd58707ffd25dca"
Mar 20 08:56:12 crc kubenswrapper[5136]: I0320 08:56:12.297955 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3e97-account-create-update-6qvr2"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.396072 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-gnx9m"]
Mar 20 08:56:13 crc kubenswrapper[5136]: E0320 08:56:13.397165 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f0f0206-8535-4184-ae20-349019be47b2" containerName="mariadb-database-create"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.397186 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f0f0206-8535-4184-ae20-349019be47b2" containerName="mariadb-database-create"
Mar 20 08:56:13 crc kubenswrapper[5136]: E0320 08:56:13.397205 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87521532-0534-4e37-9c80-809877f2a744" containerName="mariadb-account-create-update"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.397213 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="87521532-0534-4e37-9c80-809877f2a744" containerName="mariadb-account-create-update"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.397424 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f0f0206-8535-4184-ae20-349019be47b2" containerName="mariadb-database-create"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.397471 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="87521532-0534-4e37-9c80-809877f2a744" containerName="mariadb-account-create-update"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.398782 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-gnx9m"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.404790 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-gs9jr"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.405761 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.414154 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-gnx9m"]
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.503385 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f2ce8c-5295-423c-a81f-511d7abd0495-config-data\") pod \"heat-db-sync-gnx9m\" (UID: \"d5f2ce8c-5295-423c-a81f-511d7abd0495\") " pod="openstack/heat-db-sync-gnx9m"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.503486 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f2ce8c-5295-423c-a81f-511d7abd0495-combined-ca-bundle\") pod \"heat-db-sync-gnx9m\" (UID: \"d5f2ce8c-5295-423c-a81f-511d7abd0495\") " pod="openstack/heat-db-sync-gnx9m"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.503611 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7kdb\" (UniqueName: \"kubernetes.io/projected/d5f2ce8c-5295-423c-a81f-511d7abd0495-kube-api-access-p7kdb\") pod \"heat-db-sync-gnx9m\" (UID: \"d5f2ce8c-5295-423c-a81f-511d7abd0495\") " pod="openstack/heat-db-sync-gnx9m"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.605846 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f2ce8c-5295-423c-a81f-511d7abd0495-config-data\") pod \"heat-db-sync-gnx9m\" (UID: \"d5f2ce8c-5295-423c-a81f-511d7abd0495\") " pod="openstack/heat-db-sync-gnx9m"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.605933 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f2ce8c-5295-423c-a81f-511d7abd0495-combined-ca-bundle\") pod \"heat-db-sync-gnx9m\" (UID: \"d5f2ce8c-5295-423c-a81f-511d7abd0495\") " pod="openstack/heat-db-sync-gnx9m"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.606098 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7kdb\" (UniqueName: \"kubernetes.io/projected/d5f2ce8c-5295-423c-a81f-511d7abd0495-kube-api-access-p7kdb\") pod \"heat-db-sync-gnx9m\" (UID: \"d5f2ce8c-5295-423c-a81f-511d7abd0495\") " pod="openstack/heat-db-sync-gnx9m"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.611627 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f2ce8c-5295-423c-a81f-511d7abd0495-config-data\") pod \"heat-db-sync-gnx9m\" (UID: \"d5f2ce8c-5295-423c-a81f-511d7abd0495\") " pod="openstack/heat-db-sync-gnx9m"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.611702 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f2ce8c-5295-423c-a81f-511d7abd0495-combined-ca-bundle\") pod \"heat-db-sync-gnx9m\" (UID: \"d5f2ce8c-5295-423c-a81f-511d7abd0495\") " pod="openstack/heat-db-sync-gnx9m"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.625574 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7kdb\" (UniqueName: \"kubernetes.io/projected/d5f2ce8c-5295-423c-a81f-511d7abd0495-kube-api-access-p7kdb\") pod \"heat-db-sync-gnx9m\" (UID: \"d5f2ce8c-5295-423c-a81f-511d7abd0495\") " pod="openstack/heat-db-sync-gnx9m"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.717594 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-gnx9m"
Mar 20 08:56:14 crc kubenswrapper[5136]: I0320 08:56:14.190531 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-gnx9m"]
Mar 20 08:56:14 crc kubenswrapper[5136]: I0320 08:56:14.326548 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gnx9m" event={"ID":"d5f2ce8c-5295-423c-a81f-511d7abd0495","Type":"ContainerStarted","Data":"30fd3cb470777b59d4e09990972172f0a948ce7612bcf75722bfae72d9fee57e"}
Mar 20 08:56:14 crc kubenswrapper[5136]: I0320 08:56:14.397242 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2"
Mar 20 08:56:14 crc kubenswrapper[5136]: E0320 08:56:14.397486 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:56:16 crc kubenswrapper[5136]: I0320 08:56:16.778132 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:16 crc kubenswrapper[5136]: I0320 08:56:16.778400 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:23 crc kubenswrapper[5136]: I0320 08:56:23.444165 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gnx9m" event={"ID":"d5f2ce8c-5295-423c-a81f-511d7abd0495","Type":"ContainerStarted","Data":"f6175692bafced85aff7b1e4e0d62331fe62665eef24726a05ac9691debbacde"}
Mar 20 08:56:23 crc kubenswrapper[5136]: I0320 08:56:23.476673 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-gnx9m" podStartSLOduration=2.076991063 podStartE2EDuration="10.476654289s" podCreationTimestamp="2026-03-20 08:56:13 +0000 UTC" firstStartedPulling="2026-03-20 08:56:14.195768471 +0000 UTC m=+7606.455079622" lastFinishedPulling="2026-03-20 08:56:22.595431707 +0000 UTC m=+7614.854742848" observedRunningTime="2026-03-20 08:56:23.469058405 +0000 UTC m=+7615.728369566" watchObservedRunningTime="2026-03-20 08:56:23.476654289 +0000 UTC m=+7615.735965440"
Mar 20 08:56:24 crc kubenswrapper[5136]: I0320 08:56:24.452596 5136 generic.go:334] "Generic (PLEG): container finished" podID="d5f2ce8c-5295-423c-a81f-511d7abd0495" containerID="f6175692bafced85aff7b1e4e0d62331fe62665eef24726a05ac9691debbacde" exitCode=0
Mar 20 08:56:24 crc kubenswrapper[5136]: I0320 08:56:24.452690 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gnx9m" event={"ID":"d5f2ce8c-5295-423c-a81f-511d7abd0495","Type":"ContainerDied","Data":"f6175692bafced85aff7b1e4e0d62331fe62665eef24726a05ac9691debbacde"}
Mar 20 08:56:25 crc kubenswrapper[5136]: I0320 08:56:25.886917 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-gnx9m"
Mar 20 08:56:25 crc kubenswrapper[5136]: I0320 08:56:25.964967 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7kdb\" (UniqueName: \"kubernetes.io/projected/d5f2ce8c-5295-423c-a81f-511d7abd0495-kube-api-access-p7kdb\") pod \"d5f2ce8c-5295-423c-a81f-511d7abd0495\" (UID: \"d5f2ce8c-5295-423c-a81f-511d7abd0495\") "
Mar 20 08:56:25 crc kubenswrapper[5136]: I0320 08:56:25.965358 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f2ce8c-5295-423c-a81f-511d7abd0495-config-data\") pod \"d5f2ce8c-5295-423c-a81f-511d7abd0495\" (UID: \"d5f2ce8c-5295-423c-a81f-511d7abd0495\") "
Mar 20 08:56:25 crc kubenswrapper[5136]: I0320 08:56:25.965426 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f2ce8c-5295-423c-a81f-511d7abd0495-combined-ca-bundle\") pod \"d5f2ce8c-5295-423c-a81f-511d7abd0495\" (UID: \"d5f2ce8c-5295-423c-a81f-511d7abd0495\") "
Mar 20 08:56:25 crc kubenswrapper[5136]: I0320 08:56:25.971085 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5f2ce8c-5295-423c-a81f-511d7abd0495-kube-api-access-p7kdb" (OuterVolumeSpecName: "kube-api-access-p7kdb") pod "d5f2ce8c-5295-423c-a81f-511d7abd0495" (UID: "d5f2ce8c-5295-423c-a81f-511d7abd0495"). InnerVolumeSpecName "kube-api-access-p7kdb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:56:25 crc kubenswrapper[5136]: I0320 08:56:25.993798 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5f2ce8c-5295-423c-a81f-511d7abd0495-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5f2ce8c-5295-423c-a81f-511d7abd0495" (UID: "d5f2ce8c-5295-423c-a81f-511d7abd0495"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:56:26 crc kubenswrapper[5136]: I0320 08:56:26.042782 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5f2ce8c-5295-423c-a81f-511d7abd0495-config-data" (OuterVolumeSpecName: "config-data") pod "d5f2ce8c-5295-423c-a81f-511d7abd0495" (UID: "d5f2ce8c-5295-423c-a81f-511d7abd0495"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:56:26 crc kubenswrapper[5136]: I0320 08:56:26.067674 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7kdb\" (UniqueName: \"kubernetes.io/projected/d5f2ce8c-5295-423c-a81f-511d7abd0495-kube-api-access-p7kdb\") on node \"crc\" DevicePath \"\""
Mar 20 08:56:26 crc kubenswrapper[5136]: I0320 08:56:26.067705 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f2ce8c-5295-423c-a81f-511d7abd0495-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 08:56:26 crc kubenswrapper[5136]: I0320 08:56:26.067715 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f2ce8c-5295-423c-a81f-511d7abd0495-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 08:56:26 crc kubenswrapper[5136]: I0320 08:56:26.475783 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gnx9m" event={"ID":"d5f2ce8c-5295-423c-a81f-511d7abd0495","Type":"ContainerDied","Data":"30fd3cb470777b59d4e09990972172f0a948ce7612bcf75722bfae72d9fee57e"}
Mar 20 08:56:26 crc kubenswrapper[5136]: I0320 08:56:26.475843 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30fd3cb470777b59d4e09990972172f0a948ce7612bcf75722bfae72d9fee57e"
Mar 20 08:56:26 crc kubenswrapper[5136]: I0320 08:56:26.475914 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-gnx9m"
Mar 20 08:56:26 crc kubenswrapper[5136]: I0320 08:56:26.780515 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-55ffc4694-d4d2v" podUID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.156:8443: connect: connection refused"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.249678 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5644df8c69-t5dqn"]
Mar 20 08:56:28 crc kubenswrapper[5136]: E0320 08:56:28.250627 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f2ce8c-5295-423c-a81f-511d7abd0495" containerName="heat-db-sync"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.250643 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f2ce8c-5295-423c-a81f-511d7abd0495" containerName="heat-db-sync"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.250842 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5f2ce8c-5295-423c-a81f-511d7abd0495" containerName="heat-db-sync"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.251501 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5644df8c69-t5dqn"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.255178 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-gs9jr"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.255251 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.286099 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.287954 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5644df8c69-t5dqn"]
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.320088 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c6lz\" (UniqueName: \"kubernetes.io/projected/cdfda925-e99e-45fc-9fe8-c91b77e3179e-kube-api-access-7c6lz\") pod \"heat-engine-5644df8c69-t5dqn\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " pod="openstack/heat-engine-5644df8c69-t5dqn"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.320253 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-config-data\") pod \"heat-engine-5644df8c69-t5dqn\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " pod="openstack/heat-engine-5644df8c69-t5dqn"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.320361 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-config-data-custom\") pod \"heat-engine-5644df8c69-t5dqn\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " pod="openstack/heat-engine-5644df8c69-t5dqn"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.320434 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-combined-ca-bundle\") pod \"heat-engine-5644df8c69-t5dqn\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " pod="openstack/heat-engine-5644df8c69-t5dqn"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.428829 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-combined-ca-bundle\") pod \"heat-engine-5644df8c69-t5dqn\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " pod="openstack/heat-engine-5644df8c69-t5dqn"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.428942 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c6lz\" (UniqueName: \"kubernetes.io/projected/cdfda925-e99e-45fc-9fe8-c91b77e3179e-kube-api-access-7c6lz\") pod \"heat-engine-5644df8c69-t5dqn\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " pod="openstack/heat-engine-5644df8c69-t5dqn"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.429067 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-config-data\") pod \"heat-engine-5644df8c69-t5dqn\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " pod="openstack/heat-engine-5644df8c69-t5dqn"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.429203 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-config-data-custom\") pod \"heat-engine-5644df8c69-t5dqn\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " pod="openstack/heat-engine-5644df8c69-t5dqn"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.438751 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-config-data\") pod \"heat-engine-5644df8c69-t5dqn\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " pod="openstack/heat-engine-5644df8c69-t5dqn"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.441101 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-config-data-custom\") pod \"heat-engine-5644df8c69-t5dqn\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " pod="openstack/heat-engine-5644df8c69-t5dqn"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.441477 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-combined-ca-bundle\") pod \"heat-engine-5644df8c69-t5dqn\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " pod="openstack/heat-engine-5644df8c69-t5dqn"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.470719 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c6lz\" (UniqueName: \"kubernetes.io/projected/cdfda925-e99e-45fc-9fe8-c91b77e3179e-kube-api-access-7c6lz\") pod \"heat-engine-5644df8c69-t5dqn\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " pod="openstack/heat-engine-5644df8c69-t5dqn"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.476406 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7cd845d9cb-blq7p"]
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.485013 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7cd845d9cb-blq7p"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.491636 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.508872 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7cd845d9cb-blq7p"]
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.523877 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-57d5b7fdb9-xnrxq"]
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.525693 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.530350 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.532189 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-config-data\") pod \"heat-api-7cd845d9cb-blq7p\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " pod="openstack/heat-api-7cd845d9cb-blq7p"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.532404 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-combined-ca-bundle\") pod \"heat-api-7cd845d9cb-blq7p\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " pod="openstack/heat-api-7cd845d9cb-blq7p"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.532436 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-config-data-custom\") pod \"heat-api-7cd845d9cb-blq7p\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " pod="openstack/heat-api-7cd845d9cb-blq7p"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.532508 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txf47\" (UniqueName: \"kubernetes.io/projected/934aadcd-ca9b-42cf-a5a8-4474010a97a7-kube-api-access-txf47\") pod \"heat-api-7cd845d9cb-blq7p\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " pod="openstack/heat-api-7cd845d9cb-blq7p"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.545901 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-57d5b7fdb9-xnrxq"]
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.587712 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-gs9jr"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.596367 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5644df8c69-t5dqn"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.637983 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-config-data-custom\") pod \"heat-cfnapi-57d5b7fdb9-xnrxq\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.638396 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-combined-ca-bundle\") pod \"heat-api-7cd845d9cb-blq7p\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " pod="openstack/heat-api-7cd845d9cb-blq7p"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.638426 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-config-data-custom\") pod \"heat-api-7cd845d9cb-blq7p\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " pod="openstack/heat-api-7cd845d9cb-blq7p"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.638484 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kms9\" (UniqueName: \"kubernetes.io/projected/fa3f75f3-0821-4381-9b69-18074378cbf3-kube-api-access-8kms9\") pod \"heat-cfnapi-57d5b7fdb9-xnrxq\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.638513 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txf47\" (UniqueName: \"kubernetes.io/projected/934aadcd-ca9b-42cf-a5a8-4474010a97a7-kube-api-access-txf47\") pod \"heat-api-7cd845d9cb-blq7p\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " pod="openstack/heat-api-7cd845d9cb-blq7p"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.638580 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-config-data\") pod \"heat-api-7cd845d9cb-blq7p\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " pod="openstack/heat-api-7cd845d9cb-blq7p"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.638630 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-config-data\") pod \"heat-cfnapi-57d5b7fdb9-xnrxq\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.639992 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-combined-ca-bundle\") pod \"heat-cfnapi-57d5b7fdb9-xnrxq\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.651742 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-combined-ca-bundle\") pod \"heat-api-7cd845d9cb-blq7p\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " pod="openstack/heat-api-7cd845d9cb-blq7p"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.652116 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-config-data\") pod \"heat-api-7cd845d9cb-blq7p\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " pod="openstack/heat-api-7cd845d9cb-blq7p"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.652166 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-config-data-custom\") pod \"heat-api-7cd845d9cb-blq7p\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " pod="openstack/heat-api-7cd845d9cb-blq7p"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.668274 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txf47\" (UniqueName: \"kubernetes.io/projected/934aadcd-ca9b-42cf-a5a8-4474010a97a7-kube-api-access-txf47\") pod \"heat-api-7cd845d9cb-blq7p\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " pod="openstack/heat-api-7cd845d9cb-blq7p"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.746133 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-config-data-custom\") pod \"heat-cfnapi-57d5b7fdb9-xnrxq\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.746303 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kms9\" (UniqueName: \"kubernetes.io/projected/fa3f75f3-0821-4381-9b69-18074378cbf3-kube-api-access-8kms9\") pod \"heat-cfnapi-57d5b7fdb9-xnrxq\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.746533 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-config-data\") pod \"heat-cfnapi-57d5b7fdb9-xnrxq\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.746610 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-combined-ca-bundle\") pod \"heat-cfnapi-57d5b7fdb9-xnrxq\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.757008 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-config-data-custom\") pod \"heat-cfnapi-57d5b7fdb9-xnrxq\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq"
Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.773115 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName:
\"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-config-data\") pod \"heat-cfnapi-57d5b7fdb9-xnrxq\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.776882 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kms9\" (UniqueName: \"kubernetes.io/projected/fa3f75f3-0821-4381-9b69-18074378cbf3-kube-api-access-8kms9\") pod \"heat-cfnapi-57d5b7fdb9-xnrxq\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.780341 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-combined-ca-bundle\") pod \"heat-cfnapi-57d5b7fdb9-xnrxq\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.860986 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7cd845d9cb-blq7p" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.878317 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" Mar 20 08:56:29 crc kubenswrapper[5136]: I0320 08:56:29.181278 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5644df8c69-t5dqn"] Mar 20 08:56:29 crc kubenswrapper[5136]: I0320 08:56:29.397416 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:56:29 crc kubenswrapper[5136]: E0320 08:56:29.397984 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:56:29 crc kubenswrapper[5136]: W0320 08:56:29.480202 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa3f75f3_0821_4381_9b69_18074378cbf3.slice/crio-d6453cc156fe8ce41d248960559aabcd01f5f57bed73da8f8701ea242043e061 WatchSource:0}: Error finding container d6453cc156fe8ce41d248960559aabcd01f5f57bed73da8f8701ea242043e061: Status 404 returned error can't find the container with id d6453cc156fe8ce41d248960559aabcd01f5f57bed73da8f8701ea242043e061 Mar 20 08:56:29 crc kubenswrapper[5136]: I0320 08:56:29.491949 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-57d5b7fdb9-xnrxq"] Mar 20 08:56:29 crc kubenswrapper[5136]: I0320 08:56:29.532059 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" event={"ID":"fa3f75f3-0821-4381-9b69-18074378cbf3","Type":"ContainerStarted","Data":"d6453cc156fe8ce41d248960559aabcd01f5f57bed73da8f8701ea242043e061"} Mar 20 08:56:29 crc kubenswrapper[5136]: I0320 08:56:29.534046 5136 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5644df8c69-t5dqn" event={"ID":"cdfda925-e99e-45fc-9fe8-c91b77e3179e","Type":"ContainerStarted","Data":"8234d598beb1dd46142938d95bb8bea8455b6596bad4b9020d813cc16877f24c"} Mar 20 08:56:29 crc kubenswrapper[5136]: I0320 08:56:29.534101 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5644df8c69-t5dqn" event={"ID":"cdfda925-e99e-45fc-9fe8-c91b77e3179e","Type":"ContainerStarted","Data":"fb162b157e354506481d1c7139390a7d3e392195416ceb14e79870cc4731ee73"} Mar 20 08:56:29 crc kubenswrapper[5136]: I0320 08:56:29.534209 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5644df8c69-t5dqn" Mar 20 08:56:29 crc kubenswrapper[5136]: I0320 08:56:29.572512 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5644df8c69-t5dqn" podStartSLOduration=1.572478439 podStartE2EDuration="1.572478439s" podCreationTimestamp="2026-03-20 08:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:56:29.566809903 +0000 UTC m=+7621.826121064" watchObservedRunningTime="2026-03-20 08:56:29.572478439 +0000 UTC m=+7621.831789590" Mar 20 08:56:29 crc kubenswrapper[5136]: W0320 08:56:29.597949 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod934aadcd_ca9b_42cf_a5a8_4474010a97a7.slice/crio-1e15cef1a88e3eca46060f66b63f368657a1520f8f3e2b59b00a1d4191407743 WatchSource:0}: Error finding container 1e15cef1a88e3eca46060f66b63f368657a1520f8f3e2b59b00a1d4191407743: Status 404 returned error can't find the container with id 1e15cef1a88e3eca46060f66b63f368657a1520f8f3e2b59b00a1d4191407743 Mar 20 08:56:29 crc kubenswrapper[5136]: I0320 08:56:29.600829 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/heat-api-7cd845d9cb-blq7p"] Mar 20 08:56:30 crc kubenswrapper[5136]: I0320 08:56:30.553849 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7cd845d9cb-blq7p" event={"ID":"934aadcd-ca9b-42cf-a5a8-4474010a97a7","Type":"ContainerStarted","Data":"1e15cef1a88e3eca46060f66b63f368657a1520f8f3e2b59b00a1d4191407743"} Mar 20 08:56:32 crc kubenswrapper[5136]: I0320 08:56:32.571162 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" event={"ID":"fa3f75f3-0821-4381-9b69-18074378cbf3","Type":"ContainerStarted","Data":"551cf6455a7f1a2c0342803fffb1926a0492784af45f7717333269d72d5d5b43"} Mar 20 08:56:32 crc kubenswrapper[5136]: I0320 08:56:32.572192 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" Mar 20 08:56:32 crc kubenswrapper[5136]: I0320 08:56:32.573276 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7cd845d9cb-blq7p" event={"ID":"934aadcd-ca9b-42cf-a5a8-4474010a97a7","Type":"ContainerStarted","Data":"8948947ccc38fddba8f1e4bf60ba5283b5e0ca552cec8c2bcd0f1ef15f82e469"} Mar 20 08:56:32 crc kubenswrapper[5136]: I0320 08:56:32.573519 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7cd845d9cb-blq7p" Mar 20 08:56:32 crc kubenswrapper[5136]: I0320 08:56:32.590972 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" podStartSLOduration=2.315507643 podStartE2EDuration="4.590954812s" podCreationTimestamp="2026-03-20 08:56:28 +0000 UTC" firstStartedPulling="2026-03-20 08:56:29.482679238 +0000 UTC m=+7621.741990389" lastFinishedPulling="2026-03-20 08:56:31.758126407 +0000 UTC m=+7624.017437558" observedRunningTime="2026-03-20 08:56:32.587846715 +0000 UTC m=+7624.847157856" watchObservedRunningTime="2026-03-20 08:56:32.590954812 +0000 UTC m=+7624.850265963" Mar 20 08:56:32 crc 
kubenswrapper[5136]: I0320 08:56:32.618380 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7cd845d9cb-blq7p" podStartSLOduration=2.460320267 podStartE2EDuration="4.61835586s" podCreationTimestamp="2026-03-20 08:56:28 +0000 UTC" firstStartedPulling="2026-03-20 08:56:29.600970271 +0000 UTC m=+7621.860281422" lastFinishedPulling="2026-03-20 08:56:31.759005864 +0000 UTC m=+7624.018317015" observedRunningTime="2026-03-20 08:56:32.61122243 +0000 UTC m=+7624.870533591" watchObservedRunningTime="2026-03-20 08:56:32.61835586 +0000 UTC m=+7624.877667011" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.264293 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7659754fcd-klwkv"] Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.271228 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.286265 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7659754fcd-klwkv"] Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.295720 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-86cfd8fb5c-2kxss"] Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.297095 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.311143 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-697448b746-tf7pw"] Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.312352 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.319881 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-config-data\") pod \"heat-engine-7659754fcd-klwkv\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") " pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.319942 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhkvs\" (UniqueName: \"kubernetes.io/projected/53ac16e5-846e-40c1-a361-0815d231345a-kube-api-access-vhkvs\") pod \"heat-engine-7659754fcd-klwkv\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") " pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.319996 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-config-data-custom\") pod \"heat-engine-7659754fcd-klwkv\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") " pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.320068 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-combined-ca-bundle\") pod \"heat-engine-7659754fcd-klwkv\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") " pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.321530 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-86cfd8fb5c-2kxss"] Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.331188 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/heat-cfnapi-697448b746-tf7pw"] Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.421258 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-combined-ca-bundle\") pod \"heat-api-86cfd8fb5c-2kxss\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.421310 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-config-data-custom\") pod \"heat-engine-7659754fcd-klwkv\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") " pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.421341 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-config-data\") pod \"heat-cfnapi-697448b746-tf7pw\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.421357 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqgkb\" (UniqueName: \"kubernetes.io/projected/525e3cbe-0215-4bd0-b835-95c6d6001b9a-kube-api-access-wqgkb\") pod \"heat-cfnapi-697448b746-tf7pw\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.421411 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-combined-ca-bundle\") pod \"heat-cfnapi-697448b746-tf7pw\" (UID: 
\"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.421436 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-config-data-custom\") pod \"heat-cfnapi-697448b746-tf7pw\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.421452 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h45cv\" (UniqueName: \"kubernetes.io/projected/51d025cc-fa04-4871-a937-d0967d7aecf8-kube-api-access-h45cv\") pod \"heat-api-86cfd8fb5c-2kxss\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.421477 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-config-data\") pod \"heat-api-86cfd8fb5c-2kxss\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.421497 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-combined-ca-bundle\") pod \"heat-engine-7659754fcd-klwkv\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") " pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.421544 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-config-data\") pod \"heat-engine-7659754fcd-klwkv\" (UID: 
\"53ac16e5-846e-40c1-a361-0815d231345a\") " pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.421655 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhkvs\" (UniqueName: \"kubernetes.io/projected/53ac16e5-846e-40c1-a361-0815d231345a-kube-api-access-vhkvs\") pod \"heat-engine-7659754fcd-klwkv\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") " pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.421675 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-config-data-custom\") pod \"heat-api-86cfd8fb5c-2kxss\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.428027 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-combined-ca-bundle\") pod \"heat-engine-7659754fcd-klwkv\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") " pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.428490 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-config-data-custom\") pod \"heat-engine-7659754fcd-klwkv\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") " pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.429751 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-config-data\") pod \"heat-engine-7659754fcd-klwkv\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") " 
pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.440531 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhkvs\" (UniqueName: \"kubernetes.io/projected/53ac16e5-846e-40c1-a361-0815d231345a-kube-api-access-vhkvs\") pod \"heat-engine-7659754fcd-klwkv\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") " pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.523539 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-combined-ca-bundle\") pod \"heat-api-86cfd8fb5c-2kxss\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.523604 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-config-data\") pod \"heat-cfnapi-697448b746-tf7pw\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.523629 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqgkb\" (UniqueName: \"kubernetes.io/projected/525e3cbe-0215-4bd0-b835-95c6d6001b9a-kube-api-access-wqgkb\") pod \"heat-cfnapi-697448b746-tf7pw\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.523727 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-combined-ca-bundle\") pod \"heat-cfnapi-697448b746-tf7pw\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " 
pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.523766 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-config-data-custom\") pod \"heat-cfnapi-697448b746-tf7pw\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.523787 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h45cv\" (UniqueName: \"kubernetes.io/projected/51d025cc-fa04-4871-a937-d0967d7aecf8-kube-api-access-h45cv\") pod \"heat-api-86cfd8fb5c-2kxss\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.523844 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-config-data\") pod \"heat-api-86cfd8fb5c-2kxss\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.523966 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-config-data-custom\") pod \"heat-api-86cfd8fb5c-2kxss\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.528043 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-combined-ca-bundle\") pod \"heat-cfnapi-697448b746-tf7pw\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:35 
crc kubenswrapper[5136]: I0320 08:56:35.528244 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-config-data\") pod \"heat-cfnapi-697448b746-tf7pw\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.529132 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-combined-ca-bundle\") pod \"heat-api-86cfd8fb5c-2kxss\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.530606 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-config-data-custom\") pod \"heat-cfnapi-697448b746-tf7pw\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.531680 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-config-data\") pod \"heat-api-86cfd8fb5c-2kxss\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.532643 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-config-data-custom\") pod \"heat-api-86cfd8fb5c-2kxss\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.542043 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-wqgkb\" (UniqueName: \"kubernetes.io/projected/525e3cbe-0215-4bd0-b835-95c6d6001b9a-kube-api-access-wqgkb\") pod \"heat-cfnapi-697448b746-tf7pw\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.542641 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h45cv\" (UniqueName: \"kubernetes.io/projected/51d025cc-fa04-4871-a937-d0967d7aecf8-kube-api-access-h45cv\") pod \"heat-api-86cfd8fb5c-2kxss\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.587713 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.620110 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.639960 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:36 crc kubenswrapper[5136]: W0320 08:56:36.048176 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53ac16e5_846e_40c1_a361_0815d231345a.slice/crio-d50b7d1e3cf945251d9601fa11e23c7c876806ca081f20f39fb0f6c33187004b WatchSource:0}: Error finding container d50b7d1e3cf945251d9601fa11e23c7c876806ca081f20f39fb0f6c33187004b: Status 404 returned error can't find the container with id d50b7d1e3cf945251d9601fa11e23c7c876806ca081f20f39fb0f6c33187004b Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.049912 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7659754fcd-klwkv"] Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.184017 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-86cfd8fb5c-2kxss"] Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.197780 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-697448b746-tf7pw"] Mar 20 08:56:36 crc kubenswrapper[5136]: W0320 08:56:36.199151 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod525e3cbe_0215_4bd0_b835_95c6d6001b9a.slice/crio-0117f8454b4b92087fd6d028cc295e56c9566ccd0cea7c0fc8c9dfc4ba503aa1 WatchSource:0}: Error finding container 0117f8454b4b92087fd6d028cc295e56c9566ccd0cea7c0fc8c9dfc4ba503aa1: Status 404 returned error can't find the container with id 0117f8454b4b92087fd6d028cc295e56c9566ccd0cea7c0fc8c9dfc4ba503aa1 Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.338668 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7cd845d9cb-blq7p"] Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.338930 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-7cd845d9cb-blq7p" 
podUID="934aadcd-ca9b-42cf-a5a8-4474010a97a7" containerName="heat-api" containerID="cri-o://8948947ccc38fddba8f1e4bf60ba5283b5e0ca552cec8c2bcd0f1ef15f82e469" gracePeriod=60 Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.347664 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-57d5b7fdb9-xnrxq"] Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.347883 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" podUID="fa3f75f3-0821-4381-9b69-18074378cbf3" containerName="heat-cfnapi" containerID="cri-o://551cf6455a7f1a2c0342803fffb1926a0492784af45f7717333269d72d5d5b43" gracePeriod=60 Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.396484 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7dbf74ffb7-gw5nj"] Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.398382 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.401415 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.401601 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.449067 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-55f46cdf9d-2mcgl"] Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.449357 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-combined-ca-bundle\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 
08:56:36.449421 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-public-tls-certs\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.449603 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrrlc\" (UniqueName: \"kubernetes.io/projected/d397a968-433e-4de9-8ed7-d0247aa5e775-kube-api-access-vrrlc\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.450384 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7dbf74ffb7-gw5nj"] Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.450403 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-55f46cdf9d-2mcgl"] Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.450471 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.449623 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-internal-tls-certs\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.451404 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-config-data-custom\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.451517 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-config-data\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.456028 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.456532 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.553291 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrrlc\" (UniqueName: \"kubernetes.io/projected/d397a968-433e-4de9-8ed7-d0247aa5e775-kube-api-access-vrrlc\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " 
pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.554016 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-internal-tls-certs\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.554045 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-config-data\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.554107 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grml6\" (UniqueName: \"kubernetes.io/projected/25dc915a-6dbf-4622-bd14-1b372cfe9acc-kube-api-access-grml6\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.554140 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-config-data-custom\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.554168 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-internal-tls-certs\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: 
\"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.554227 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-config-data\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.554257 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-combined-ca-bundle\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.554275 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-public-tls-certs\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.554305 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-public-tls-certs\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.554326 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-config-data-custom\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: 
\"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.554398 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-combined-ca-bundle\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.560567 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-public-tls-certs\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.560600 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-config-data-custom\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.560645 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-internal-tls-certs\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.572226 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-config-data\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " 
pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.572358 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-combined-ca-bundle\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.573102 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrrlc\" (UniqueName: \"kubernetes.io/projected/d397a968-433e-4de9-8ed7-d0247aa5e775-kube-api-access-vrrlc\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.611144 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7659754fcd-klwkv" event={"ID":"53ac16e5-846e-40c1-a361-0815d231345a","Type":"ContainerStarted","Data":"72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6"} Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.611203 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7659754fcd-klwkv" event={"ID":"53ac16e5-846e-40c1-a361-0815d231345a","Type":"ContainerStarted","Data":"d50b7d1e3cf945251d9601fa11e23c7c876806ca081f20f39fb0f6c33187004b"} Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.611393 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.613305 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-697448b746-tf7pw" event={"ID":"525e3cbe-0215-4bd0-b835-95c6d6001b9a","Type":"ContainerStarted","Data":"0117f8454b4b92087fd6d028cc295e56c9566ccd0cea7c0fc8c9dfc4ba503aa1"} Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.614956 
5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-86cfd8fb5c-2kxss" event={"ID":"51d025cc-fa04-4871-a937-d0967d7aecf8","Type":"ContainerStarted","Data":"422aee8174cf59684d77d072b55a1c197dd730359392605486620f801d1fb15a"} Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.614982 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-86cfd8fb5c-2kxss" event={"ID":"51d025cc-fa04-4871-a937-d0967d7aecf8","Type":"ContainerStarted","Data":"39980fbfc379985ed683289951f5c79560baffaa957735d8c0f526ef670d64d1"} Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.630588 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7659754fcd-klwkv" podStartSLOduration=1.63057073 podStartE2EDuration="1.63057073s" podCreationTimestamp="2026-03-20 08:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:56:36.62670808 +0000 UTC m=+7628.886019241" watchObservedRunningTime="2026-03-20 08:56:36.63057073 +0000 UTC m=+7628.889881881" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.658484 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-public-tls-certs\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.658564 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-config-data-custom\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.658677 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-combined-ca-bundle\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.658752 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-config-data\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.658859 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grml6\" (UniqueName: \"kubernetes.io/projected/25dc915a-6dbf-4622-bd14-1b372cfe9acc-kube-api-access-grml6\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.658920 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-internal-tls-certs\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.663726 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-internal-tls-certs\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.664377 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-config-data\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.664501 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-public-tls-certs\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.674902 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-config-data-custom\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.713204 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grml6\" (UniqueName: \"kubernetes.io/projected/25dc915a-6dbf-4622-bd14-1b372cfe9acc-kube-api-access-grml6\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.719780 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-combined-ca-bundle\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.815302 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.832870 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.317306 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7dbf74ffb7-gw5nj"] Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.438025 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-55f46cdf9d-2mcgl"] Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.676599 5136 generic.go:334] "Generic (PLEG): container finished" podID="51d025cc-fa04-4871-a937-d0967d7aecf8" containerID="422aee8174cf59684d77d072b55a1c197dd730359392605486620f801d1fb15a" exitCode=1 Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.677031 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-86cfd8fb5c-2kxss" event={"ID":"51d025cc-fa04-4871-a937-d0967d7aecf8","Type":"ContainerDied","Data":"422aee8174cf59684d77d072b55a1c197dd730359392605486620f801d1fb15a"} Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.677255 5136 scope.go:117] "RemoveContainer" containerID="422aee8174cf59684d77d072b55a1c197dd730359392605486620f801d1fb15a" Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.682428 5136 generic.go:334] "Generic (PLEG): container finished" podID="525e3cbe-0215-4bd0-b835-95c6d6001b9a" containerID="d7941245ed006543c7b340a751fdf7534d0efba90804835ccec07fb981f3cbde" exitCode=1 Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.682899 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-697448b746-tf7pw" event={"ID":"525e3cbe-0215-4bd0-b835-95c6d6001b9a","Type":"ContainerDied","Data":"d7941245ed006543c7b340a751fdf7534d0efba90804835ccec07fb981f3cbde"} Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.683959 5136 scope.go:117] "RemoveContainer" 
containerID="d7941245ed006543c7b340a751fdf7534d0efba90804835ccec07fb981f3cbde" Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.684697 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" event={"ID":"25dc915a-6dbf-4622-bd14-1b372cfe9acc","Type":"ContainerStarted","Data":"3210b5910d0a16311abb43994ee90a4669047a646cbfeed19742a3d4c20fe707"} Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.688555 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7dbf74ffb7-gw5nj" event={"ID":"d397a968-433e-4de9-8ed7-d0247aa5e775","Type":"ContainerStarted","Data":"ca4ec121b137203fb91f384175b7088e11aa189eac35f5c700b19c4a087e9179"} Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.706297 5136 generic.go:334] "Generic (PLEG): container finished" podID="fa3f75f3-0821-4381-9b69-18074378cbf3" containerID="551cf6455a7f1a2c0342803fffb1926a0492784af45f7717333269d72d5d5b43" exitCode=0 Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.706406 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" event={"ID":"fa3f75f3-0821-4381-9b69-18074378cbf3","Type":"ContainerDied","Data":"551cf6455a7f1a2c0342803fffb1926a0492784af45f7717333269d72d5d5b43"} Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.724964 5136 generic.go:334] "Generic (PLEG): container finished" podID="934aadcd-ca9b-42cf-a5a8-4474010a97a7" containerID="8948947ccc38fddba8f1e4bf60ba5283b5e0ca552cec8c2bcd0f1ef15f82e469" exitCode=0 Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.726267 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7cd845d9cb-blq7p" event={"ID":"934aadcd-ca9b-42cf-a5a8-4474010a97a7","Type":"ContainerDied","Data":"8948947ccc38fddba8f1e4bf60ba5283b5e0ca552cec8c2bcd0f1ef15f82e469"} Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.977371 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.023411 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7cd845d9cb-blq7p" Mar 20 08:56:38 crc kubenswrapper[5136]: E0320 08:56:38.029455 5136 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod525e3cbe_0215_4bd0_b835_95c6d6001b9a.slice/crio-conmon-d7941245ed006543c7b340a751fdf7534d0efba90804835ccec07fb981f3cbde.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod525e3cbe_0215_4bd0_b835_95c6d6001b9a.slice/crio-d7941245ed006543c7b340a751fdf7534d0efba90804835ccec07fb981f3cbde.scope\": RecentStats: unable to find data in memory cache]" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.096377 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-config-data\") pod \"fa3f75f3-0821-4381-9b69-18074378cbf3\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.096462 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kms9\" (UniqueName: \"kubernetes.io/projected/fa3f75f3-0821-4381-9b69-18074378cbf3-kube-api-access-8kms9\") pod \"fa3f75f3-0821-4381-9b69-18074378cbf3\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.096481 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-config-data-custom\") pod \"fa3f75f3-0821-4381-9b69-18074378cbf3\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " Mar 20 
08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.096528 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-config-data-custom\") pod \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.096595 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-config-data\") pod \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.096633 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-combined-ca-bundle\") pod \"fa3f75f3-0821-4381-9b69-18074378cbf3\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.096680 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txf47\" (UniqueName: \"kubernetes.io/projected/934aadcd-ca9b-42cf-a5a8-4474010a97a7-kube-api-access-txf47\") pod \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.096721 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-combined-ca-bundle\") pod \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.128061 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "934aadcd-ca9b-42cf-a5a8-4474010a97a7" (UID: "934aadcd-ca9b-42cf-a5a8-4474010a97a7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.199636 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.267823 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa3f75f3-0821-4381-9b69-18074378cbf3-kube-api-access-8kms9" (OuterVolumeSpecName: "kube-api-access-8kms9") pod "fa3f75f3-0821-4381-9b69-18074378cbf3" (UID: "fa3f75f3-0821-4381-9b69-18074378cbf3"). InnerVolumeSpecName "kube-api-access-8kms9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.267973 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/934aadcd-ca9b-42cf-a5a8-4474010a97a7-kube-api-access-txf47" (OuterVolumeSpecName: "kube-api-access-txf47") pod "934aadcd-ca9b-42cf-a5a8-4474010a97a7" (UID: "934aadcd-ca9b-42cf-a5a8-4474010a97a7"). InnerVolumeSpecName "kube-api-access-txf47". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.295983 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fa3f75f3-0821-4381-9b69-18074378cbf3" (UID: "fa3f75f3-0821-4381-9b69-18074378cbf3"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.302333 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kms9\" (UniqueName: \"kubernetes.io/projected/fa3f75f3-0821-4381-9b69-18074378cbf3-kube-api-access-8kms9\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.302380 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.302394 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txf47\" (UniqueName: \"kubernetes.io/projected/934aadcd-ca9b-42cf-a5a8-4474010a97a7-kube-api-access-txf47\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.307999 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa3f75f3-0821-4381-9b69-18074378cbf3" (UID: "fa3f75f3-0821-4381-9b69-18074378cbf3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.328329 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "934aadcd-ca9b-42cf-a5a8-4474010a97a7" (UID: "934aadcd-ca9b-42cf-a5a8-4474010a97a7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.372114 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-config-data" (OuterVolumeSpecName: "config-data") pod "fa3f75f3-0821-4381-9b69-18074378cbf3" (UID: "fa3f75f3-0821-4381-9b69-18074378cbf3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.386260 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-config-data" (OuterVolumeSpecName: "config-data") pod "934aadcd-ca9b-42cf-a5a8-4474010a97a7" (UID: "934aadcd-ca9b-42cf-a5a8-4474010a97a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.404431 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.404480 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.404493 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.404505 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:38 crc 
kubenswrapper[5136]: I0320 08:56:38.740743 5136 generic.go:334] "Generic (PLEG): container finished" podID="525e3cbe-0215-4bd0-b835-95c6d6001b9a" containerID="b59ae2f9ff40099b2be5fe13eb938b733e97fba443e458de99f2f275ddc990eb" exitCode=1 Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.740797 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-697448b746-tf7pw" event={"ID":"525e3cbe-0215-4bd0-b835-95c6d6001b9a","Type":"ContainerDied","Data":"b59ae2f9ff40099b2be5fe13eb938b733e97fba443e458de99f2f275ddc990eb"} Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.741020 5136 scope.go:117] "RemoveContainer" containerID="d7941245ed006543c7b340a751fdf7534d0efba90804835ccec07fb981f3cbde" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.742105 5136 scope.go:117] "RemoveContainer" containerID="b59ae2f9ff40099b2be5fe13eb938b733e97fba443e458de99f2f275ddc990eb" Mar 20 08:56:38 crc kubenswrapper[5136]: E0320 08:56:38.742522 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-697448b746-tf7pw_openstack(525e3cbe-0215-4bd0-b835-95c6d6001b9a)\"" pod="openstack/heat-cfnapi-697448b746-tf7pw" podUID="525e3cbe-0215-4bd0-b835-95c6d6001b9a" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.750577 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" event={"ID":"25dc915a-6dbf-4622-bd14-1b372cfe9acc","Type":"ContainerStarted","Data":"3c6f66a5f458fac9c5e4e3c5c401fe49e93f98ab9783d1eb99b59cc0b35fe3d5"} Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.750732 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.760393 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7dbf74ffb7-gw5nj" 
event={"ID":"d397a968-433e-4de9-8ed7-d0247aa5e775","Type":"ContainerStarted","Data":"c94da038d11912247034fdd2fdddaddf98271b2ae30e1d7514d6c19c3c8f08be"} Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.760462 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.786034 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" event={"ID":"fa3f75f3-0821-4381-9b69-18074378cbf3","Type":"ContainerDied","Data":"d6453cc156fe8ce41d248960559aabcd01f5f57bed73da8f8701ea242043e061"} Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.786135 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.796475 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7cd845d9cb-blq7p" event={"ID":"934aadcd-ca9b-42cf-a5a8-4474010a97a7","Type":"ContainerDied","Data":"1e15cef1a88e3eca46060f66b63f368657a1520f8f3e2b59b00a1d4191407743"} Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.796591 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7cd845d9cb-blq7p" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.834258 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" podStartSLOduration=2.834230866 podStartE2EDuration="2.834230866s" podCreationTimestamp="2026-03-20 08:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:56:38.805432124 +0000 UTC m=+7631.064743285" watchObservedRunningTime="2026-03-20 08:56:38.834230866 +0000 UTC m=+7631.093542017" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.856442 5136 generic.go:334] "Generic (PLEG): container finished" podID="51d025cc-fa04-4871-a937-d0967d7aecf8" containerID="99b39483f0a530bffc60cde69301f0d30d01610550aa2bb87546c31a1ae6964b" exitCode=1 Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.857538 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-86cfd8fb5c-2kxss" event={"ID":"51d025cc-fa04-4871-a937-d0967d7aecf8","Type":"ContainerDied","Data":"99b39483f0a530bffc60cde69301f0d30d01610550aa2bb87546c31a1ae6964b"} Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.858032 5136 scope.go:117] "RemoveContainer" containerID="99b39483f0a530bffc60cde69301f0d30d01610550aa2bb87546c31a1ae6964b" Mar 20 08:56:38 crc kubenswrapper[5136]: E0320 08:56:38.858727 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-86cfd8fb5c-2kxss_openstack(51d025cc-fa04-4871-a937-d0967d7aecf8)\"" pod="openstack/heat-api-86cfd8fb5c-2kxss" podUID="51d025cc-fa04-4871-a937-d0967d7aecf8" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.887787 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7dbf74ffb7-gw5nj" 
podStartSLOduration=2.887765583 podStartE2EDuration="2.887765583s" podCreationTimestamp="2026-03-20 08:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:56:38.8369689 +0000 UTC m=+7631.096280051" watchObservedRunningTime="2026-03-20 08:56:38.887765583 +0000 UTC m=+7631.147076734" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.909587 5136 scope.go:117] "RemoveContainer" containerID="551cf6455a7f1a2c0342803fffb1926a0492784af45f7717333269d72d5d5b43" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.935666 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-57d5b7fdb9-xnrxq"] Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.945955 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-57d5b7fdb9-xnrxq"] Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.946244 5136 scope.go:117] "RemoveContainer" containerID="8948947ccc38fddba8f1e4bf60ba5283b5e0ca552cec8c2bcd0f1ef15f82e469" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.966981 5136 scope.go:117] "RemoveContainer" containerID="422aee8174cf59684d77d072b55a1c197dd730359392605486620f801d1fb15a" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.973458 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7cd845d9cb-blq7p"] Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.983085 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-7cd845d9cb-blq7p"] Mar 20 08:56:39 crc kubenswrapper[5136]: I0320 08:56:39.870614 5136 scope.go:117] "RemoveContainer" containerID="99b39483f0a530bffc60cde69301f0d30d01610550aa2bb87546c31a1ae6964b" Mar 20 08:56:39 crc kubenswrapper[5136]: E0320 08:56:39.871229 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api 
pod=heat-api-86cfd8fb5c-2kxss_openstack(51d025cc-fa04-4871-a937-d0967d7aecf8)\"" pod="openstack/heat-api-86cfd8fb5c-2kxss" podUID="51d025cc-fa04-4871-a937-d0967d7aecf8" Mar 20 08:56:39 crc kubenswrapper[5136]: I0320 08:56:39.874451 5136 scope.go:117] "RemoveContainer" containerID="b59ae2f9ff40099b2be5fe13eb938b733e97fba443e458de99f2f275ddc990eb" Mar 20 08:56:39 crc kubenswrapper[5136]: E0320 08:56:39.874855 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-697448b746-tf7pw_openstack(525e3cbe-0215-4bd0-b835-95c6d6001b9a)\"" pod="openstack/heat-cfnapi-697448b746-tf7pw" podUID="525e3cbe-0215-4bd0-b835-95c6d6001b9a" Mar 20 08:56:39 crc kubenswrapper[5136]: I0320 08:56:39.878911 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:39 crc kubenswrapper[5136]: I0320 08:56:39.979020 5136 scope.go:117] "RemoveContainer" containerID="482a97e7c7d9733c356a59b74d29e1b51c08c0378829f0707d6918c34c51d893" Mar 20 08:56:40 crc kubenswrapper[5136]: I0320 08:56:40.408370 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="934aadcd-ca9b-42cf-a5a8-4474010a97a7" path="/var/lib/kubelet/pods/934aadcd-ca9b-42cf-a5a8-4474010a97a7/volumes" Mar 20 08:56:40 crc kubenswrapper[5136]: I0320 08:56:40.408944 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa3f75f3-0821-4381-9b69-18074378cbf3" path="/var/lib/kubelet/pods/fa3f75f3-0821-4381-9b69-18074378cbf3/volumes" Mar 20 08:56:40 crc kubenswrapper[5136]: I0320 08:56:40.620431 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:40 crc kubenswrapper[5136]: I0320 08:56:40.620753 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 
08:56:40 crc kubenswrapper[5136]: I0320 08:56:40.641132 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:40 crc kubenswrapper[5136]: I0320 08:56:40.641169 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:40 crc kubenswrapper[5136]: I0320 08:56:40.881354 5136 scope.go:117] "RemoveContainer" containerID="b59ae2f9ff40099b2be5fe13eb938b733e97fba443e458de99f2f275ddc990eb" Mar 20 08:56:40 crc kubenswrapper[5136]: I0320 08:56:40.881505 5136 scope.go:117] "RemoveContainer" containerID="99b39483f0a530bffc60cde69301f0d30d01610550aa2bb87546c31a1ae6964b" Mar 20 08:56:40 crc kubenswrapper[5136]: E0320 08:56:40.881709 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-697448b746-tf7pw_openstack(525e3cbe-0215-4bd0-b835-95c6d6001b9a)\"" pod="openstack/heat-cfnapi-697448b746-tf7pw" podUID="525e3cbe-0215-4bd0-b835-95c6d6001b9a" Mar 20 08:56:40 crc kubenswrapper[5136]: E0320 08:56:40.881717 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-86cfd8fb5c-2kxss_openstack(51d025cc-fa04-4871-a937-d0967d7aecf8)\"" pod="openstack/heat-api-86cfd8fb5c-2kxss" podUID="51d025cc-fa04-4871-a937-d0967d7aecf8" Mar 20 08:56:41 crc kubenswrapper[5136]: I0320 08:56:41.397046 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:56:41 crc kubenswrapper[5136]: E0320 08:56:41.397393 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:56:41 crc kubenswrapper[5136]: I0320 08:56:41.588308 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:41 crc kubenswrapper[5136]: I0320 08:56:41.652074 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-96f64bfb8-g7cfv"] Mar 20 08:56:41 crc kubenswrapper[5136]: I0320 08:56:41.652306 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-96f64bfb8-g7cfv" podUID="07e0c938-d0f6-43dc-8864-68149aedc96c" containerName="horizon-log" containerID="cri-o://2ea3b6de68a43a824c78c0e4f561d092ba769891c4d2b0ed7160300307750733" gracePeriod=30 Mar 20 08:56:41 crc kubenswrapper[5136]: I0320 08:56:41.652950 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-96f64bfb8-g7cfv" podUID="07e0c938-d0f6-43dc-8864-68149aedc96c" containerName="horizon" containerID="cri-o://60f749df76d18a10141d2062a40ea962edc0b52ceb6a36857635111c1d44eaf0" gracePeriod=30 Mar 20 08:56:45 crc kubenswrapper[5136]: I0320 08:56:45.929869 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-96f64bfb8-g7cfv" event={"ID":"07e0c938-d0f6-43dc-8864-68149aedc96c","Type":"ContainerDied","Data":"60f749df76d18a10141d2062a40ea962edc0b52ceb6a36857635111c1d44eaf0"} Mar 20 08:56:45 crc kubenswrapper[5136]: I0320 08:56:45.929890 5136 generic.go:334] "Generic (PLEG): container finished" podID="07e0c938-d0f6-43dc-8864-68149aedc96c" containerID="60f749df76d18a10141d2062a40ea962edc0b52ceb6a36857635111c1d44eaf0" exitCode=0 Mar 20 08:56:48 crc kubenswrapper[5136]: I0320 08:56:48.155166 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:48 crc kubenswrapper[5136]: I0320 08:56:48.189341 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:48 crc kubenswrapper[5136]: I0320 08:56:48.241437 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-86cfd8fb5c-2kxss"] Mar 20 08:56:48 crc kubenswrapper[5136]: I0320 08:56:48.281193 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-697448b746-tf7pw"] Mar 20 08:56:48 crc kubenswrapper[5136]: I0320 08:56:48.971217 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5644df8c69-t5dqn" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.377226 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.395014 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.522666 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-combined-ca-bundle\") pod \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.522725 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-config-data-custom\") pod \"51d025cc-fa04-4871-a937-d0967d7aecf8\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.522749 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-config-data-custom\") pod \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.522805 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-config-data\") pod \"51d025cc-fa04-4871-a937-d0967d7aecf8\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.522892 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-config-data\") pod \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.523077 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-wqgkb\" (UniqueName: \"kubernetes.io/projected/525e3cbe-0215-4bd0-b835-95c6d6001b9a-kube-api-access-wqgkb\") pod \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.523180 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-combined-ca-bundle\") pod \"51d025cc-fa04-4871-a937-d0967d7aecf8\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.523226 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h45cv\" (UniqueName: \"kubernetes.io/projected/51d025cc-fa04-4871-a937-d0967d7aecf8-kube-api-access-h45cv\") pod \"51d025cc-fa04-4871-a937-d0967d7aecf8\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.529412 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "525e3cbe-0215-4bd0-b835-95c6d6001b9a" (UID: "525e3cbe-0215-4bd0-b835-95c6d6001b9a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.530001 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/525e3cbe-0215-4bd0-b835-95c6d6001b9a-kube-api-access-wqgkb" (OuterVolumeSpecName: "kube-api-access-wqgkb") pod "525e3cbe-0215-4bd0-b835-95c6d6001b9a" (UID: "525e3cbe-0215-4bd0-b835-95c6d6001b9a"). InnerVolumeSpecName "kube-api-access-wqgkb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.531942 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "51d025cc-fa04-4871-a937-d0967d7aecf8" (UID: "51d025cc-fa04-4871-a937-d0967d7aecf8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.532867 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51d025cc-fa04-4871-a937-d0967d7aecf8-kube-api-access-h45cv" (OuterVolumeSpecName: "kube-api-access-h45cv") pod "51d025cc-fa04-4871-a937-d0967d7aecf8" (UID: "51d025cc-fa04-4871-a937-d0967d7aecf8"). InnerVolumeSpecName "kube-api-access-h45cv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.556251 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51d025cc-fa04-4871-a937-d0967d7aecf8" (UID: "51d025cc-fa04-4871-a937-d0967d7aecf8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.564227 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "525e3cbe-0215-4bd0-b835-95c6d6001b9a" (UID: "525e3cbe-0215-4bd0-b835-95c6d6001b9a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.584545 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-config-data" (OuterVolumeSpecName: "config-data") pod "525e3cbe-0215-4bd0-b835-95c6d6001b9a" (UID: "525e3cbe-0215-4bd0-b835-95c6d6001b9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.595626 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-config-data" (OuterVolumeSpecName: "config-data") pod "51d025cc-fa04-4871-a937-d0967d7aecf8" (UID: "51d025cc-fa04-4871-a937-d0967d7aecf8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.625285 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqgkb\" (UniqueName: \"kubernetes.io/projected/525e3cbe-0215-4bd0-b835-95c6d6001b9a-kube-api-access-wqgkb\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.625363 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.625378 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h45cv\" (UniqueName: \"kubernetes.io/projected/51d025cc-fa04-4871-a937-d0967d7aecf8-kube-api-access-h45cv\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.625389 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.625429 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.625442 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.625453 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.625468 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.984665 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.984659 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-697448b746-tf7pw" event={"ID":"525e3cbe-0215-4bd0-b835-95c6d6001b9a","Type":"ContainerDied","Data":"0117f8454b4b92087fd6d028cc295e56c9566ccd0cea7c0fc8c9dfc4ba503aa1"} Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.985086 5136 scope.go:117] "RemoveContainer" containerID="b59ae2f9ff40099b2be5fe13eb938b733e97fba443e458de99f2f275ddc990eb" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.986356 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-86cfd8fb5c-2kxss" event={"ID":"51d025cc-fa04-4871-a937-d0967d7aecf8","Type":"ContainerDied","Data":"39980fbfc379985ed683289951f5c79560baffaa957735d8c0f526ef670d64d1"} Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.986393 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:50 crc kubenswrapper[5136]: I0320 08:56:50.020345 5136 scope.go:117] "RemoveContainer" containerID="99b39483f0a530bffc60cde69301f0d30d01610550aa2bb87546c31a1ae6964b" Mar 20 08:56:50 crc kubenswrapper[5136]: I0320 08:56:50.021520 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-86cfd8fb5c-2kxss"] Mar 20 08:56:50 crc kubenswrapper[5136]: I0320 08:56:50.038186 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-86cfd8fb5c-2kxss"] Mar 20 08:56:50 crc kubenswrapper[5136]: I0320 08:56:50.047170 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-697448b746-tf7pw"] Mar 20 08:56:50 crc kubenswrapper[5136]: I0320 08:56:50.057477 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-697448b746-tf7pw"] Mar 20 08:56:50 crc kubenswrapper[5136]: I0320 08:56:50.407602 5136 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="51d025cc-fa04-4871-a937-d0967d7aecf8" path="/var/lib/kubelet/pods/51d025cc-fa04-4871-a937-d0967d7aecf8/volumes" Mar 20 08:56:50 crc kubenswrapper[5136]: I0320 08:56:50.408129 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="525e3cbe-0215-4bd0-b835-95c6d6001b9a" path="/var/lib/kubelet/pods/525e3cbe-0215-4bd0-b835-95c6d6001b9a/volumes" Mar 20 08:56:51 crc kubenswrapper[5136]: I0320 08:56:51.416908 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-96f64bfb8-g7cfv" podUID="07e0c938-d0f6-43dc-8864-68149aedc96c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.152:8443: connect: connection refused" Mar 20 08:56:52 crc kubenswrapper[5136]: I0320 08:56:52.397412 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:56:52 crc kubenswrapper[5136]: E0320 08:56:52.397743 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:56:55 crc kubenswrapper[5136]: I0320 08:56:55.634846 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:55 crc kubenswrapper[5136]: I0320 08:56:55.690226 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5644df8c69-t5dqn"] Mar 20 08:56:55 crc kubenswrapper[5136]: I0320 08:56:55.690446 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-5644df8c69-t5dqn" 
podUID="cdfda925-e99e-45fc-9fe8-c91b77e3179e" containerName="heat-engine" containerID="cri-o://8234d598beb1dd46142938d95bb8bea8455b6596bad4b9020d813cc16877f24c" gracePeriod=60 Mar 20 08:56:58 crc kubenswrapper[5136]: E0320 08:56:58.615493 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8234d598beb1dd46142938d95bb8bea8455b6596bad4b9020d813cc16877f24c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 08:56:58 crc kubenswrapper[5136]: E0320 08:56:58.641364 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8234d598beb1dd46142938d95bb8bea8455b6596bad4b9020d813cc16877f24c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 08:56:58 crc kubenswrapper[5136]: E0320 08:56:58.643614 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8234d598beb1dd46142938d95bb8bea8455b6596bad4b9020d813cc16877f24c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 08:56:58 crc kubenswrapper[5136]: E0320 08:56:58.643652 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5644df8c69-t5dqn" podUID="cdfda925-e99e-45fc-9fe8-c91b77e3179e" containerName="heat-engine" Mar 20 08:57:01 crc kubenswrapper[5136]: I0320 08:57:01.416979 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-96f64bfb8-g7cfv" podUID="07e0c938-d0f6-43dc-8864-68149aedc96c" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.1.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.152:8443: connect: connection refused" Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.055075 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-gqtht"] Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.064656 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-gqtht"] Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.080109 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-24c6-account-create-update-625nw"] Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.093684 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-24c6-account-create-update-625nw"] Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.096106 5136 generic.go:334] "Generic (PLEG): container finished" podID="cdfda925-e99e-45fc-9fe8-c91b77e3179e" containerID="8234d598beb1dd46142938d95bb8bea8455b6596bad4b9020d813cc16877f24c" exitCode=0 Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.096141 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5644df8c69-t5dqn" event={"ID":"cdfda925-e99e-45fc-9fe8-c91b77e3179e","Type":"ContainerDied","Data":"8234d598beb1dd46142938d95bb8bea8455b6596bad4b9020d813cc16877f24c"} Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.322425 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5644df8c69-t5dqn" Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.411241 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06c8bf45-d717-45f4-9679-7f6b69835f8a" path="/var/lib/kubelet/pods/06c8bf45-d717-45f4-9679-7f6b69835f8a/volumes" Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.413412 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4aab638-4f7d-46a0-bc82-10fe569b56db" path="/var/lib/kubelet/pods/a4aab638-4f7d-46a0-bc82-10fe569b56db/volumes" Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.468903 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-combined-ca-bundle\") pod \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.469137 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c6lz\" (UniqueName: \"kubernetes.io/projected/cdfda925-e99e-45fc-9fe8-c91b77e3179e-kube-api-access-7c6lz\") pod \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.469520 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-config-data\") pod \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.469606 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-config-data-custom\") pod \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") 
" Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.494210 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdfda925-e99e-45fc-9fe8-c91b77e3179e-kube-api-access-7c6lz" (OuterVolumeSpecName: "kube-api-access-7c6lz") pod "cdfda925-e99e-45fc-9fe8-c91b77e3179e" (UID: "cdfda925-e99e-45fc-9fe8-c91b77e3179e"). InnerVolumeSpecName "kube-api-access-7c6lz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.505162 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cdfda925-e99e-45fc-9fe8-c91b77e3179e" (UID: "cdfda925-e99e-45fc-9fe8-c91b77e3179e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.528615 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cdfda925-e99e-45fc-9fe8-c91b77e3179e" (UID: "cdfda925-e99e-45fc-9fe8-c91b77e3179e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.535769 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-config-data" (OuterVolumeSpecName: "config-data") pod "cdfda925-e99e-45fc-9fe8-c91b77e3179e" (UID: "cdfda925-e99e-45fc-9fe8-c91b77e3179e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.572861 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c6lz\" (UniqueName: \"kubernetes.io/projected/cdfda925-e99e-45fc-9fe8-c91b77e3179e-kube-api-access-7c6lz\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.572901 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.572914 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.572926 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:03 crc kubenswrapper[5136]: I0320 08:57:03.106074 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5644df8c69-t5dqn" event={"ID":"cdfda925-e99e-45fc-9fe8-c91b77e3179e","Type":"ContainerDied","Data":"fb162b157e354506481d1c7139390a7d3e392195416ceb14e79870cc4731ee73"} Mar 20 08:57:03 crc kubenswrapper[5136]: I0320 08:57:03.106120 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5644df8c69-t5dqn" Mar 20 08:57:03 crc kubenswrapper[5136]: I0320 08:57:03.106415 5136 scope.go:117] "RemoveContainer" containerID="8234d598beb1dd46142938d95bb8bea8455b6596bad4b9020d813cc16877f24c" Mar 20 08:57:03 crc kubenswrapper[5136]: I0320 08:57:03.141117 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5644df8c69-t5dqn"] Mar 20 08:57:03 crc kubenswrapper[5136]: I0320 08:57:03.151038 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-5644df8c69-t5dqn"] Mar 20 08:57:04 crc kubenswrapper[5136]: I0320 08:57:04.398647 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:57:04 crc kubenswrapper[5136]: E0320 08:57:04.398957 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:57:04 crc kubenswrapper[5136]: I0320 08:57:04.410845 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdfda925-e99e-45fc-9fe8-c91b77e3179e" path="/var/lib/kubelet/pods/cdfda925-e99e-45fc-9fe8-c91b77e3179e/volumes" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.028368 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4dn6b"] Mar 20 08:57:07 crc kubenswrapper[5136]: E0320 08:57:07.029050 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdfda925-e99e-45fc-9fe8-c91b77e3179e" containerName="heat-engine" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029062 5136 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cdfda925-e99e-45fc-9fe8-c91b77e3179e" containerName="heat-engine" Mar 20 08:57:07 crc kubenswrapper[5136]: E0320 08:57:07.029075 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa3f75f3-0821-4381-9b69-18074378cbf3" containerName="heat-cfnapi" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029082 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa3f75f3-0821-4381-9b69-18074378cbf3" containerName="heat-cfnapi" Mar 20 08:57:07 crc kubenswrapper[5136]: E0320 08:57:07.029097 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51d025cc-fa04-4871-a937-d0967d7aecf8" containerName="heat-api" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029103 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="51d025cc-fa04-4871-a937-d0967d7aecf8" containerName="heat-api" Mar 20 08:57:07 crc kubenswrapper[5136]: E0320 08:57:07.029117 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934aadcd-ca9b-42cf-a5a8-4474010a97a7" containerName="heat-api" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029124 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="934aadcd-ca9b-42cf-a5a8-4474010a97a7" containerName="heat-api" Mar 20 08:57:07 crc kubenswrapper[5136]: E0320 08:57:07.029137 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="525e3cbe-0215-4bd0-b835-95c6d6001b9a" containerName="heat-cfnapi" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029172 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="525e3cbe-0215-4bd0-b835-95c6d6001b9a" containerName="heat-cfnapi" Mar 20 08:57:07 crc kubenswrapper[5136]: E0320 08:57:07.029188 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51d025cc-fa04-4871-a937-d0967d7aecf8" containerName="heat-api" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029194 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="51d025cc-fa04-4871-a937-d0967d7aecf8" 
containerName="heat-api" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029511 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="51d025cc-fa04-4871-a937-d0967d7aecf8" containerName="heat-api" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029521 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdfda925-e99e-45fc-9fe8-c91b77e3179e" containerName="heat-engine" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029536 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="934aadcd-ca9b-42cf-a5a8-4474010a97a7" containerName="heat-api" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029547 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="51d025cc-fa04-4871-a937-d0967d7aecf8" containerName="heat-api" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029560 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="525e3cbe-0215-4bd0-b835-95c6d6001b9a" containerName="heat-cfnapi" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029568 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa3f75f3-0821-4381-9b69-18074378cbf3" containerName="heat-cfnapi" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029579 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="525e3cbe-0215-4bd0-b835-95c6d6001b9a" containerName="heat-cfnapi" Mar 20 08:57:07 crc kubenswrapper[5136]: E0320 08:57:07.029740 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="525e3cbe-0215-4bd0-b835-95c6d6001b9a" containerName="heat-cfnapi" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029748 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="525e3cbe-0215-4bd0-b835-95c6d6001b9a" containerName="heat-cfnapi" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.030782 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.044489 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4dn6b"] Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.071708 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f642bb1-6f66-4460-922a-a497e5b3dc6a-catalog-content\") pod \"redhat-marketplace-4dn6b\" (UID: \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\") " pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.071784 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7wsk\" (UniqueName: \"kubernetes.io/projected/4f642bb1-6f66-4460-922a-a497e5b3dc6a-kube-api-access-c7wsk\") pod \"redhat-marketplace-4dn6b\" (UID: \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\") " pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.071911 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f642bb1-6f66-4460-922a-a497e5b3dc6a-utilities\") pod \"redhat-marketplace-4dn6b\" (UID: \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\") " pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.174040 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f642bb1-6f66-4460-922a-a497e5b3dc6a-catalog-content\") pod \"redhat-marketplace-4dn6b\" (UID: \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\") " pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.174114 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-c7wsk\" (UniqueName: \"kubernetes.io/projected/4f642bb1-6f66-4460-922a-a497e5b3dc6a-kube-api-access-c7wsk\") pod \"redhat-marketplace-4dn6b\" (UID: \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\") " pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.174191 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f642bb1-6f66-4460-922a-a497e5b3dc6a-utilities\") pod \"redhat-marketplace-4dn6b\" (UID: \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\") " pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.174590 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f642bb1-6f66-4460-922a-a497e5b3dc6a-utilities\") pod \"redhat-marketplace-4dn6b\" (UID: \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\") " pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.174600 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f642bb1-6f66-4460-922a-a497e5b3dc6a-catalog-content\") pod \"redhat-marketplace-4dn6b\" (UID: \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\") " pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.197053 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7wsk\" (UniqueName: \"kubernetes.io/projected/4f642bb1-6f66-4460-922a-a497e5b3dc6a-kube-api-access-c7wsk\") pod \"redhat-marketplace-4dn6b\" (UID: \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\") " pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.380311 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.898588 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4dn6b"] Mar 20 08:57:08 crc kubenswrapper[5136]: I0320 08:57:08.156268 5136 generic.go:334] "Generic (PLEG): container finished" podID="4f642bb1-6f66-4460-922a-a497e5b3dc6a" containerID="f247c8e38bbb05f605b94a9227f4af9c64bdf63bed25f5037a63d33c0e0bad77" exitCode=0 Mar 20 08:57:08 crc kubenswrapper[5136]: I0320 08:57:08.156361 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dn6b" event={"ID":"4f642bb1-6f66-4460-922a-a497e5b3dc6a","Type":"ContainerDied","Data":"f247c8e38bbb05f605b94a9227f4af9c64bdf63bed25f5037a63d33c0e0bad77"} Mar 20 08:57:08 crc kubenswrapper[5136]: I0320 08:57:08.156537 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dn6b" event={"ID":"4f642bb1-6f66-4460-922a-a497e5b3dc6a","Type":"ContainerStarted","Data":"34fda41cca48459bc52e23aa7dfd2ccdf3b10e1014efbeb2b077c36782fe119e"} Mar 20 08:57:10 crc kubenswrapper[5136]: I0320 08:57:10.181358 5136 generic.go:334] "Generic (PLEG): container finished" podID="4f642bb1-6f66-4460-922a-a497e5b3dc6a" containerID="ef629ca2fe399a5a378746cb8ae6dd313b7c0d330eb41780a8bc10fb61e11472" exitCode=0 Mar 20 08:57:10 crc kubenswrapper[5136]: I0320 08:57:10.181575 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dn6b" event={"ID":"4f642bb1-6f66-4460-922a-a497e5b3dc6a","Type":"ContainerDied","Data":"ef629ca2fe399a5a378746cb8ae6dd313b7c0d330eb41780a8bc10fb61e11472"} Mar 20 08:57:11 crc kubenswrapper[5136]: I0320 08:57:11.055606 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-grfwk"] Mar 20 08:57:11 crc kubenswrapper[5136]: I0320 08:57:11.068163 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/neutron-db-sync-grfwk"] Mar 20 08:57:11 crc kubenswrapper[5136]: I0320 08:57:11.195251 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dn6b" event={"ID":"4f642bb1-6f66-4460-922a-a497e5b3dc6a","Type":"ContainerStarted","Data":"47f6c61e8d33c1912ea625612edd728bb902b66e835b0bc397fb7c04af91226c"} Mar 20 08:57:11 crc kubenswrapper[5136]: I0320 08:57:11.221161 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4dn6b" podStartSLOduration=2.778902193 podStartE2EDuration="5.221121345s" podCreationTimestamp="2026-03-20 08:57:06 +0000 UTC" firstStartedPulling="2026-03-20 08:57:08.158743073 +0000 UTC m=+7660.418054224" lastFinishedPulling="2026-03-20 08:57:10.600962195 +0000 UTC m=+7662.860273376" observedRunningTime="2026-03-20 08:57:11.209530456 +0000 UTC m=+7663.468841627" watchObservedRunningTime="2026-03-20 08:57:11.221121345 +0000 UTC m=+7663.480432496" Mar 20 08:57:11 crc kubenswrapper[5136]: I0320 08:57:11.416096 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-96f64bfb8-g7cfv" podUID="07e0c938-d0f6-43dc-8864-68149aedc96c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.152:8443: connect: connection refused" Mar 20 08:57:11 crc kubenswrapper[5136]: I0320 08:57:11.416205 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.060963 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.178478 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj6q7\" (UniqueName: \"kubernetes.io/projected/07e0c938-d0f6-43dc-8864-68149aedc96c-kube-api-access-pj6q7\") pod \"07e0c938-d0f6-43dc-8864-68149aedc96c\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.178569 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07e0c938-d0f6-43dc-8864-68149aedc96c-logs\") pod \"07e0c938-d0f6-43dc-8864-68149aedc96c\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.178597 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-combined-ca-bundle\") pod \"07e0c938-d0f6-43dc-8864-68149aedc96c\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.178626 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07e0c938-d0f6-43dc-8864-68149aedc96c-config-data\") pod \"07e0c938-d0f6-43dc-8864-68149aedc96c\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.178874 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-horizon-tls-certs\") pod \"07e0c938-d0f6-43dc-8864-68149aedc96c\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.178913 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/07e0c938-d0f6-43dc-8864-68149aedc96c-scripts\") pod \"07e0c938-d0f6-43dc-8864-68149aedc96c\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.178937 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-horizon-secret-key\") pod \"07e0c938-d0f6-43dc-8864-68149aedc96c\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.178980 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07e0c938-d0f6-43dc-8864-68149aedc96c-logs" (OuterVolumeSpecName: "logs") pod "07e0c938-d0f6-43dc-8864-68149aedc96c" (UID: "07e0c938-d0f6-43dc-8864-68149aedc96c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.179367 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07e0c938-d0f6-43dc-8864-68149aedc96c-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.185867 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "07e0c938-d0f6-43dc-8864-68149aedc96c" (UID: "07e0c938-d0f6-43dc-8864-68149aedc96c"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.185943 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e0c938-d0f6-43dc-8864-68149aedc96c-kube-api-access-pj6q7" (OuterVolumeSpecName: "kube-api-access-pj6q7") pod "07e0c938-d0f6-43dc-8864-68149aedc96c" (UID: "07e0c938-d0f6-43dc-8864-68149aedc96c"). InnerVolumeSpecName "kube-api-access-pj6q7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.226632 5136 generic.go:334] "Generic (PLEG): container finished" podID="07e0c938-d0f6-43dc-8864-68149aedc96c" containerID="2ea3b6de68a43a824c78c0e4f561d092ba769891c4d2b0ed7160300307750733" exitCode=137 Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.227744 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-96f64bfb8-g7cfv" event={"ID":"07e0c938-d0f6-43dc-8864-68149aedc96c","Type":"ContainerDied","Data":"2ea3b6de68a43a824c78c0e4f561d092ba769891c4d2b0ed7160300307750733"} Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.227767 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.227791 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-96f64bfb8-g7cfv" event={"ID":"07e0c938-d0f6-43dc-8864-68149aedc96c","Type":"ContainerDied","Data":"b1ee61ebca0b68cd44117cc6cff786c05a69987a3f3427a20fa4db8597ed371f"} Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.227826 5136 scope.go:117] "RemoveContainer" containerID="60f749df76d18a10141d2062a40ea962edc0b52ceb6a36857635111c1d44eaf0" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.237405 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07e0c938-d0f6-43dc-8864-68149aedc96c-scripts" (OuterVolumeSpecName: "scripts") pod "07e0c938-d0f6-43dc-8864-68149aedc96c" (UID: "07e0c938-d0f6-43dc-8864-68149aedc96c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.245910 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07e0c938-d0f6-43dc-8864-68149aedc96c" (UID: "07e0c938-d0f6-43dc-8864-68149aedc96c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.256362 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07e0c938-d0f6-43dc-8864-68149aedc96c-config-data" (OuterVolumeSpecName: "config-data") pod "07e0c938-d0f6-43dc-8864-68149aedc96c" (UID: "07e0c938-d0f6-43dc-8864-68149aedc96c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.269524 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "07e0c938-d0f6-43dc-8864-68149aedc96c" (UID: "07e0c938-d0f6-43dc-8864-68149aedc96c"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.280912 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07e0c938-d0f6-43dc-8864-68149aedc96c-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.280943 5136 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.280956 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj6q7\" (UniqueName: \"kubernetes.io/projected/07e0c938-d0f6-43dc-8864-68149aedc96c-kube-api-access-pj6q7\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.280966 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.280974 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07e0c938-d0f6-43dc-8864-68149aedc96c-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.280985 5136 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.408221 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c6db9e6-4059-4911-b008-680848fffdbe" path="/var/lib/kubelet/pods/4c6db9e6-4059-4911-b008-680848fffdbe/volumes" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.526401 5136 scope.go:117] "RemoveContainer" containerID="2ea3b6de68a43a824c78c0e4f561d092ba769891c4d2b0ed7160300307750733" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.556528 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-96f64bfb8-g7cfv"] Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.566239 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-96f64bfb8-g7cfv"] Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.597803 5136 scope.go:117] "RemoveContainer" containerID="60f749df76d18a10141d2062a40ea962edc0b52ceb6a36857635111c1d44eaf0" Mar 20 08:57:12 crc kubenswrapper[5136]: E0320 08:57:12.598237 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60f749df76d18a10141d2062a40ea962edc0b52ceb6a36857635111c1d44eaf0\": container with ID starting with 60f749df76d18a10141d2062a40ea962edc0b52ceb6a36857635111c1d44eaf0 not found: ID does not exist" containerID="60f749df76d18a10141d2062a40ea962edc0b52ceb6a36857635111c1d44eaf0" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.598297 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60f749df76d18a10141d2062a40ea962edc0b52ceb6a36857635111c1d44eaf0"} err="failed to get container status \"60f749df76d18a10141d2062a40ea962edc0b52ceb6a36857635111c1d44eaf0\": rpc error: code = NotFound desc = could not find container \"60f749df76d18a10141d2062a40ea962edc0b52ceb6a36857635111c1d44eaf0\": container with 
ID starting with 60f749df76d18a10141d2062a40ea962edc0b52ceb6a36857635111c1d44eaf0 not found: ID does not exist" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.598343 5136 scope.go:117] "RemoveContainer" containerID="2ea3b6de68a43a824c78c0e4f561d092ba769891c4d2b0ed7160300307750733" Mar 20 08:57:12 crc kubenswrapper[5136]: E0320 08:57:12.598764 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ea3b6de68a43a824c78c0e4f561d092ba769891c4d2b0ed7160300307750733\": container with ID starting with 2ea3b6de68a43a824c78c0e4f561d092ba769891c4d2b0ed7160300307750733 not found: ID does not exist" containerID="2ea3b6de68a43a824c78c0e4f561d092ba769891c4d2b0ed7160300307750733" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.598926 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ea3b6de68a43a824c78c0e4f561d092ba769891c4d2b0ed7160300307750733"} err="failed to get container status \"2ea3b6de68a43a824c78c0e4f561d092ba769891c4d2b0ed7160300307750733\": rpc error: code = NotFound desc = could not find container \"2ea3b6de68a43a824c78c0e4f561d092ba769891c4d2b0ed7160300307750733\": container with ID starting with 2ea3b6de68a43a824c78c0e4f561d092ba769891c4d2b0ed7160300307750733 not found: ID does not exist" Mar 20 08:57:14 crc kubenswrapper[5136]: I0320 08:57:14.413049 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07e0c938-d0f6-43dc-8864-68149aedc96c" path="/var/lib/kubelet/pods/07e0c938-d0f6-43dc-8864-68149aedc96c/volumes" Mar 20 08:57:15 crc kubenswrapper[5136]: I0320 08:57:15.397009 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:57:15 crc kubenswrapper[5136]: E0320 08:57:15.397671 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:57:17 crc kubenswrapper[5136]: I0320 08:57:17.381905 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:17 crc kubenswrapper[5136]: I0320 08:57:17.382141 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:17 crc kubenswrapper[5136]: I0320 08:57:17.424216 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:18 crc kubenswrapper[5136]: I0320 08:57:18.335838 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.025685 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn"] Mar 20 08:57:19 crc kubenswrapper[5136]: E0320 08:57:19.026139 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e0c938-d0f6-43dc-8864-68149aedc96c" containerName="horizon-log" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.026154 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e0c938-d0f6-43dc-8864-68149aedc96c" containerName="horizon-log" Mar 20 08:57:19 crc kubenswrapper[5136]: E0320 08:57:19.026178 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e0c938-d0f6-43dc-8864-68149aedc96c" containerName="horizon" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.026184 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e0c938-d0f6-43dc-8864-68149aedc96c" 
containerName="horizon" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.026370 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e0c938-d0f6-43dc-8864-68149aedc96c" containerName="horizon-log" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.026383 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e0c938-d0f6-43dc-8864-68149aedc96c" containerName="horizon" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.027668 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.030393 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.035180 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn"] Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.116625 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/380bd027-6e4d-49b8-af6b-db5cd8b06635-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn\" (UID: \"380bd027-6e4d-49b8-af6b-db5cd8b06635\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.116694 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/380bd027-6e4d-49b8-af6b-db5cd8b06635-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn\" (UID: \"380bd027-6e4d-49b8-af6b-db5cd8b06635\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" Mar 20 08:57:19 
crc kubenswrapper[5136]: I0320 08:57:19.116733 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lm62\" (UniqueName: \"kubernetes.io/projected/380bd027-6e4d-49b8-af6b-db5cd8b06635-kube-api-access-2lm62\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn\" (UID: \"380bd027-6e4d-49b8-af6b-db5cd8b06635\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.219889 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/380bd027-6e4d-49b8-af6b-db5cd8b06635-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn\" (UID: \"380bd027-6e4d-49b8-af6b-db5cd8b06635\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.220112 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/380bd027-6e4d-49b8-af6b-db5cd8b06635-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn\" (UID: \"380bd027-6e4d-49b8-af6b-db5cd8b06635\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.220252 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lm62\" (UniqueName: \"kubernetes.io/projected/380bd027-6e4d-49b8-af6b-db5cd8b06635-kube-api-access-2lm62\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn\" (UID: \"380bd027-6e4d-49b8-af6b-db5cd8b06635\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.220632 5136 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/380bd027-6e4d-49b8-af6b-db5cd8b06635-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn\" (UID: \"380bd027-6e4d-49b8-af6b-db5cd8b06635\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.220648 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/380bd027-6e4d-49b8-af6b-db5cd8b06635-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn\" (UID: \"380bd027-6e4d-49b8-af6b-db5cd8b06635\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.244624 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lm62\" (UniqueName: \"kubernetes.io/projected/380bd027-6e4d-49b8-af6b-db5cd8b06635-kube-api-access-2lm62\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn\" (UID: \"380bd027-6e4d-49b8-af6b-db5cd8b06635\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.406936 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.800374 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn"] Mar 20 08:57:20 crc kubenswrapper[5136]: I0320 08:57:20.306585 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" event={"ID":"380bd027-6e4d-49b8-af6b-db5cd8b06635","Type":"ContainerStarted","Data":"5109e573e6cc1cd17b6b8342da492547d9d0b3abd4199b503cf68891b2593fb2"} Mar 20 08:57:20 crc kubenswrapper[5136]: I0320 08:57:20.306624 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" event={"ID":"380bd027-6e4d-49b8-af6b-db5cd8b06635","Type":"ContainerStarted","Data":"fce0d666073b21cca4cea16cc70ca7c1868877700eeea616fb38a6a013d5e1d8"} Mar 20 08:57:21 crc kubenswrapper[5136]: I0320 08:57:21.317053 5136 generic.go:334] "Generic (PLEG): container finished" podID="380bd027-6e4d-49b8-af6b-db5cd8b06635" containerID="5109e573e6cc1cd17b6b8342da492547d9d0b3abd4199b503cf68891b2593fb2" exitCode=0 Mar 20 08:57:21 crc kubenswrapper[5136]: I0320 08:57:21.317164 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" event={"ID":"380bd027-6e4d-49b8-af6b-db5cd8b06635","Type":"ContainerDied","Data":"5109e573e6cc1cd17b6b8342da492547d9d0b3abd4199b503cf68891b2593fb2"} Mar 20 08:57:21 crc kubenswrapper[5136]: I0320 08:57:21.588388 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4dn6b"] Mar 20 08:57:21 crc kubenswrapper[5136]: I0320 08:57:21.588726 5136 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-4dn6b" podUID="4f642bb1-6f66-4460-922a-a497e5b3dc6a" containerName="registry-server" containerID="cri-o://47f6c61e8d33c1912ea625612edd728bb902b66e835b0bc397fb7c04af91226c" gracePeriod=2 Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.078644 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.200226 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f642bb1-6f66-4460-922a-a497e5b3dc6a-utilities\") pod \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\" (UID: \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\") " Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.200334 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f642bb1-6f66-4460-922a-a497e5b3dc6a-catalog-content\") pod \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\" (UID: \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\") " Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.200381 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7wsk\" (UniqueName: \"kubernetes.io/projected/4f642bb1-6f66-4460-922a-a497e5b3dc6a-kube-api-access-c7wsk\") pod \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\" (UID: \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\") " Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.202663 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f642bb1-6f66-4460-922a-a497e5b3dc6a-utilities" (OuterVolumeSpecName: "utilities") pod "4f642bb1-6f66-4460-922a-a497e5b3dc6a" (UID: "4f642bb1-6f66-4460-922a-a497e5b3dc6a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.209095 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f642bb1-6f66-4460-922a-a497e5b3dc6a-kube-api-access-c7wsk" (OuterVolumeSpecName: "kube-api-access-c7wsk") pod "4f642bb1-6f66-4460-922a-a497e5b3dc6a" (UID: "4f642bb1-6f66-4460-922a-a497e5b3dc6a"). InnerVolumeSpecName "kube-api-access-c7wsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.237981 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f642bb1-6f66-4460-922a-a497e5b3dc6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f642bb1-6f66-4460-922a-a497e5b3dc6a" (UID: "4f642bb1-6f66-4460-922a-a497e5b3dc6a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.302836 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f642bb1-6f66-4460-922a-a497e5b3dc6a-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.302878 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f642bb1-6f66-4460-922a-a497e5b3dc6a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.302894 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7wsk\" (UniqueName: \"kubernetes.io/projected/4f642bb1-6f66-4460-922a-a497e5b3dc6a-kube-api-access-c7wsk\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.328386 5136 generic.go:334] "Generic (PLEG): container finished" podID="4f642bb1-6f66-4460-922a-a497e5b3dc6a" 
containerID="47f6c61e8d33c1912ea625612edd728bb902b66e835b0bc397fb7c04af91226c" exitCode=0 Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.328435 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dn6b" event={"ID":"4f642bb1-6f66-4460-922a-a497e5b3dc6a","Type":"ContainerDied","Data":"47f6c61e8d33c1912ea625612edd728bb902b66e835b0bc397fb7c04af91226c"} Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.328455 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.328473 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dn6b" event={"ID":"4f642bb1-6f66-4460-922a-a497e5b3dc6a","Type":"ContainerDied","Data":"34fda41cca48459bc52e23aa7dfd2ccdf3b10e1014efbeb2b077c36782fe119e"} Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.328496 5136 scope.go:117] "RemoveContainer" containerID="47f6c61e8d33c1912ea625612edd728bb902b66e835b0bc397fb7c04af91226c" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.368486 5136 scope.go:117] "RemoveContainer" containerID="ef629ca2fe399a5a378746cb8ae6dd313b7c0d330eb41780a8bc10fb61e11472" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.370368 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4dn6b"] Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.385219 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4dn6b"] Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.390272 5136 scope.go:117] "RemoveContainer" containerID="f247c8e38bbb05f605b94a9227f4af9c64bdf63bed25f5037a63d33c0e0bad77" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.408956 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f642bb1-6f66-4460-922a-a497e5b3dc6a" 
path="/var/lib/kubelet/pods/4f642bb1-6f66-4460-922a-a497e5b3dc6a/volumes" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.478952 5136 scope.go:117] "RemoveContainer" containerID="47f6c61e8d33c1912ea625612edd728bb902b66e835b0bc397fb7c04af91226c" Mar 20 08:57:22 crc kubenswrapper[5136]: E0320 08:57:22.479454 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47f6c61e8d33c1912ea625612edd728bb902b66e835b0bc397fb7c04af91226c\": container with ID starting with 47f6c61e8d33c1912ea625612edd728bb902b66e835b0bc397fb7c04af91226c not found: ID does not exist" containerID="47f6c61e8d33c1912ea625612edd728bb902b66e835b0bc397fb7c04af91226c" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.479486 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47f6c61e8d33c1912ea625612edd728bb902b66e835b0bc397fb7c04af91226c"} err="failed to get container status \"47f6c61e8d33c1912ea625612edd728bb902b66e835b0bc397fb7c04af91226c\": rpc error: code = NotFound desc = could not find container \"47f6c61e8d33c1912ea625612edd728bb902b66e835b0bc397fb7c04af91226c\": container with ID starting with 47f6c61e8d33c1912ea625612edd728bb902b66e835b0bc397fb7c04af91226c not found: ID does not exist" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.479511 5136 scope.go:117] "RemoveContainer" containerID="ef629ca2fe399a5a378746cb8ae6dd313b7c0d330eb41780a8bc10fb61e11472" Mar 20 08:57:22 crc kubenswrapper[5136]: E0320 08:57:22.480077 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef629ca2fe399a5a378746cb8ae6dd313b7c0d330eb41780a8bc10fb61e11472\": container with ID starting with ef629ca2fe399a5a378746cb8ae6dd313b7c0d330eb41780a8bc10fb61e11472 not found: ID does not exist" containerID="ef629ca2fe399a5a378746cb8ae6dd313b7c0d330eb41780a8bc10fb61e11472" Mar 20 08:57:22 crc kubenswrapper[5136]: 
I0320 08:57:22.480222 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef629ca2fe399a5a378746cb8ae6dd313b7c0d330eb41780a8bc10fb61e11472"} err="failed to get container status \"ef629ca2fe399a5a378746cb8ae6dd313b7c0d330eb41780a8bc10fb61e11472\": rpc error: code = NotFound desc = could not find container \"ef629ca2fe399a5a378746cb8ae6dd313b7c0d330eb41780a8bc10fb61e11472\": container with ID starting with ef629ca2fe399a5a378746cb8ae6dd313b7c0d330eb41780a8bc10fb61e11472 not found: ID does not exist" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.480248 5136 scope.go:117] "RemoveContainer" containerID="f247c8e38bbb05f605b94a9227f4af9c64bdf63bed25f5037a63d33c0e0bad77" Mar 20 08:57:22 crc kubenswrapper[5136]: E0320 08:57:22.481429 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f247c8e38bbb05f605b94a9227f4af9c64bdf63bed25f5037a63d33c0e0bad77\": container with ID starting with f247c8e38bbb05f605b94a9227f4af9c64bdf63bed25f5037a63d33c0e0bad77 not found: ID does not exist" containerID="f247c8e38bbb05f605b94a9227f4af9c64bdf63bed25f5037a63d33c0e0bad77" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.481458 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f247c8e38bbb05f605b94a9227f4af9c64bdf63bed25f5037a63d33c0e0bad77"} err="failed to get container status \"f247c8e38bbb05f605b94a9227f4af9c64bdf63bed25f5037a63d33c0e0bad77\": rpc error: code = NotFound desc = could not find container \"f247c8e38bbb05f605b94a9227f4af9c64bdf63bed25f5037a63d33c0e0bad77\": container with ID starting with f247c8e38bbb05f605b94a9227f4af9c64bdf63bed25f5037a63d33c0e0bad77 not found: ID does not exist" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.599416 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w4zdj"] Mar 20 08:57:22 crc kubenswrapper[5136]: 
E0320 08:57:22.600211 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f642bb1-6f66-4460-922a-a497e5b3dc6a" containerName="extract-content" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.600228 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f642bb1-6f66-4460-922a-a497e5b3dc6a" containerName="extract-content" Mar 20 08:57:22 crc kubenswrapper[5136]: E0320 08:57:22.600241 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f642bb1-6f66-4460-922a-a497e5b3dc6a" containerName="extract-utilities" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.600249 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f642bb1-6f66-4460-922a-a497e5b3dc6a" containerName="extract-utilities" Mar 20 08:57:22 crc kubenswrapper[5136]: E0320 08:57:22.600272 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f642bb1-6f66-4460-922a-a497e5b3dc6a" containerName="registry-server" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.600279 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f642bb1-6f66-4460-922a-a497e5b3dc6a" containerName="registry-server" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.600493 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f642bb1-6f66-4460-922a-a497e5b3dc6a" containerName="registry-server" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.602123 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w4zdj" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.610134 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w4zdj"] Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.709882 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbbd891d-c8eb-404c-8255-2a3bba4035ee-utilities\") pod \"redhat-operators-w4zdj\" (UID: \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\") " pod="openshift-marketplace/redhat-operators-w4zdj" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.709972 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbbd891d-c8eb-404c-8255-2a3bba4035ee-catalog-content\") pod \"redhat-operators-w4zdj\" (UID: \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\") " pod="openshift-marketplace/redhat-operators-w4zdj" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.710043 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncj89\" (UniqueName: \"kubernetes.io/projected/fbbd891d-c8eb-404c-8255-2a3bba4035ee-kube-api-access-ncj89\") pod \"redhat-operators-w4zdj\" (UID: \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\") " pod="openshift-marketplace/redhat-operators-w4zdj" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.811094 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbbd891d-c8eb-404c-8255-2a3bba4035ee-utilities\") pod \"redhat-operators-w4zdj\" (UID: \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\") " pod="openshift-marketplace/redhat-operators-w4zdj" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.811203 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbbd891d-c8eb-404c-8255-2a3bba4035ee-catalog-content\") pod \"redhat-operators-w4zdj\" (UID: \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\") " pod="openshift-marketplace/redhat-operators-w4zdj" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.811284 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncj89\" (UniqueName: \"kubernetes.io/projected/fbbd891d-c8eb-404c-8255-2a3bba4035ee-kube-api-access-ncj89\") pod \"redhat-operators-w4zdj\" (UID: \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\") " pod="openshift-marketplace/redhat-operators-w4zdj" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.813057 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbbd891d-c8eb-404c-8255-2a3bba4035ee-utilities\") pod \"redhat-operators-w4zdj\" (UID: \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\") " pod="openshift-marketplace/redhat-operators-w4zdj" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.813365 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbbd891d-c8eb-404c-8255-2a3bba4035ee-catalog-content\") pod \"redhat-operators-w4zdj\" (UID: \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\") " pod="openshift-marketplace/redhat-operators-w4zdj" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.829983 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncj89\" (UniqueName: \"kubernetes.io/projected/fbbd891d-c8eb-404c-8255-2a3bba4035ee-kube-api-access-ncj89\") pod \"redhat-operators-w4zdj\" (UID: \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\") " pod="openshift-marketplace/redhat-operators-w4zdj" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.930480 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w4zdj" Mar 20 08:57:23 crc kubenswrapper[5136]: I0320 08:57:23.340382 5136 generic.go:334] "Generic (PLEG): container finished" podID="380bd027-6e4d-49b8-af6b-db5cd8b06635" containerID="d88e92b7a96a38a0d91605318624250fcbed4ac0cc4ccaad1aaad0ef4497cab2" exitCode=0 Mar 20 08:57:23 crc kubenswrapper[5136]: I0320 08:57:23.340515 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" event={"ID":"380bd027-6e4d-49b8-af6b-db5cd8b06635","Type":"ContainerDied","Data":"d88e92b7a96a38a0d91605318624250fcbed4ac0cc4ccaad1aaad0ef4497cab2"} Mar 20 08:57:23 crc kubenswrapper[5136]: I0320 08:57:23.911602 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w4zdj"] Mar 20 08:57:23 crc kubenswrapper[5136]: W0320 08:57:23.917339 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbbd891d_c8eb_404c_8255_2a3bba4035ee.slice/crio-b8ea34d57154026d7acfa78afb0581c846280208e141611a13e5cd82670eae89 WatchSource:0}: Error finding container b8ea34d57154026d7acfa78afb0581c846280208e141611a13e5cd82670eae89: Status 404 returned error can't find the container with id b8ea34d57154026d7acfa78afb0581c846280208e141611a13e5cd82670eae89 Mar 20 08:57:24 crc kubenswrapper[5136]: I0320 08:57:24.350544 5136 generic.go:334] "Generic (PLEG): container finished" podID="380bd027-6e4d-49b8-af6b-db5cd8b06635" containerID="044b3424c8e96991d8d6c7d14a0f5e375547bb5b4b46583de32c8b8c28a5ea26" exitCode=0 Mar 20 08:57:24 crc kubenswrapper[5136]: I0320 08:57:24.350712 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" 
event={"ID":"380bd027-6e4d-49b8-af6b-db5cd8b06635","Type":"ContainerDied","Data":"044b3424c8e96991d8d6c7d14a0f5e375547bb5b4b46583de32c8b8c28a5ea26"} Mar 20 08:57:24 crc kubenswrapper[5136]: I0320 08:57:24.352720 5136 generic.go:334] "Generic (PLEG): container finished" podID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerID="3f9934810b0bc967208152c81856a6359b11543f4d527a5e931b126e5e0d7efd" exitCode=0 Mar 20 08:57:24 crc kubenswrapper[5136]: I0320 08:57:24.352776 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w4zdj" event={"ID":"fbbd891d-c8eb-404c-8255-2a3bba4035ee","Type":"ContainerDied","Data":"3f9934810b0bc967208152c81856a6359b11543f4d527a5e931b126e5e0d7efd"} Mar 20 08:57:24 crc kubenswrapper[5136]: I0320 08:57:24.352800 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w4zdj" event={"ID":"fbbd891d-c8eb-404c-8255-2a3bba4035ee","Type":"ContainerStarted","Data":"b8ea34d57154026d7acfa78afb0581c846280208e141611a13e5cd82670eae89"} Mar 20 08:57:25 crc kubenswrapper[5136]: I0320 08:57:25.714988 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" Mar 20 08:57:25 crc kubenswrapper[5136]: I0320 08:57:25.775229 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lm62\" (UniqueName: \"kubernetes.io/projected/380bd027-6e4d-49b8-af6b-db5cd8b06635-kube-api-access-2lm62\") pod \"380bd027-6e4d-49b8-af6b-db5cd8b06635\" (UID: \"380bd027-6e4d-49b8-af6b-db5cd8b06635\") " Mar 20 08:57:25 crc kubenswrapper[5136]: I0320 08:57:25.775366 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/380bd027-6e4d-49b8-af6b-db5cd8b06635-bundle\") pod \"380bd027-6e4d-49b8-af6b-db5cd8b06635\" (UID: \"380bd027-6e4d-49b8-af6b-db5cd8b06635\") " Mar 20 08:57:25 crc kubenswrapper[5136]: I0320 08:57:25.775441 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/380bd027-6e4d-49b8-af6b-db5cd8b06635-util\") pod \"380bd027-6e4d-49b8-af6b-db5cd8b06635\" (UID: \"380bd027-6e4d-49b8-af6b-db5cd8b06635\") " Mar 20 08:57:25 crc kubenswrapper[5136]: I0320 08:57:25.778294 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/380bd027-6e4d-49b8-af6b-db5cd8b06635-bundle" (OuterVolumeSpecName: "bundle") pod "380bd027-6e4d-49b8-af6b-db5cd8b06635" (UID: "380bd027-6e4d-49b8-af6b-db5cd8b06635"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:57:25 crc kubenswrapper[5136]: I0320 08:57:25.782651 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/380bd027-6e4d-49b8-af6b-db5cd8b06635-kube-api-access-2lm62" (OuterVolumeSpecName: "kube-api-access-2lm62") pod "380bd027-6e4d-49b8-af6b-db5cd8b06635" (UID: "380bd027-6e4d-49b8-af6b-db5cd8b06635"). InnerVolumeSpecName "kube-api-access-2lm62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:57:25 crc kubenswrapper[5136]: I0320 08:57:25.786608 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/380bd027-6e4d-49b8-af6b-db5cd8b06635-util" (OuterVolumeSpecName: "util") pod "380bd027-6e4d-49b8-af6b-db5cd8b06635" (UID: "380bd027-6e4d-49b8-af6b-db5cd8b06635"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:57:25 crc kubenswrapper[5136]: I0320 08:57:25.878427 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lm62\" (UniqueName: \"kubernetes.io/projected/380bd027-6e4d-49b8-af6b-db5cd8b06635-kube-api-access-2lm62\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:25 crc kubenswrapper[5136]: I0320 08:57:25.878481 5136 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/380bd027-6e4d-49b8-af6b-db5cd8b06635-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:25 crc kubenswrapper[5136]: I0320 08:57:25.878503 5136 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/380bd027-6e4d-49b8-af6b-db5cd8b06635-util\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:26 crc kubenswrapper[5136]: I0320 08:57:26.375109 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" event={"ID":"380bd027-6e4d-49b8-af6b-db5cd8b06635","Type":"ContainerDied","Data":"fce0d666073b21cca4cea16cc70ca7c1868877700eeea616fb38a6a013d5e1d8"} Mar 20 08:57:26 crc kubenswrapper[5136]: I0320 08:57:26.375152 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fce0d666073b21cca4cea16cc70ca7c1868877700eeea616fb38a6a013d5e1d8" Mar 20 08:57:26 crc kubenswrapper[5136]: I0320 08:57:26.375215 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" Mar 20 08:57:26 crc kubenswrapper[5136]: I0320 08:57:26.377043 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w4zdj" event={"ID":"fbbd891d-c8eb-404c-8255-2a3bba4035ee","Type":"ContainerStarted","Data":"9094788af2a605670849104ee4d6adee74372e1ebb33687ad51ef917b6de5515"} Mar 20 08:57:26 crc kubenswrapper[5136]: I0320 08:57:26.396992 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:57:26 crc kubenswrapper[5136]: E0320 08:57:26.397363 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:57:28 crc kubenswrapper[5136]: I0320 08:57:28.403242 5136 generic.go:334] "Generic (PLEG): container finished" podID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerID="9094788af2a605670849104ee4d6adee74372e1ebb33687ad51ef917b6de5515" exitCode=0 Mar 20 08:57:28 crc kubenswrapper[5136]: I0320 08:57:28.417636 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w4zdj" event={"ID":"fbbd891d-c8eb-404c-8255-2a3bba4035ee","Type":"ContainerDied","Data":"9094788af2a605670849104ee4d6adee74372e1ebb33687ad51ef917b6de5515"} Mar 20 08:57:29 crc kubenswrapper[5136]: I0320 08:57:29.415868 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w4zdj" 
event={"ID":"fbbd891d-c8eb-404c-8255-2a3bba4035ee","Type":"ContainerStarted","Data":"a3549e454b3af8bca8a21ce48c0f40eeba05a452e41f41e4fd9ab1e221d1c0b1"} Mar 20 08:57:29 crc kubenswrapper[5136]: I0320 08:57:29.442562 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w4zdj" podStartSLOduration=2.872326812 podStartE2EDuration="7.442544897s" podCreationTimestamp="2026-03-20 08:57:22 +0000 UTC" firstStartedPulling="2026-03-20 08:57:24.354804069 +0000 UTC m=+7676.614115210" lastFinishedPulling="2026-03-20 08:57:28.925022144 +0000 UTC m=+7681.184333295" observedRunningTime="2026-03-20 08:57:29.434058405 +0000 UTC m=+7681.693369556" watchObservedRunningTime="2026-03-20 08:57:29.442544897 +0000 UTC m=+7681.701856048" Mar 20 08:57:32 crc kubenswrapper[5136]: I0320 08:57:32.931293 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w4zdj" Mar 20 08:57:32 crc kubenswrapper[5136]: I0320 08:57:32.931805 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w4zdj" Mar 20 08:57:33 crc kubenswrapper[5136]: I0320 08:57:33.977291 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w4zdj" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerName="registry-server" probeResult="failure" output=< Mar 20 08:57:33 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 08:57:33 crc kubenswrapper[5136]: > Mar 20 08:57:37 crc kubenswrapper[5136]: I0320 08:57:37.950751 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-pw7kx"] Mar 20 08:57:37 crc kubenswrapper[5136]: E0320 08:57:37.951526 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="380bd027-6e4d-49b8-af6b-db5cd8b06635" containerName="util" Mar 20 08:57:37 crc kubenswrapper[5136]: I0320 
08:57:37.951538 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="380bd027-6e4d-49b8-af6b-db5cd8b06635" containerName="util" Mar 20 08:57:37 crc kubenswrapper[5136]: E0320 08:57:37.951574 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="380bd027-6e4d-49b8-af6b-db5cd8b06635" containerName="extract" Mar 20 08:57:37 crc kubenswrapper[5136]: I0320 08:57:37.951581 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="380bd027-6e4d-49b8-af6b-db5cd8b06635" containerName="extract" Mar 20 08:57:37 crc kubenswrapper[5136]: E0320 08:57:37.951589 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="380bd027-6e4d-49b8-af6b-db5cd8b06635" containerName="pull" Mar 20 08:57:37 crc kubenswrapper[5136]: I0320 08:57:37.951595 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="380bd027-6e4d-49b8-af6b-db5cd8b06635" containerName="pull" Mar 20 08:57:37 crc kubenswrapper[5136]: I0320 08:57:37.951757 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="380bd027-6e4d-49b8-af6b-db5cd8b06635" containerName="extract" Mar 20 08:57:37 crc kubenswrapper[5136]: I0320 08:57:37.952365 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-pw7kx" Mar 20 08:57:37 crc kubenswrapper[5136]: I0320 08:57:37.962735 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Mar 20 08:57:37 crc kubenswrapper[5136]: I0320 08:57:37.963257 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-m76t5" Mar 20 08:57:37 crc kubenswrapper[5136]: I0320 08:57:37.963390 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Mar 20 08:57:37 crc kubenswrapper[5136]: I0320 08:57:37.975129 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-pw7kx"] Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.147498 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzj2k\" (UniqueName: \"kubernetes.io/projected/b1998fd9-5100-4819-83d9-61c453df2121-kube-api-access-vzj2k\") pod \"obo-prometheus-operator-8ff7d675-pw7kx\" (UID: \"b1998fd9-5100-4819-83d9-61c453df2121\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-pw7kx" Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.249746 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzj2k\" (UniqueName: \"kubernetes.io/projected/b1998fd9-5100-4819-83d9-61c453df2121-kube-api-access-vzj2k\") pod \"obo-prometheus-operator-8ff7d675-pw7kx\" (UID: \"b1998fd9-5100-4819-83d9-61c453df2121\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-pw7kx" Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.280619 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzj2k\" (UniqueName: \"kubernetes.io/projected/b1998fd9-5100-4819-83d9-61c453df2121-kube-api-access-vzj2k\") pod 
\"obo-prometheus-operator-8ff7d675-pw7kx\" (UID: \"b1998fd9-5100-4819-83d9-61c453df2121\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-pw7kx" Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.346917 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k"] Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.349035 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k" Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.352707 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.353438 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-qcbcv" Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.373115 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg"] Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.387103 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg" Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.401872 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k"] Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.460632 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg"] Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.468208 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e3b7b66-720f-451e-b76c-d14672876450-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k\" (UID: \"6e3b7b66-720f-451e-b76c-d14672876450\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k" Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.468807 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6e3b7b66-720f-451e-b76c-d14672876450-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k\" (UID: \"6e3b7b66-720f-451e-b76c-d14672876450\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k" Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.571030 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e648d436-8985-4d18-83b2-8401e5e3b301-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg\" (UID: \"e648d436-8985-4d18-83b2-8401e5e3b301\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg" Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.571283 
5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6e3b7b66-720f-451e-b76c-d14672876450-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k\" (UID: \"6e3b7b66-720f-451e-b76c-d14672876450\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k" Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.571356 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e3b7b66-720f-451e-b76c-d14672876450-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k\" (UID: \"6e3b7b66-720f-451e-b76c-d14672876450\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k" Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.571561 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e648d436-8985-4d18-83b2-8401e5e3b301-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg\" (UID: \"e648d436-8985-4d18-83b2-8401e5e3b301\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg" Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.578275 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6e3b7b66-720f-451e-b76c-d14672876450-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k\" (UID: \"6e3b7b66-720f-451e-b76c-d14672876450\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k" Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.578352 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-pw7kx" Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.578398 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e3b7b66-720f-451e-b76c-d14672876450-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k\" (UID: \"6e3b7b66-720f-451e-b76c-d14672876450\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k" Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.677870 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e648d436-8985-4d18-83b2-8401e5e3b301-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg\" (UID: \"e648d436-8985-4d18-83b2-8401e5e3b301\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg" Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.678215 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e648d436-8985-4d18-83b2-8401e5e3b301-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg\" (UID: \"e648d436-8985-4d18-83b2-8401e5e3b301\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg" Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.689711 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e648d436-8985-4d18-83b2-8401e5e3b301-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg\" (UID: \"e648d436-8985-4d18-83b2-8401e5e3b301\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg" Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.710226 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k" Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.711474 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e648d436-8985-4d18-83b2-8401e5e3b301-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg\" (UID: \"e648d436-8985-4d18-83b2-8401e5e3b301\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg" Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.731457 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg" Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.005616 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-7pqgh"] Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.006946 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-7pqgh" Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.019241 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-nktsr" Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.019412 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.034671 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-7pqgh"] Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.088608 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt6gf\" (UniqueName: \"kubernetes.io/projected/cbf95789-daee-44bb-9d6a-a5b503c0b1e1-kube-api-access-dt6gf\") pod \"observability-operator-6dd7dd855f-7pqgh\" (UID: \"cbf95789-daee-44bb-9d6a-a5b503c0b1e1\") " pod="openshift-operators/observability-operator-6dd7dd855f-7pqgh" Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.088678 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/cbf95789-daee-44bb-9d6a-a5b503c0b1e1-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-7pqgh\" (UID: \"cbf95789-daee-44bb-9d6a-a5b503c0b1e1\") " pod="openshift-operators/observability-operator-6dd7dd855f-7pqgh" Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.190892 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt6gf\" (UniqueName: \"kubernetes.io/projected/cbf95789-daee-44bb-9d6a-a5b503c0b1e1-kube-api-access-dt6gf\") pod \"observability-operator-6dd7dd855f-7pqgh\" (UID: \"cbf95789-daee-44bb-9d6a-a5b503c0b1e1\") " 
pod="openshift-operators/observability-operator-6dd7dd855f-7pqgh" Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.190967 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/cbf95789-daee-44bb-9d6a-a5b503c0b1e1-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-7pqgh\" (UID: \"cbf95789-daee-44bb-9d6a-a5b503c0b1e1\") " pod="openshift-operators/observability-operator-6dd7dd855f-7pqgh" Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.199790 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/cbf95789-daee-44bb-9d6a-a5b503c0b1e1-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-7pqgh\" (UID: \"cbf95789-daee-44bb-9d6a-a5b503c0b1e1\") " pod="openshift-operators/observability-operator-6dd7dd855f-7pqgh" Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.208146 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt6gf\" (UniqueName: \"kubernetes.io/projected/cbf95789-daee-44bb-9d6a-a5b503c0b1e1-kube-api-access-dt6gf\") pod \"observability-operator-6dd7dd855f-7pqgh\" (UID: \"cbf95789-daee-44bb-9d6a-a5b503c0b1e1\") " pod="openshift-operators/observability-operator-6dd7dd855f-7pqgh" Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.338430 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-7pqgh" Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.366898 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k"] Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.369980 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg"] Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.397334 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:57:39 crc kubenswrapper[5136]: E0320 08:57:39.397594 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.541010 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k" event={"ID":"6e3b7b66-720f-451e-b76c-d14672876450","Type":"ContainerStarted","Data":"cea3ed687a99dec171352d1ab2b937889fe1bed3a13c96dde59c4ecdbf42ad9e"} Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.543675 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg" event={"ID":"e648d436-8985-4d18-83b2-8401e5e3b301","Type":"ContainerStarted","Data":"a33dccb87d830c68fee4d3ea84fa0760f6c1fc50057aa94fe495b825fcf8647b"} Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.621285 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operators/obo-prometheus-operator-8ff7d675-pw7kx"] Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.725327 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-7979496b84-bg2n6"] Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.726784 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-7979496b84-bg2n6" Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.734576 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-4jp76" Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.734778 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-service-cert" Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.750868 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-7979496b84-bg2n6"] Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.809040 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e3c2d08-6905-419d-a0d6-f4935119b632-openshift-service-ca\") pod \"perses-operator-7979496b84-bg2n6\" (UID: \"0e3c2d08-6905-419d-a0d6-f4935119b632\") " pod="openshift-operators/perses-operator-7979496b84-bg2n6" Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.809189 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e3c2d08-6905-419d-a0d6-f4935119b632-webhook-cert\") pod \"perses-operator-7979496b84-bg2n6\" (UID: \"0e3c2d08-6905-419d-a0d6-f4935119b632\") " pod="openshift-operators/perses-operator-7979496b84-bg2n6" Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.809299 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e3c2d08-6905-419d-a0d6-f4935119b632-apiservice-cert\") pod \"perses-operator-7979496b84-bg2n6\" (UID: \"0e3c2d08-6905-419d-a0d6-f4935119b632\") " pod="openshift-operators/perses-operator-7979496b84-bg2n6" Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.809371 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crm5b\" (UniqueName: \"kubernetes.io/projected/0e3c2d08-6905-419d-a0d6-f4935119b632-kube-api-access-crm5b\") pod \"perses-operator-7979496b84-bg2n6\" (UID: \"0e3c2d08-6905-419d-a0d6-f4935119b632\") " pod="openshift-operators/perses-operator-7979496b84-bg2n6" Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.913272 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e3c2d08-6905-419d-a0d6-f4935119b632-webhook-cert\") pod \"perses-operator-7979496b84-bg2n6\" (UID: \"0e3c2d08-6905-419d-a0d6-f4935119b632\") " pod="openshift-operators/perses-operator-7979496b84-bg2n6" Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.913414 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e3c2d08-6905-419d-a0d6-f4935119b632-apiservice-cert\") pod \"perses-operator-7979496b84-bg2n6\" (UID: \"0e3c2d08-6905-419d-a0d6-f4935119b632\") " pod="openshift-operators/perses-operator-7979496b84-bg2n6" Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.913481 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crm5b\" (UniqueName: \"kubernetes.io/projected/0e3c2d08-6905-419d-a0d6-f4935119b632-kube-api-access-crm5b\") pod \"perses-operator-7979496b84-bg2n6\" (UID: \"0e3c2d08-6905-419d-a0d6-f4935119b632\") " pod="openshift-operators/perses-operator-7979496b84-bg2n6" Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 
08:57:39.913522 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e3c2d08-6905-419d-a0d6-f4935119b632-openshift-service-ca\") pod \"perses-operator-7979496b84-bg2n6\" (UID: \"0e3c2d08-6905-419d-a0d6-f4935119b632\") " pod="openshift-operators/perses-operator-7979496b84-bg2n6" Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.914617 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e3c2d08-6905-419d-a0d6-f4935119b632-openshift-service-ca\") pod \"perses-operator-7979496b84-bg2n6\" (UID: \"0e3c2d08-6905-419d-a0d6-f4935119b632\") " pod="openshift-operators/perses-operator-7979496b84-bg2n6" Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.919541 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e3c2d08-6905-419d-a0d6-f4935119b632-webhook-cert\") pod \"perses-operator-7979496b84-bg2n6\" (UID: \"0e3c2d08-6905-419d-a0d6-f4935119b632\") " pod="openshift-operators/perses-operator-7979496b84-bg2n6" Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.922451 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e3c2d08-6905-419d-a0d6-f4935119b632-apiservice-cert\") pod \"perses-operator-7979496b84-bg2n6\" (UID: \"0e3c2d08-6905-419d-a0d6-f4935119b632\") " pod="openshift-operators/perses-operator-7979496b84-bg2n6" Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.942494 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crm5b\" (UniqueName: \"kubernetes.io/projected/0e3c2d08-6905-419d-a0d6-f4935119b632-kube-api-access-crm5b\") pod \"perses-operator-7979496b84-bg2n6\" (UID: \"0e3c2d08-6905-419d-a0d6-f4935119b632\") " pod="openshift-operators/perses-operator-7979496b84-bg2n6" Mar 20 
08:57:40 crc kubenswrapper[5136]: I0320 08:57:40.007682 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-7pqgh"] Mar 20 08:57:40 crc kubenswrapper[5136]: I0320 08:57:40.049778 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-7979496b84-bg2n6" Mar 20 08:57:40 crc kubenswrapper[5136]: I0320 08:57:40.102085 5136 scope.go:117] "RemoveContainer" containerID="4b1f554c7f496a2460aeebf430477f38851975d2eafc2fb7735f082f6ef9d928" Mar 20 08:57:40 crc kubenswrapper[5136]: I0320 08:57:40.134597 5136 scope.go:117] "RemoveContainer" containerID="340051113d29f7efbc695a906b0061f19d9714ea49c2643f84b952194ea4de4c" Mar 20 08:57:40 crc kubenswrapper[5136]: I0320 08:57:40.178565 5136 scope.go:117] "RemoveContainer" containerID="87d4064c210f2c8ecf2546f67dd8fe9ef436d4f291209d0fa6a7f5ba97b6e5e4" Mar 20 08:57:40 crc kubenswrapper[5136]: I0320 08:57:40.566948 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-pw7kx" event={"ID":"b1998fd9-5100-4819-83d9-61c453df2121","Type":"ContainerStarted","Data":"eacfb59452e84ab3eb8207170515d7e79a24614d7f794cbdecbcda76f8c132bc"} Mar 20 08:57:40 crc kubenswrapper[5136]: I0320 08:57:40.569184 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-7pqgh" event={"ID":"cbf95789-daee-44bb-9d6a-a5b503c0b1e1","Type":"ContainerStarted","Data":"dc8dea89f94e20db496cc3471f721a6dd4cc4151eab5e1b584cdcc8555c0394d"} Mar 20 08:57:40 crc kubenswrapper[5136]: I0320 08:57:40.581165 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-7979496b84-bg2n6"] Mar 20 08:57:40 crc kubenswrapper[5136]: W0320 08:57:40.587038 5136 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e3c2d08_6905_419d_a0d6_f4935119b632.slice/crio-1e050de244828ed81a807be1e33f55a100599a996fc71d325d105a7b4b35d594 WatchSource:0}: Error finding container 1e050de244828ed81a807be1e33f55a100599a996fc71d325d105a7b4b35d594: Status 404 returned error can't find the container with id 1e050de244828ed81a807be1e33f55a100599a996fc71d325d105a7b4b35d594 Mar 20 08:57:41 crc kubenswrapper[5136]: I0320 08:57:41.589756 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-7979496b84-bg2n6" event={"ID":"0e3c2d08-6905-419d-a0d6-f4935119b632","Type":"ContainerStarted","Data":"1e050de244828ed81a807be1e33f55a100599a996fc71d325d105a7b4b35d594"} Mar 20 08:57:42 crc kubenswrapper[5136]: I0320 08:57:42.601863 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k" event={"ID":"6e3b7b66-720f-451e-b76c-d14672876450","Type":"ContainerStarted","Data":"e7855e40b73948ba4b2fb8bca623cf38e57459a85f6f13ef40af1dfa854e6ffb"} Mar 20 08:57:42 crc kubenswrapper[5136]: I0320 08:57:42.630678 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k" podStartSLOduration=1.873369108 podStartE2EDuration="4.630656355s" podCreationTimestamp="2026-03-20 08:57:38 +0000 UTC" firstStartedPulling="2026-03-20 08:57:39.383219373 +0000 UTC m=+7691.642530524" lastFinishedPulling="2026-03-20 08:57:42.14050661 +0000 UTC m=+7694.399817771" observedRunningTime="2026-03-20 08:57:42.621129279 +0000 UTC m=+7694.880440430" watchObservedRunningTime="2026-03-20 08:57:42.630656355 +0000 UTC m=+7694.889967506" Mar 20 08:57:43 crc kubenswrapper[5136]: I0320 08:57:43.615741 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg" 
event={"ID":"e648d436-8985-4d18-83b2-8401e5e3b301","Type":"ContainerStarted","Data":"e1460726678122174e54c7789772e2cae030b4d250d808aa887561a2f704d46e"} Mar 20 08:57:43 crc kubenswrapper[5136]: I0320 08:57:43.641252 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg" podStartSLOduration=2.880248452 podStartE2EDuration="5.641232523s" podCreationTimestamp="2026-03-20 08:57:38 +0000 UTC" firstStartedPulling="2026-03-20 08:57:39.38539235 +0000 UTC m=+7691.644703501" lastFinishedPulling="2026-03-20 08:57:42.146376411 +0000 UTC m=+7694.405687572" observedRunningTime="2026-03-20 08:57:43.633142442 +0000 UTC m=+7695.892453593" watchObservedRunningTime="2026-03-20 08:57:43.641232523 +0000 UTC m=+7695.900543674" Mar 20 08:57:43 crc kubenswrapper[5136]: I0320 08:57:43.989519 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w4zdj" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerName="registry-server" probeResult="failure" output=< Mar 20 08:57:43 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 08:57:43 crc kubenswrapper[5136]: > Mar 20 08:57:46 crc kubenswrapper[5136]: I0320 08:57:46.655502 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-pw7kx" event={"ID":"b1998fd9-5100-4819-83d9-61c453df2121","Type":"ContainerStarted","Data":"d13ac4f372725086a73d27ac54f5430b9e6d28970d2833f5045e2b3be8f55e00"} Mar 20 08:57:46 crc kubenswrapper[5136]: I0320 08:57:46.657384 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-7979496b84-bg2n6" event={"ID":"0e3c2d08-6905-419d-a0d6-f4935119b632","Type":"ContainerStarted","Data":"8e071dd5716df73fb0d84e99d16769e4f32df706d116acd68d97f41d0f25e202"} Mar 20 08:57:46 crc kubenswrapper[5136]: I0320 08:57:46.657516 5136 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-operators/perses-operator-7979496b84-bg2n6" Mar 20 08:57:46 crc kubenswrapper[5136]: I0320 08:57:46.678186 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-8ff7d675-pw7kx" podStartSLOduration=3.975694777 podStartE2EDuration="9.678167537s" podCreationTimestamp="2026-03-20 08:57:37 +0000 UTC" firstStartedPulling="2026-03-20 08:57:39.621612564 +0000 UTC m=+7691.880923715" lastFinishedPulling="2026-03-20 08:57:45.324085324 +0000 UTC m=+7697.583396475" observedRunningTime="2026-03-20 08:57:46.67730154 +0000 UTC m=+7698.936612691" watchObservedRunningTime="2026-03-20 08:57:46.678167537 +0000 UTC m=+7698.937478688" Mar 20 08:57:46 crc kubenswrapper[5136]: I0320 08:57:46.702679 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-7979496b84-bg2n6" podStartSLOduration=2.969804684 podStartE2EDuration="7.702656995s" podCreationTimestamp="2026-03-20 08:57:39 +0000 UTC" firstStartedPulling="2026-03-20 08:57:40.588702385 +0000 UTC m=+7692.848013536" lastFinishedPulling="2026-03-20 08:57:45.321554696 +0000 UTC m=+7697.580865847" observedRunningTime="2026-03-20 08:57:46.696173795 +0000 UTC m=+7698.955484936" watchObservedRunningTime="2026-03-20 08:57:46.702656995 +0000 UTC m=+7698.961968146" Mar 20 08:57:50 crc kubenswrapper[5136]: I0320 08:57:50.051954 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-7979496b84-bg2n6" Mar 20 08:57:50 crc kubenswrapper[5136]: I0320 08:57:50.703004 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-7pqgh" event={"ID":"cbf95789-daee-44bb-9d6a-a5b503c0b1e1","Type":"ContainerStarted","Data":"d6d4627b7cc4555652fe786bd2e3248b1091ec276f5b7264a298618c4e4db0c8"} Mar 20 08:57:50 crc kubenswrapper[5136]: I0320 08:57:50.703550 5136 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-6dd7dd855f-7pqgh" Mar 20 08:57:50 crc kubenswrapper[5136]: I0320 08:57:50.705156 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-6dd7dd855f-7pqgh" Mar 20 08:57:50 crc kubenswrapper[5136]: I0320 08:57:50.723009 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-6dd7dd855f-7pqgh" podStartSLOduration=3.037612434 podStartE2EDuration="12.722960305s" podCreationTimestamp="2026-03-20 08:57:38 +0000 UTC" firstStartedPulling="2026-03-20 08:57:40.049575734 +0000 UTC m=+7692.308886885" lastFinishedPulling="2026-03-20 08:57:49.734923605 +0000 UTC m=+7701.994234756" observedRunningTime="2026-03-20 08:57:50.719022734 +0000 UTC m=+7702.978333885" watchObservedRunningTime="2026-03-20 08:57:50.722960305 +0000 UTC m=+7702.982271456" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.275609 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.276091 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="8f874f73-4453-44c8-b1d9-52559489bead" containerName="openstackclient" containerID="cri-o://9bbc0f5018299d8801809b126e8536554b83592717e71efd04c53bc88080264e" gracePeriod=2 Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.307483 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.346506 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 20 08:57:53 crc kubenswrapper[5136]: E0320 08:57:53.347084 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f874f73-4453-44c8-b1d9-52559489bead" containerName="openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: 
I0320 08:57:53.347106 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f874f73-4453-44c8-b1d9-52559489bead" containerName="openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.347404 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f874f73-4453-44c8-b1d9-52559489bead" containerName="openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.348240 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.357380 5136 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8f874f73-4453-44c8-b1d9-52559489bead" podUID="602b2568-b048-42d1-afbd-b20ebe8e7869" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.358420 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.374507 5136 status_manager.go:875] "Failed to update status for pod" pod="openstack/openstackclient" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602b2568-b048-42d1-afbd-b20ebe8e7869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T08:57:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T08:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T08:57:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openstackclient]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T08:57:53Z\\\",\\\"message\\\":\\\"containers with unready status: [openstackclient]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.rdoproject.org/podified-antelope-centos9/openstack-openstackclient:39fc4cb70f516d8e9b48225bc0a253ef\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"openstackclient\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/home/cloud-admin/.config/openstack/clouds.yaml\\\",\\\"name\\\":\\\"openstack-config\\\"},{\\\"mountPath\\\":\\\"/home/cloud-admin/.config/openstack/secure.yaml\\\",\\\"name\\\":\\\"openstack-config-secret\\\"},{\\\"mountPath\\\":\\\"/home/cloud-admin/cloudrc\\\",\\\"name\\\":\\\"openstack-config-secret\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\\\",\\\"name\\\":\\\"combined-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x9p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T08:57:53Z\\\"}}\" for pod \"openstack\"/\"openstackclient\": pods \"openstackclient\" not found" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.391880 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Mar 20 08:57:53 crc kubenswrapper[5136]: E0320 08:57:53.392811 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted 
volumes=[combined-ca-bundle kube-api-access-5x9p4 openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="602b2568-b048-42d1-afbd-b20ebe8e7869" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.403510 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:57:53 crc kubenswrapper[5136]: E0320 08:57:53.403770 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.413711 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.421216 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/602b2568-b048-42d1-afbd-b20ebe8e7869-openstack-config\") pod \"openstackclient\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.421272 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x9p4\" (UniqueName: \"kubernetes.io/projected/602b2568-b048-42d1-afbd-b20ebe8e7869-kube-api-access-5x9p4\") pod \"openstackclient\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.421383 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602b2568-b048-42d1-afbd-b20ebe8e7869-combined-ca-bundle\") pod \"openstackclient\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.421428 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/602b2568-b048-42d1-afbd-b20ebe8e7869-openstack-config-secret\") pod \"openstackclient\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.429642 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.431530 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.443202 5136 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="602b2568-b048-42d1-afbd-b20ebe8e7869" podUID="85e488a7-477c-4368-a461-725ccdc6987e" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.485881 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.524426 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgcpd\" (UniqueName: \"kubernetes.io/projected/85e488a7-477c-4368-a461-725ccdc6987e-kube-api-access-tgcpd\") pod \"openstackclient\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.524612 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/602b2568-b048-42d1-afbd-b20ebe8e7869-combined-ca-bundle\") pod \"openstackclient\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.524739 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/602b2568-b048-42d1-afbd-b20ebe8e7869-openstack-config-secret\") pod \"openstackclient\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.524784 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e488a7-477c-4368-a461-725ccdc6987e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.524970 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/602b2568-b048-42d1-afbd-b20ebe8e7869-openstack-config\") pod \"openstackclient\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.525032 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x9p4\" (UniqueName: \"kubernetes.io/projected/602b2568-b048-42d1-afbd-b20ebe8e7869-kube-api-access-5x9p4\") pod \"openstackclient\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.525129 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/85e488a7-477c-4368-a461-725ccdc6987e-openstack-config-secret\") pod 
\"openstackclient\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.525183 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/85e488a7-477c-4368-a461-725ccdc6987e-openstack-config\") pod \"openstackclient\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.526976 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.528324 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.530064 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/602b2568-b048-42d1-afbd-b20ebe8e7869-openstack-config\") pod \"openstackclient\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: E0320 08:57:53.535314 5136 projected.go:194] Error preparing data for projected volume kube-api-access-5x9p4 for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (602b2568-b048-42d1-afbd-b20ebe8e7869) does not match the UID in record. The object might have been deleted and then recreated Mar 20 08:57:53 crc kubenswrapper[5136]: E0320 08:57:53.556631 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/602b2568-b048-42d1-afbd-b20ebe8e7869-kube-api-access-5x9p4 podName:602b2568-b048-42d1-afbd-b20ebe8e7869 nodeName:}" failed. 
No retries permitted until 2026-03-20 08:57:54.056605005 +0000 UTC m=+7706.315916156 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5x9p4" (UniqueName: "kubernetes.io/projected/602b2568-b048-42d1-afbd-b20ebe8e7869-kube-api-access-5x9p4") pod "openstackclient" (UID: "602b2568-b048-42d1-afbd-b20ebe8e7869") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (602b2568-b048-42d1-afbd-b20ebe8e7869) does not match the UID in record. The object might have been deleted and then recreated Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.535393 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-dmhl7" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.557802 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.559322 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602b2568-b048-42d1-afbd-b20ebe8e7869-combined-ca-bundle\") pod \"openstackclient\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.564363 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/602b2568-b048-42d1-afbd-b20ebe8e7869-openstack-config-secret\") pod \"openstackclient\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.634234 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e488a7-477c-4368-a461-725ccdc6987e-combined-ca-bundle\") pod \"openstackclient\" (UID: 
\"85e488a7-477c-4368-a461-725ccdc6987e\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.634359 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/85e488a7-477c-4368-a461-725ccdc6987e-openstack-config-secret\") pod \"openstackclient\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.634384 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/85e488a7-477c-4368-a461-725ccdc6987e-openstack-config\") pod \"openstackclient\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.634419 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fc9n\" (UniqueName: \"kubernetes.io/projected/ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac-kube-api-access-6fc9n\") pod \"kube-state-metrics-0\" (UID: \"ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac\") " pod="openstack/kube-state-metrics-0" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.634455 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgcpd\" (UniqueName: \"kubernetes.io/projected/85e488a7-477c-4368-a461-725ccdc6987e-kube-api-access-tgcpd\") pod \"openstackclient\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.635935 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/85e488a7-477c-4368-a461-725ccdc6987e-openstack-config\") pod \"openstackclient\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc 
kubenswrapper[5136]: I0320 08:57:53.639350 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e488a7-477c-4368-a461-725ccdc6987e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.642359 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/85e488a7-477c-4368-a461-725ccdc6987e-openstack-config-secret\") pod \"openstackclient\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.665955 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgcpd\" (UniqueName: \"kubernetes.io/projected/85e488a7-477c-4368-a461-725ccdc6987e-kube-api-access-tgcpd\") pod \"openstackclient\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.736506 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fc9n\" (UniqueName: \"kubernetes.io/projected/ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac-kube-api-access-6fc9n\") pod \"kube-state-metrics-0\" (UID: \"ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac\") " pod="openstack/kube-state-metrics-0" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.770023 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.777288 5136 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="602b2568-b048-42d1-afbd-b20ebe8e7869" podUID="85e488a7-477c-4368-a461-725ccdc6987e" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.777699 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.781846 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fc9n\" (UniqueName: \"kubernetes.io/projected/ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac-kube-api-access-6fc9n\") pod \"kube-state-metrics-0\" (UID: \"ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac\") " pod="openstack/kube-state-metrics-0" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.846723 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.860689 5136 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="602b2568-b048-42d1-afbd-b20ebe8e7869" podUID="85e488a7-477c-4368-a461-725ccdc6987e" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.944937 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/602b2568-b048-42d1-afbd-b20ebe8e7869-openstack-config-secret\") pod \"602b2568-b048-42d1-afbd-b20ebe8e7869\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.945074 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/602b2568-b048-42d1-afbd-b20ebe8e7869-openstack-config\") pod 
\"602b2568-b048-42d1-afbd-b20ebe8e7869\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.945092 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602b2568-b048-42d1-afbd-b20ebe8e7869-combined-ca-bundle\") pod \"602b2568-b048-42d1-afbd-b20ebe8e7869\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.945616 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x9p4\" (UniqueName: \"kubernetes.io/projected/602b2568-b048-42d1-afbd-b20ebe8e7869-kube-api-access-5x9p4\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.957059 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/602b2568-b048-42d1-afbd-b20ebe8e7869-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "602b2568-b048-42d1-afbd-b20ebe8e7869" (UID: "602b2568-b048-42d1-afbd-b20ebe8e7869"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.963377 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/602b2568-b048-42d1-afbd-b20ebe8e7869-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "602b2568-b048-42d1-afbd-b20ebe8e7869" (UID: "602b2568-b048-42d1-afbd-b20ebe8e7869"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.969110 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/602b2568-b048-42d1-afbd-b20ebe8e7869-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "602b2568-b048-42d1-afbd-b20ebe8e7869" (UID: "602b2568-b048-42d1-afbd-b20ebe8e7869"). 
InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.000041 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w4zdj" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerName="registry-server" probeResult="failure" output=< Mar 20 08:57:54 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 08:57:54 crc kubenswrapper[5136]: > Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.050085 5136 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/602b2568-b048-42d1-afbd-b20ebe8e7869-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.050126 5136 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/602b2568-b048-42d1-afbd-b20ebe8e7869-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.050135 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602b2568-b048-42d1-afbd-b20ebe8e7869-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.064454 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.457236 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="602b2568-b048-42d1-afbd-b20ebe8e7869" path="/var/lib/kubelet/pods/602b2568-b048-42d1-afbd-b20ebe8e7869/volumes" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.464596 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.483080 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.514057 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.514341 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-jqddc" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.514483 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.514592 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.514689 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.574296 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " 
pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.574436 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.574487 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/53db9385-e63d-49a6-8dab-854c4bcd01f1-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.574529 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrfj8\" (UniqueName: \"kubernetes.io/projected/53db9385-e63d-49a6-8dab-854c4bcd01f1-kube-api-access-rrfj8\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.574563 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.574580 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/53db9385-e63d-49a6-8dab-854c4bcd01f1-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: 
\"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.574650 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/53db9385-e63d-49a6-8dab-854c4bcd01f1-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.619035 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.676237 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrfj8\" (UniqueName: \"kubernetes.io/projected/53db9385-e63d-49a6-8dab-854c4bcd01f1-kube-api-access-rrfj8\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.676294 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.676314 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/53db9385-e63d-49a6-8dab-854c4bcd01f1-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.676445 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/53db9385-e63d-49a6-8dab-854c4bcd01f1-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.676497 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.676529 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.676586 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/53db9385-e63d-49a6-8dab-854c4bcd01f1-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.680297 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/53db9385-e63d-49a6-8dab-854c4bcd01f1-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.685467 5136 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.714598 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.714948 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/53db9385-e63d-49a6-8dab-854c4bcd01f1-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.719522 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/53db9385-e63d-49a6-8dab-854c4bcd01f1-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.719997 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.748683 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrfj8\" (UniqueName: 
\"kubernetes.io/projected/53db9385-e63d-49a6-8dab-854c4bcd01f1-kube-api-access-rrfj8\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.799330 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.808800 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.827371 5136 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="602b2568-b048-42d1-afbd-b20ebe8e7869" podUID="85e488a7-477c-4368-a461-725ccdc6987e" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.917661 5136 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="602b2568-b048-42d1-afbd-b20ebe8e7869" podUID="85e488a7-477c-4368-a461-725ccdc6987e" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.012755 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.103712 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.634095 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.636971 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.643929 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.646671 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.647081 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.648127 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.648162 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.648999 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-jt99d" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.649122 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.651929 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.669046 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.723058 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.723130 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.723174 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.723245 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.723268 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.723321 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.723348 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.723394 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.723491 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdd45\" (UniqueName: \"kubernetes.io/projected/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-kube-api-access-mdd45\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.723523 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 
20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.816135 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac","Type":"ContainerStarted","Data":"6ff5903a5cc26ba52c8004bed71ec6d792235b4e0088d5c041ffe70d1e0d7e6a"} Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.817738 5136 generic.go:334] "Generic (PLEG): container finished" podID="8f874f73-4453-44c8-b1d9-52559489bead" containerID="9bbc0f5018299d8801809b126e8536554b83592717e71efd04c53bc88080264e" exitCode=137 Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.825512 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.825551 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.825598 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.825626 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.825663 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.825721 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdd45\" (UniqueName: \"kubernetes.io/projected/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-kube-api-access-mdd45\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.825739 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.825762 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-config\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.825789 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.825832 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.827361 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.827891 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.828889 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.841645 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.843157 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.843189 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ad804f31e72686a367ca365b9ecb0de79de25c176ca5187be7f65bd43ec38926/globalmount\"" pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.854823 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"85e488a7-477c-4368-a461-725ccdc6987e","Type":"ContainerStarted","Data":"affe30d23ee8aa4f7e6226db1cbd64d0ecf498b5e2794b62dd6f4aa04fc27719"} Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.883419 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.883951 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-config\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.884498 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdd45\" (UniqueName: \"kubernetes.io/projected/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-kube-api-access-mdd45\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.884850 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.888919 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.895580 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.975568 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.988901 5136 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.168356 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.351563 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8ff7\" (UniqueName: \"kubernetes.io/projected/8f874f73-4453-44c8-b1d9-52559489bead-kube-api-access-r8ff7\") pod \"8f874f73-4453-44c8-b1d9-52559489bead\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.351941 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f874f73-4453-44c8-b1d9-52559489bead-combined-ca-bundle\") pod \"8f874f73-4453-44c8-b1d9-52559489bead\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.352125 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f874f73-4453-44c8-b1d9-52559489bead-openstack-config-secret\") pod \"8f874f73-4453-44c8-b1d9-52559489bead\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.352203 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f874f73-4453-44c8-b1d9-52559489bead-openstack-config\") pod \"8f874f73-4453-44c8-b1d9-52559489bead\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.362087 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f874f73-4453-44c8-b1d9-52559489bead-kube-api-access-r8ff7" (OuterVolumeSpecName: 
"kube-api-access-r8ff7") pod "8f874f73-4453-44c8-b1d9-52559489bead" (UID: "8f874f73-4453-44c8-b1d9-52559489bead"). InnerVolumeSpecName "kube-api-access-r8ff7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.441894 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f874f73-4453-44c8-b1d9-52559489bead-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "8f874f73-4453-44c8-b1d9-52559489bead" (UID: "8f874f73-4453-44c8-b1d9-52559489bead"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.454733 5136 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f874f73-4453-44c8-b1d9-52559489bead-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.454760 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8ff7\" (UniqueName: \"kubernetes.io/projected/8f874f73-4453-44c8-b1d9-52559489bead-kube-api-access-r8ff7\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.463350 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f874f73-4453-44c8-b1d9-52559489bead-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f874f73-4453-44c8-b1d9-52559489bead" (UID: "8f874f73-4453-44c8-b1d9-52559489bead"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.556763 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f874f73-4453-44c8-b1d9-52559489bead-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.739089 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f874f73-4453-44c8-b1d9-52559489bead-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "8f874f73-4453-44c8-b1d9-52559489bead" (UID: "8f874f73-4453-44c8-b1d9-52559489bead"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.774766 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.777620 5136 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f874f73-4453-44c8-b1d9-52559489bead-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:56 crc kubenswrapper[5136]: W0320 08:57:56.792718 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f27cf85_7e28_48aa_b93a_0647c59a7dcc.slice/crio-4ab763507cd2c9da05be42fe8e977f93c3eb5200fdec5c434fb058393a4e3e30 WatchSource:0}: Error finding container 4ab763507cd2c9da05be42fe8e977f93c3eb5200fdec5c434fb058393a4e3e30: Status 404 returned error can't find the container with id 4ab763507cd2c9da05be42fe8e977f93c3eb5200fdec5c434fb058393a4e3e30 Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.904029 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" 
event={"ID":"53db9385-e63d-49a6-8dab-854c4bcd01f1","Type":"ContainerStarted","Data":"b199eb0de00dd4b5665ed78ec596b43fb8d24fc2002bd3f4dd356a32c51b4138"} Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.906355 5136 scope.go:117] "RemoveContainer" containerID="9bbc0f5018299d8801809b126e8536554b83592717e71efd04c53bc88080264e" Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.906505 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.927456 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"85e488a7-477c-4368-a461-725ccdc6987e","Type":"ContainerStarted","Data":"c0b4485ed3a40b504bf79337010ee963ceceae4d07ffa4c5f67bc76669810e3e"} Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.931411 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f27cf85-7e28-48aa-b93a-0647c59a7dcc","Type":"ContainerStarted","Data":"4ab763507cd2c9da05be42fe8e977f93c3eb5200fdec5c434fb058393a4e3e30"} Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.948551 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac","Type":"ContainerStarted","Data":"95977db38ee1f44693cb25b914c16cc8f59c4c6bf618255fd7fb1f5a495e2a76"} Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.949042 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.950279 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.950260944 podStartE2EDuration="3.950260944s" podCreationTimestamp="2026-03-20 08:57:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-20 08:57:56.943449463 +0000 UTC m=+7709.202760624" watchObservedRunningTime="2026-03-20 08:57:56.950260944 +0000 UTC m=+7709.209572095" Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.980678 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.4246009490000002 podStartE2EDuration="3.980652845s" podCreationTimestamp="2026-03-20 08:57:53 +0000 UTC" firstStartedPulling="2026-03-20 08:57:55.084607103 +0000 UTC m=+7707.343918254" lastFinishedPulling="2026-03-20 08:57:55.640658999 +0000 UTC m=+7707.899970150" observedRunningTime="2026-03-20 08:57:56.971569274 +0000 UTC m=+7709.230880425" watchObservedRunningTime="2026-03-20 08:57:56.980652845 +0000 UTC m=+7709.239963996" Mar 20 08:57:58 crc kubenswrapper[5136]: I0320 08:57:58.411080 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f874f73-4453-44c8-b1d9-52559489bead" path="/var/lib/kubelet/pods/8f874f73-4453-44c8-b1d9-52559489bead/volumes" Mar 20 08:58:00 crc kubenswrapper[5136]: I0320 08:58:00.153990 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566618-mxdtq"] Mar 20 08:58:00 crc kubenswrapper[5136]: I0320 08:58:00.155960 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566618-mxdtq" Mar 20 08:58:00 crc kubenswrapper[5136]: I0320 08:58:00.162191 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:58:00 crc kubenswrapper[5136]: I0320 08:58:00.162769 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:58:00 crc kubenswrapper[5136]: I0320 08:58:00.162938 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:58:00 crc kubenswrapper[5136]: I0320 08:58:00.174855 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566618-mxdtq"] Mar 20 08:58:00 crc kubenswrapper[5136]: I0320 08:58:00.282139 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdjjf\" (UniqueName: \"kubernetes.io/projected/c9b36af1-10a6-412b-a488-892560533fbc-kube-api-access-zdjjf\") pod \"auto-csr-approver-29566618-mxdtq\" (UID: \"c9b36af1-10a6-412b-a488-892560533fbc\") " pod="openshift-infra/auto-csr-approver-29566618-mxdtq" Mar 20 08:58:00 crc kubenswrapper[5136]: I0320 08:58:00.384105 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdjjf\" (UniqueName: \"kubernetes.io/projected/c9b36af1-10a6-412b-a488-892560533fbc-kube-api-access-zdjjf\") pod \"auto-csr-approver-29566618-mxdtq\" (UID: \"c9b36af1-10a6-412b-a488-892560533fbc\") " pod="openshift-infra/auto-csr-approver-29566618-mxdtq" Mar 20 08:58:00 crc kubenswrapper[5136]: I0320 08:58:00.406063 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdjjf\" (UniqueName: \"kubernetes.io/projected/c9b36af1-10a6-412b-a488-892560533fbc-kube-api-access-zdjjf\") pod \"auto-csr-approver-29566618-mxdtq\" (UID: \"c9b36af1-10a6-412b-a488-892560533fbc\") " 
pod="openshift-infra/auto-csr-approver-29566618-mxdtq" Mar 20 08:58:00 crc kubenswrapper[5136]: I0320 08:58:00.510563 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566618-mxdtq" Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.008381 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566618-mxdtq"] Mar 20 08:58:01 crc kubenswrapper[5136]: W0320 08:58:01.015928 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9b36af1_10a6_412b_a488_892560533fbc.slice/crio-848fa253976413937eb1ffa1e1d766d564070378f11cdf63fa88656afded2d3f WatchSource:0}: Error finding container 848fa253976413937eb1ffa1e1d766d564070378f11cdf63fa88656afded2d3f: Status 404 returned error can't find the container with id 848fa253976413937eb1ffa1e1d766d564070378f11cdf63fa88656afded2d3f Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.455868 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lbsbr"] Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.457852 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.476019 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lbsbr"] Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.611775 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/221d005e-2b68-4835-9bcc-69b3d391e37f-utilities\") pod \"community-operators-lbsbr\" (UID: \"221d005e-2b68-4835-9bcc-69b3d391e37f\") " pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.612085 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46xbq\" (UniqueName: \"kubernetes.io/projected/221d005e-2b68-4835-9bcc-69b3d391e37f-kube-api-access-46xbq\") pod \"community-operators-lbsbr\" (UID: \"221d005e-2b68-4835-9bcc-69b3d391e37f\") " pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.612170 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/221d005e-2b68-4835-9bcc-69b3d391e37f-catalog-content\") pod \"community-operators-lbsbr\" (UID: \"221d005e-2b68-4835-9bcc-69b3d391e37f\") " pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.714361 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/221d005e-2b68-4835-9bcc-69b3d391e37f-utilities\") pod \"community-operators-lbsbr\" (UID: \"221d005e-2b68-4835-9bcc-69b3d391e37f\") " pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.714438 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-46xbq\" (UniqueName: \"kubernetes.io/projected/221d005e-2b68-4835-9bcc-69b3d391e37f-kube-api-access-46xbq\") pod \"community-operators-lbsbr\" (UID: \"221d005e-2b68-4835-9bcc-69b3d391e37f\") " pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.714463 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/221d005e-2b68-4835-9bcc-69b3d391e37f-catalog-content\") pod \"community-operators-lbsbr\" (UID: \"221d005e-2b68-4835-9bcc-69b3d391e37f\") " pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.715021 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/221d005e-2b68-4835-9bcc-69b3d391e37f-utilities\") pod \"community-operators-lbsbr\" (UID: \"221d005e-2b68-4835-9bcc-69b3d391e37f\") " pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.715034 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/221d005e-2b68-4835-9bcc-69b3d391e37f-catalog-content\") pod \"community-operators-lbsbr\" (UID: \"221d005e-2b68-4835-9bcc-69b3d391e37f\") " pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.736307 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46xbq\" (UniqueName: \"kubernetes.io/projected/221d005e-2b68-4835-9bcc-69b3d391e37f-kube-api-access-46xbq\") pod \"community-operators-lbsbr\" (UID: \"221d005e-2b68-4835-9bcc-69b3d391e37f\") " pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.774364 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:02 crc kubenswrapper[5136]: I0320 08:58:02.089361 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566618-mxdtq" event={"ID":"c9b36af1-10a6-412b-a488-892560533fbc","Type":"ContainerStarted","Data":"848fa253976413937eb1ffa1e1d766d564070378f11cdf63fa88656afded2d3f"} Mar 20 08:58:02 crc kubenswrapper[5136]: I0320 08:58:02.327330 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lbsbr"] Mar 20 08:58:02 crc kubenswrapper[5136]: W0320 08:58:02.385304 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod221d005e_2b68_4835_9bcc_69b3d391e37f.slice/crio-8ac3373187bdeaa0234df40ace0e547f666a457b3d45d5370978b78068db5a24 WatchSource:0}: Error finding container 8ac3373187bdeaa0234df40ace0e547f666a457b3d45d5370978b78068db5a24: Status 404 returned error can't find the container with id 8ac3373187bdeaa0234df40ace0e547f666a457b3d45d5370978b78068db5a24 Mar 20 08:58:03 crc kubenswrapper[5136]: I0320 08:58:03.103524 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566618-mxdtq" event={"ID":"c9b36af1-10a6-412b-a488-892560533fbc","Type":"ContainerStarted","Data":"95c081e05a9b5bcdeb5bad36239bef981a1b982b216877bab99876ec036176fc"} Mar 20 08:58:03 crc kubenswrapper[5136]: I0320 08:58:03.106502 5136 generic.go:334] "Generic (PLEG): container finished" podID="221d005e-2b68-4835-9bcc-69b3d391e37f" containerID="ae12d8645e0d7ade8d1cd9301e6c976cb936fab5f6814efd35ed9b9d05ea4df5" exitCode=0 Mar 20 08:58:03 crc kubenswrapper[5136]: I0320 08:58:03.106551 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbsbr" 
event={"ID":"221d005e-2b68-4835-9bcc-69b3d391e37f","Type":"ContainerDied","Data":"ae12d8645e0d7ade8d1cd9301e6c976cb936fab5f6814efd35ed9b9d05ea4df5"} Mar 20 08:58:03 crc kubenswrapper[5136]: I0320 08:58:03.106617 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbsbr" event={"ID":"221d005e-2b68-4835-9bcc-69b3d391e37f","Type":"ContainerStarted","Data":"8ac3373187bdeaa0234df40ace0e547f666a457b3d45d5370978b78068db5a24"} Mar 20 08:58:03 crc kubenswrapper[5136]: I0320 08:58:03.109442 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f27cf85-7e28-48aa-b93a-0647c59a7dcc","Type":"ContainerStarted","Data":"0f81420fdec828e5dce02cec8a66a0c6cd8324009373b1662fde78097754c972"} Mar 20 08:58:03 crc kubenswrapper[5136]: I0320 08:58:03.113436 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"53db9385-e63d-49a6-8dab-854c4bcd01f1","Type":"ContainerStarted","Data":"a3cfce9ddd036c7cf63f0412122dd66290e1e5696f18bc8cd0c8f3ee086aeaaa"} Mar 20 08:58:03 crc kubenswrapper[5136]: I0320 08:58:03.131268 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566618-mxdtq" podStartSLOduration=2.08990668 podStartE2EDuration="3.13124259s" podCreationTimestamp="2026-03-20 08:58:00 +0000 UTC" firstStartedPulling="2026-03-20 08:58:01.018630503 +0000 UTC m=+7713.277941654" lastFinishedPulling="2026-03-20 08:58:02.059966413 +0000 UTC m=+7714.319277564" observedRunningTime="2026-03-20 08:58:03.119603189 +0000 UTC m=+7715.378914340" watchObservedRunningTime="2026-03-20 08:58:03.13124259 +0000 UTC m=+7715.390553761" Mar 20 08:58:03 crc kubenswrapper[5136]: I0320 08:58:03.985357 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w4zdj" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerName="registry-server" 
probeResult="failure" output=< Mar 20 08:58:03 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 08:58:03 crc kubenswrapper[5136]: > Mar 20 08:58:04 crc kubenswrapper[5136]: I0320 08:58:04.071786 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 20 08:58:04 crc kubenswrapper[5136]: I0320 08:58:04.126415 5136 generic.go:334] "Generic (PLEG): container finished" podID="c9b36af1-10a6-412b-a488-892560533fbc" containerID="95c081e05a9b5bcdeb5bad36239bef981a1b982b216877bab99876ec036176fc" exitCode=0 Mar 20 08:58:04 crc kubenswrapper[5136]: I0320 08:58:04.126493 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566618-mxdtq" event={"ID":"c9b36af1-10a6-412b-a488-892560533fbc","Type":"ContainerDied","Data":"95c081e05a9b5bcdeb5bad36239bef981a1b982b216877bab99876ec036176fc"} Mar 20 08:58:05 crc kubenswrapper[5136]: I0320 08:58:05.582499 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566618-mxdtq" Mar 20 08:58:05 crc kubenswrapper[5136]: I0320 08:58:05.619283 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdjjf\" (UniqueName: \"kubernetes.io/projected/c9b36af1-10a6-412b-a488-892560533fbc-kube-api-access-zdjjf\") pod \"c9b36af1-10a6-412b-a488-892560533fbc\" (UID: \"c9b36af1-10a6-412b-a488-892560533fbc\") " Mar 20 08:58:05 crc kubenswrapper[5136]: I0320 08:58:05.629331 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9b36af1-10a6-412b-a488-892560533fbc-kube-api-access-zdjjf" (OuterVolumeSpecName: "kube-api-access-zdjjf") pod "c9b36af1-10a6-412b-a488-892560533fbc" (UID: "c9b36af1-10a6-412b-a488-892560533fbc"). InnerVolumeSpecName "kube-api-access-zdjjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:58:05 crc kubenswrapper[5136]: I0320 08:58:05.721959 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdjjf\" (UniqueName: \"kubernetes.io/projected/c9b36af1-10a6-412b-a488-892560533fbc-kube-api-access-zdjjf\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:06 crc kubenswrapper[5136]: I0320 08:58:06.148173 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566618-mxdtq" event={"ID":"c9b36af1-10a6-412b-a488-892560533fbc","Type":"ContainerDied","Data":"848fa253976413937eb1ffa1e1d766d564070378f11cdf63fa88656afded2d3f"} Mar 20 08:58:06 crc kubenswrapper[5136]: I0320 08:58:06.148219 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="848fa253976413937eb1ffa1e1d766d564070378f11cdf63fa88656afded2d3f" Mar 20 08:58:06 crc kubenswrapper[5136]: I0320 08:58:06.148229 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566618-mxdtq" Mar 20 08:58:06 crc kubenswrapper[5136]: I0320 08:58:06.250587 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566612-gmnm6"] Mar 20 08:58:06 crc kubenswrapper[5136]: I0320 08:58:06.262171 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566612-gmnm6"] Mar 20 08:58:06 crc kubenswrapper[5136]: I0320 08:58:06.406458 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="474fd165-50ec-4d02-9f52-eb18382cee27" path="/var/lib/kubelet/pods/474fd165-50ec-4d02-9f52-eb18382cee27/volumes" Mar 20 08:58:07 crc kubenswrapper[5136]: I0320 08:58:07.164436 5136 generic.go:334] "Generic (PLEG): container finished" podID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerID="a3cfce9ddd036c7cf63f0412122dd66290e1e5696f18bc8cd0c8f3ee086aeaaa" exitCode=0 Mar 20 08:58:07 crc kubenswrapper[5136]: I0320 08:58:07.164477 5136 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"53db9385-e63d-49a6-8dab-854c4bcd01f1","Type":"ContainerDied","Data":"a3cfce9ddd036c7cf63f0412122dd66290e1e5696f18bc8cd0c8f3ee086aeaaa"} Mar 20 08:58:08 crc kubenswrapper[5136]: I0320 08:58:08.181418 5136 generic.go:334] "Generic (PLEG): container finished" podID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerID="0f81420fdec828e5dce02cec8a66a0c6cd8324009373b1662fde78097754c972" exitCode=0 Mar 20 08:58:08 crc kubenswrapper[5136]: I0320 08:58:08.181460 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f27cf85-7e28-48aa-b93a-0647c59a7dcc","Type":"ContainerDied","Data":"0f81420fdec828e5dce02cec8a66a0c6cd8324009373b1662fde78097754c972"} Mar 20 08:58:08 crc kubenswrapper[5136]: I0320 08:58:08.405225 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:58:08 crc kubenswrapper[5136]: E0320 08:58:08.405478 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:58:09 crc kubenswrapper[5136]: I0320 08:58:09.194679 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbsbr" event={"ID":"221d005e-2b68-4835-9bcc-69b3d391e37f","Type":"ContainerStarted","Data":"c065757c937f22da9e8b7667e94c1c8c670b21bd25f6581d8d73ba92f6d8453a"} Mar 20 08:58:10 crc kubenswrapper[5136]: I0320 08:58:10.211198 5136 generic.go:334] "Generic (PLEG): container finished" podID="221d005e-2b68-4835-9bcc-69b3d391e37f" 
containerID="c065757c937f22da9e8b7667e94c1c8c670b21bd25f6581d8d73ba92f6d8453a" exitCode=0 Mar 20 08:58:10 crc kubenswrapper[5136]: I0320 08:58:10.211239 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbsbr" event={"ID":"221d005e-2b68-4835-9bcc-69b3d391e37f","Type":"ContainerDied","Data":"c065757c937f22da9e8b7667e94c1c8c670b21bd25f6581d8d73ba92f6d8453a"} Mar 20 08:58:11 crc kubenswrapper[5136]: I0320 08:58:11.225489 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"53db9385-e63d-49a6-8dab-854c4bcd01f1","Type":"ContainerStarted","Data":"20f8a9a945087915c09ba9f6c5bb3fad1e06a23db6077bf675c7ee359a2b9ea4"} Mar 20 08:58:12 crc kubenswrapper[5136]: I0320 08:58:12.237684 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbsbr" event={"ID":"221d005e-2b68-4835-9bcc-69b3d391e37f","Type":"ContainerStarted","Data":"066ba98012246429f2094fb53db95c9626683ff7906a87b660df9b50aa1b5b85"} Mar 20 08:58:13 crc kubenswrapper[5136]: I0320 08:58:13.978618 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w4zdj" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerName="registry-server" probeResult="failure" output=< Mar 20 08:58:13 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 08:58:13 crc kubenswrapper[5136]: > Mar 20 08:58:14 crc kubenswrapper[5136]: I0320 08:58:14.259436 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"53db9385-e63d-49a6-8dab-854c4bcd01f1","Type":"ContainerStarted","Data":"c48b0b35287dd607ba880a1efbf0d79170312ce43d17777e16e04be5b17bbe8a"} Mar 20 08:58:14 crc kubenswrapper[5136]: I0320 08:58:14.259729 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Mar 20 08:58:14 crc 
kubenswrapper[5136]: I0320 08:58:14.263439 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Mar 20 08:58:14 crc kubenswrapper[5136]: I0320 08:58:14.286488 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lbsbr" podStartSLOduration=5.306366661 podStartE2EDuration="13.286463538s" podCreationTimestamp="2026-03-20 08:58:01 +0000 UTC" firstStartedPulling="2026-03-20 08:58:03.108498625 +0000 UTC m=+7715.367809776" lastFinishedPulling="2026-03-20 08:58:11.088595502 +0000 UTC m=+7723.347906653" observedRunningTime="2026-03-20 08:58:12.25748904 +0000 UTC m=+7724.516800191" watchObservedRunningTime="2026-03-20 08:58:14.286463538 +0000 UTC m=+7726.545774689" Mar 20 08:58:14 crc kubenswrapper[5136]: I0320 08:58:14.288163 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=5.440450972 podStartE2EDuration="20.288154731s" podCreationTimestamp="2026-03-20 08:57:54 +0000 UTC" firstStartedPulling="2026-03-20 08:57:55.886941764 +0000 UTC m=+7708.146252915" lastFinishedPulling="2026-03-20 08:58:10.734645523 +0000 UTC m=+7722.993956674" observedRunningTime="2026-03-20 08:58:14.282883918 +0000 UTC m=+7726.542195069" watchObservedRunningTime="2026-03-20 08:58:14.288154731 +0000 UTC m=+7726.547465882" Mar 20 08:58:19 crc kubenswrapper[5136]: I0320 08:58:19.307772 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f27cf85-7e28-48aa-b93a-0647c59a7dcc","Type":"ContainerStarted","Data":"c4592edf336fdf2500951f3bf6021b40ab54c666d60df8a359a5c1fd53d8e319"} Mar 20 08:58:20 crc kubenswrapper[5136]: I0320 08:58:20.398505 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:58:20 crc kubenswrapper[5136]: E0320 08:58:20.399031 5136 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:58:21 crc kubenswrapper[5136]: I0320 08:58:21.329127 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f27cf85-7e28-48aa-b93a-0647c59a7dcc","Type":"ContainerStarted","Data":"4558c33f4de2c378dedbd4356bb40a205875151331a5b77dc10d84ca3825805f"} Mar 20 08:58:21 crc kubenswrapper[5136]: I0320 08:58:21.775244 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:21 crc kubenswrapper[5136]: I0320 08:58:21.775986 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:21 crc kubenswrapper[5136]: I0320 08:58:21.822495 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:22 crc kubenswrapper[5136]: I0320 08:58:22.386289 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:22 crc kubenswrapper[5136]: I0320 08:58:22.467139 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lbsbr"] Mar 20 08:58:22 crc kubenswrapper[5136]: I0320 08:58:22.511379 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dmmv5"] Mar 20 08:58:22 crc kubenswrapper[5136]: I0320 08:58:22.511613 5136 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-dmmv5" podUID="bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" containerName="registry-server" containerID="cri-o://433d8d118a822ff97efcfc1c8de93d44c32a5d6b9bf13a2db4bb7a4aa4753a0b" gracePeriod=2 Mar 20 08:58:22 crc kubenswrapper[5136]: I0320 08:58:22.989670 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w4zdj" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.060449 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w4zdj" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.133348 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dmmv5" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.210239 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-utilities\") pod \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\" (UID: \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\") " Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.210423 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92f6b\" (UniqueName: \"kubernetes.io/projected/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-kube-api-access-92f6b\") pod \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\" (UID: \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\") " Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.210543 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-catalog-content\") pod \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\" (UID: \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\") " Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.211229 5136 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-utilities" (OuterVolumeSpecName: "utilities") pod "bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" (UID: "bd7d7add-fc30-4efd-96dc-b253a6fd1b8b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.223110 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-kube-api-access-92f6b" (OuterVolumeSpecName: "kube-api-access-92f6b") pod "bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" (UID: "bd7d7add-fc30-4efd-96dc-b253a6fd1b8b"). InnerVolumeSpecName "kube-api-access-92f6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.295482 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" (UID: "bd7d7add-fc30-4efd-96dc-b253a6fd1b8b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.312723 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92f6b\" (UniqueName: \"kubernetes.io/projected/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-kube-api-access-92f6b\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.312756 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.312765 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.352463 5136 generic.go:334] "Generic (PLEG): container finished" podID="bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" containerID="433d8d118a822ff97efcfc1c8de93d44c32a5d6b9bf13a2db4bb7a4aa4753a0b" exitCode=0 Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.352911 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmmv5" event={"ID":"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b","Type":"ContainerDied","Data":"433d8d118a822ff97efcfc1c8de93d44c32a5d6b9bf13a2db4bb7a4aa4753a0b"} Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.353000 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmmv5" event={"ID":"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b","Type":"ContainerDied","Data":"cffe9a0608630e68ebe445b4b73ca250c67588ca57233ed2a5a8f8aeafc8a8ef"} Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.352998 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dmmv5" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.353047 5136 scope.go:117] "RemoveContainer" containerID="433d8d118a822ff97efcfc1c8de93d44c32a5d6b9bf13a2db4bb7a4aa4753a0b" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.387311 5136 scope.go:117] "RemoveContainer" containerID="72560082fe8c2073a778c54fa6e7b3019318a9079881095b3a52834527633ecd" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.394605 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dmmv5"] Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.403830 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dmmv5"] Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.415716 5136 scope.go:117] "RemoveContainer" containerID="360129e0e8a017b4f0c343c930bc4a4f1e5b8b9f8a16f082a9c17bc91e39b677" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.459368 5136 scope.go:117] "RemoveContainer" containerID="433d8d118a822ff97efcfc1c8de93d44c32a5d6b9bf13a2db4bb7a4aa4753a0b" Mar 20 08:58:23 crc kubenswrapper[5136]: E0320 08:58:23.459847 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"433d8d118a822ff97efcfc1c8de93d44c32a5d6b9bf13a2db4bb7a4aa4753a0b\": container with ID starting with 433d8d118a822ff97efcfc1c8de93d44c32a5d6b9bf13a2db4bb7a4aa4753a0b not found: ID does not exist" containerID="433d8d118a822ff97efcfc1c8de93d44c32a5d6b9bf13a2db4bb7a4aa4753a0b" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.459889 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"433d8d118a822ff97efcfc1c8de93d44c32a5d6b9bf13a2db4bb7a4aa4753a0b"} err="failed to get container status \"433d8d118a822ff97efcfc1c8de93d44c32a5d6b9bf13a2db4bb7a4aa4753a0b\": rpc error: code = NotFound desc = could not find 
container \"433d8d118a822ff97efcfc1c8de93d44c32a5d6b9bf13a2db4bb7a4aa4753a0b\": container with ID starting with 433d8d118a822ff97efcfc1c8de93d44c32a5d6b9bf13a2db4bb7a4aa4753a0b not found: ID does not exist" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.459913 5136 scope.go:117] "RemoveContainer" containerID="72560082fe8c2073a778c54fa6e7b3019318a9079881095b3a52834527633ecd" Mar 20 08:58:23 crc kubenswrapper[5136]: E0320 08:58:23.460428 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72560082fe8c2073a778c54fa6e7b3019318a9079881095b3a52834527633ecd\": container with ID starting with 72560082fe8c2073a778c54fa6e7b3019318a9079881095b3a52834527633ecd not found: ID does not exist" containerID="72560082fe8c2073a778c54fa6e7b3019318a9079881095b3a52834527633ecd" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.460447 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72560082fe8c2073a778c54fa6e7b3019318a9079881095b3a52834527633ecd"} err="failed to get container status \"72560082fe8c2073a778c54fa6e7b3019318a9079881095b3a52834527633ecd\": rpc error: code = NotFound desc = could not find container \"72560082fe8c2073a778c54fa6e7b3019318a9079881095b3a52834527633ecd\": container with ID starting with 72560082fe8c2073a778c54fa6e7b3019318a9079881095b3a52834527633ecd not found: ID does not exist" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.460460 5136 scope.go:117] "RemoveContainer" containerID="360129e0e8a017b4f0c343c930bc4a4f1e5b8b9f8a16f082a9c17bc91e39b677" Mar 20 08:58:23 crc kubenswrapper[5136]: E0320 08:58:23.460694 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"360129e0e8a017b4f0c343c930bc4a4f1e5b8b9f8a16f082a9c17bc91e39b677\": container with ID starting with 360129e0e8a017b4f0c343c930bc4a4f1e5b8b9f8a16f082a9c17bc91e39b677 not found: ID does 
not exist" containerID="360129e0e8a017b4f0c343c930bc4a4f1e5b8b9f8a16f082a9c17bc91e39b677" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.460724 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"360129e0e8a017b4f0c343c930bc4a4f1e5b8b9f8a16f082a9c17bc91e39b677"} err="failed to get container status \"360129e0e8a017b4f0c343c930bc4a4f1e5b8b9f8a16f082a9c17bc91e39b677\": rpc error: code = NotFound desc = could not find container \"360129e0e8a017b4f0c343c930bc4a4f1e5b8b9f8a16f082a9c17bc91e39b677\": container with ID starting with 360129e0e8a017b4f0c343c930bc4a4f1e5b8b9f8a16f082a9c17bc91e39b677 not found: ID does not exist" Mar 20 08:58:24 crc kubenswrapper[5136]: I0320 08:58:24.407033 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" path="/var/lib/kubelet/pods/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b/volumes" Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.273770 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w4zdj"] Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.274261 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w4zdj" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerName="registry-server" containerID="cri-o://a3549e454b3af8bca8a21ce48c0f40eeba05a452e41f41e4fd9ab1e221d1c0b1" gracePeriod=2 Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.387565 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f27cf85-7e28-48aa-b93a-0647c59a7dcc","Type":"ContainerStarted","Data":"900d4676bbda872999819d15b70a4cf6ea3c9fa4b3672549e4f5c0f54e67afed"} Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.414932 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.276671029 
podStartE2EDuration="31.414914058s" podCreationTimestamp="2026-03-20 08:57:54 +0000 UTC" firstStartedPulling="2026-03-20 08:57:56.814615314 +0000 UTC m=+7709.073926465" lastFinishedPulling="2026-03-20 08:58:24.952858343 +0000 UTC m=+7737.212169494" observedRunningTime="2026-03-20 08:58:25.409291604 +0000 UTC m=+7737.668602755" watchObservedRunningTime="2026-03-20 08:58:25.414914058 +0000 UTC m=+7737.674225209" Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.802898 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w4zdj" Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.860107 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncj89\" (UniqueName: \"kubernetes.io/projected/fbbd891d-c8eb-404c-8255-2a3bba4035ee-kube-api-access-ncj89\") pod \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\" (UID: \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\") " Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.860208 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbbd891d-c8eb-404c-8255-2a3bba4035ee-utilities\") pod \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\" (UID: \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\") " Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.860267 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbbd891d-c8eb-404c-8255-2a3bba4035ee-catalog-content\") pod \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\" (UID: \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\") " Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.861039 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbbd891d-c8eb-404c-8255-2a3bba4035ee-utilities" (OuterVolumeSpecName: "utilities") pod "fbbd891d-c8eb-404c-8255-2a3bba4035ee" (UID: 
"fbbd891d-c8eb-404c-8255-2a3bba4035ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.865799 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbbd891d-c8eb-404c-8255-2a3bba4035ee-kube-api-access-ncj89" (OuterVolumeSpecName: "kube-api-access-ncj89") pod "fbbd891d-c8eb-404c-8255-2a3bba4035ee" (UID: "fbbd891d-c8eb-404c-8255-2a3bba4035ee"). InnerVolumeSpecName "kube-api-access-ncj89". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.962401 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncj89\" (UniqueName: \"kubernetes.io/projected/fbbd891d-c8eb-404c-8255-2a3bba4035ee-kube-api-access-ncj89\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.962442 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbbd891d-c8eb-404c-8255-2a3bba4035ee-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.980528 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbbd891d-c8eb-404c-8255-2a3bba4035ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fbbd891d-c8eb-404c-8255-2a3bba4035ee" (UID: "fbbd891d-c8eb-404c-8255-2a3bba4035ee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.990550 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.990594 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.993390 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.064343 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbbd891d-c8eb-404c-8255-2a3bba4035ee-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.397545 5136 generic.go:334] "Generic (PLEG): container finished" podID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerID="a3549e454b3af8bca8a21ce48c0f40eeba05a452e41f41e4fd9ab1e221d1c0b1" exitCode=0 Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.397652 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w4zdj" Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.406701 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w4zdj" event={"ID":"fbbd891d-c8eb-404c-8255-2a3bba4035ee","Type":"ContainerDied","Data":"a3549e454b3af8bca8a21ce48c0f40eeba05a452e41f41e4fd9ab1e221d1c0b1"} Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.409026 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w4zdj" event={"ID":"fbbd891d-c8eb-404c-8255-2a3bba4035ee","Type":"ContainerDied","Data":"b8ea34d57154026d7acfa78afb0581c846280208e141611a13e5cd82670eae89"} Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.409274 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.409114 5136 scope.go:117] "RemoveContainer" containerID="a3549e454b3af8bca8a21ce48c0f40eeba05a452e41f41e4fd9ab1e221d1c0b1" Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.455674 5136 scope.go:117] "RemoveContainer" containerID="9094788af2a605670849104ee4d6adee74372e1ebb33687ad51ef917b6de5515" Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.475409 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w4zdj"] Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.527725 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w4zdj"] Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.531961 5136 scope.go:117] "RemoveContainer" containerID="3f9934810b0bc967208152c81856a6359b11543f4d527a5e931b126e5e0d7efd" Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.562696 5136 scope.go:117] "RemoveContainer" containerID="a3549e454b3af8bca8a21ce48c0f40eeba05a452e41f41e4fd9ab1e221d1c0b1" Mar 20 08:58:26 crc kubenswrapper[5136]: E0320 
08:58:26.563371 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3549e454b3af8bca8a21ce48c0f40eeba05a452e41f41e4fd9ab1e221d1c0b1\": container with ID starting with a3549e454b3af8bca8a21ce48c0f40eeba05a452e41f41e4fd9ab1e221d1c0b1 not found: ID does not exist" containerID="a3549e454b3af8bca8a21ce48c0f40eeba05a452e41f41e4fd9ab1e221d1c0b1" Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.563401 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3549e454b3af8bca8a21ce48c0f40eeba05a452e41f41e4fd9ab1e221d1c0b1"} err="failed to get container status \"a3549e454b3af8bca8a21ce48c0f40eeba05a452e41f41e4fd9ab1e221d1c0b1\": rpc error: code = NotFound desc = could not find container \"a3549e454b3af8bca8a21ce48c0f40eeba05a452e41f41e4fd9ab1e221d1c0b1\": container with ID starting with a3549e454b3af8bca8a21ce48c0f40eeba05a452e41f41e4fd9ab1e221d1c0b1 not found: ID does not exist" Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.563428 5136 scope.go:117] "RemoveContainer" containerID="9094788af2a605670849104ee4d6adee74372e1ebb33687ad51ef917b6de5515" Mar 20 08:58:26 crc kubenswrapper[5136]: E0320 08:58:26.568127 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9094788af2a605670849104ee4d6adee74372e1ebb33687ad51ef917b6de5515\": container with ID starting with 9094788af2a605670849104ee4d6adee74372e1ebb33687ad51ef917b6de5515 not found: ID does not exist" containerID="9094788af2a605670849104ee4d6adee74372e1ebb33687ad51ef917b6de5515" Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.568172 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9094788af2a605670849104ee4d6adee74372e1ebb33687ad51ef917b6de5515"} err="failed to get container status \"9094788af2a605670849104ee4d6adee74372e1ebb33687ad51ef917b6de5515\": rpc 
error: code = NotFound desc = could not find container \"9094788af2a605670849104ee4d6adee74372e1ebb33687ad51ef917b6de5515\": container with ID starting with 9094788af2a605670849104ee4d6adee74372e1ebb33687ad51ef917b6de5515 not found: ID does not exist" Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.568199 5136 scope.go:117] "RemoveContainer" containerID="3f9934810b0bc967208152c81856a6359b11543f4d527a5e931b126e5e0d7efd" Mar 20 08:58:26 crc kubenswrapper[5136]: E0320 08:58:26.568613 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f9934810b0bc967208152c81856a6359b11543f4d527a5e931b126e5e0d7efd\": container with ID starting with 3f9934810b0bc967208152c81856a6359b11543f4d527a5e931b126e5e0d7efd not found: ID does not exist" containerID="3f9934810b0bc967208152c81856a6359b11543f4d527a5e931b126e5e0d7efd" Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.568648 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f9934810b0bc967208152c81856a6359b11543f4d527a5e931b126e5e0d7efd"} err="failed to get container status \"3f9934810b0bc967208152c81856a6359b11543f4d527a5e931b126e5e0d7efd\": rpc error: code = NotFound desc = could not find container \"3f9934810b0bc967208152c81856a6359b11543f4d527a5e931b126e5e0d7efd\": container with ID starting with 3f9934810b0bc967208152c81856a6359b11543f4d527a5e931b126e5e0d7efd not found: ID does not exist" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.193563 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.194160 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="85e488a7-477c-4368-a461-725ccdc6987e" containerName="openstackclient" containerID="cri-o://c0b4485ed3a40b504bf79337010ee963ceceae4d07ffa4c5f67bc76669810e3e" gracePeriod=2 Mar 20 
08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.212465 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.223943 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 20 08:58:28 crc kubenswrapper[5136]: E0320 08:58:28.224430 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" containerName="extract-content" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.224449 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" containerName="extract-content" Mar 20 08:58:28 crc kubenswrapper[5136]: E0320 08:58:28.224461 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" containerName="extract-utilities" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.224467 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" containerName="extract-utilities" Mar 20 08:58:28 crc kubenswrapper[5136]: E0320 08:58:28.224474 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerName="registry-server" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.224481 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerName="registry-server" Mar 20 08:58:28 crc kubenswrapper[5136]: E0320 08:58:28.224495 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85e488a7-477c-4368-a461-725ccdc6987e" containerName="openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.224501 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="85e488a7-477c-4368-a461-725ccdc6987e" containerName="openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: E0320 08:58:28.224511 5136 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="c9b36af1-10a6-412b-a488-892560533fbc" containerName="oc" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.224517 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b36af1-10a6-412b-a488-892560533fbc" containerName="oc" Mar 20 08:58:28 crc kubenswrapper[5136]: E0320 08:58:28.224527 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerName="extract-utilities" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.224533 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerName="extract-utilities" Mar 20 08:58:28 crc kubenswrapper[5136]: E0320 08:58:28.224546 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" containerName="registry-server" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.224552 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" containerName="registry-server" Mar 20 08:58:28 crc kubenswrapper[5136]: E0320 08:58:28.224572 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerName="extract-content" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.224578 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerName="extract-content" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.224765 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerName="registry-server" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.224779 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="85e488a7-477c-4368-a461-725ccdc6987e" containerName="openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.224789 5136 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c9b36af1-10a6-412b-a488-892560533fbc" containerName="oc" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.224799 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" containerName="registry-server" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.225781 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.236382 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.254173 5136 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="85e488a7-477c-4368-a461-725ccdc6987e" podUID="9cefd58c-a889-4893-aa87-b106eae1c7ad" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.306957 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9cefd58c-a889-4893-aa87-b106eae1c7ad-openstack-config-secret\") pod \"openstackclient\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " pod="openstack/openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.307002 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cefd58c-a889-4893-aa87-b106eae1c7ad-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " pod="openstack/openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.307042 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4j4n\" (UniqueName: \"kubernetes.io/projected/9cefd58c-a889-4893-aa87-b106eae1c7ad-kube-api-access-w4j4n\") pod \"openstackclient\" (UID: 
\"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " pod="openstack/openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.307499 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9cefd58c-a889-4893-aa87-b106eae1c7ad-openstack-config\") pod \"openstackclient\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " pod="openstack/openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.408684 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4j4n\" (UniqueName: \"kubernetes.io/projected/9cefd58c-a889-4893-aa87-b106eae1c7ad-kube-api-access-w4j4n\") pod \"openstackclient\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " pod="openstack/openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.408809 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9cefd58c-a889-4893-aa87-b106eae1c7ad-openstack-config\") pod \"openstackclient\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " pod="openstack/openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.408879 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9cefd58c-a889-4893-aa87-b106eae1c7ad-openstack-config-secret\") pod \"openstackclient\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " pod="openstack/openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.408899 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cefd58c-a889-4893-aa87-b106eae1c7ad-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " pod="openstack/openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 
08:58:28.409794 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9cefd58c-a889-4893-aa87-b106eae1c7ad-openstack-config\") pod \"openstackclient\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " pod="openstack/openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.411023 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" path="/var/lib/kubelet/pods/fbbd891d-c8eb-404c-8255-2a3bba4035ee/volumes" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.414532 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9cefd58c-a889-4893-aa87-b106eae1c7ad-openstack-config-secret\") pod \"openstackclient\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " pod="openstack/openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.419564 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cefd58c-a889-4893-aa87-b106eae1c7ad-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " pod="openstack/openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.432502 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4j4n\" (UniqueName: \"kubernetes.io/projected/9cefd58c-a889-4893-aa87-b106eae1c7ad-kube-api-access-w4j4n\") pod \"openstackclient\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " pod="openstack/openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.547170 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 20 08:58:29 crc kubenswrapper[5136]: I0320 08:58:29.055017 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 20 08:58:29 crc kubenswrapper[5136]: W0320 08:58:29.055284 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cefd58c_a889_4893_aa87_b106eae1c7ad.slice/crio-27f5cb228076724219a2f0fe834d5acfadd46e315bf55e73db187a6092a16d5c WatchSource:0}: Error finding container 27f5cb228076724219a2f0fe834d5acfadd46e315bf55e73db187a6092a16d5c: Status 404 returned error can't find the container with id 27f5cb228076724219a2f0fe834d5acfadd46e315bf55e73db187a6092a16d5c Mar 20 08:58:29 crc kubenswrapper[5136]: I0320 08:58:29.429725 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9cefd58c-a889-4893-aa87-b106eae1c7ad","Type":"ContainerStarted","Data":"80cd4447da5c15081099a2322e91b181cd5f6d15be6bc9e16f91a567a1b6f423"} Mar 20 08:58:29 crc kubenswrapper[5136]: I0320 08:58:29.430059 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9cefd58c-a889-4893-aa87-b106eae1c7ad","Type":"ContainerStarted","Data":"27f5cb228076724219a2f0fe834d5acfadd46e315bf55e73db187a6092a16d5c"} Mar 20 08:58:29 crc kubenswrapper[5136]: I0320 08:58:29.447734 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.447714836 podStartE2EDuration="1.447714836s" podCreationTimestamp="2026-03-20 08:58:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:58:29.444676811 +0000 UTC m=+7741.703987962" watchObservedRunningTime="2026-03-20 08:58:29.447714836 +0000 UTC m=+7741.707025987" Mar 20 08:58:29 crc kubenswrapper[5136]: I0320 08:58:29.823700 5136 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 08:58:29 crc kubenswrapper[5136]: I0320 08:58:29.824002 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="prometheus" containerID="cri-o://c4592edf336fdf2500951f3bf6021b40ab54c666d60df8a359a5c1fd53d8e319" gracePeriod=600 Mar 20 08:58:29 crc kubenswrapper[5136]: I0320 08:58:29.824160 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="thanos-sidecar" containerID="cri-o://900d4676bbda872999819d15b70a4cf6ea3c9fa4b3672549e4f5c0f54e67afed" gracePeriod=600 Mar 20 08:58:29 crc kubenswrapper[5136]: I0320 08:58:29.824160 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="config-reloader" containerID="cri-o://4558c33f4de2c378dedbd4356bb40a205875151331a5b77dc10d84ca3825805f" gracePeriod=600 Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.439671 5136 generic.go:334] "Generic (PLEG): container finished" podID="85e488a7-477c-4368-a461-725ccdc6987e" containerID="c0b4485ed3a40b504bf79337010ee963ceceae4d07ffa4c5f67bc76669810e3e" exitCode=137 Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.442707 5136 generic.go:334] "Generic (PLEG): container finished" podID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerID="900d4676bbda872999819d15b70a4cf6ea3c9fa4b3672549e4f5c0f54e67afed" exitCode=0 Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.442733 5136 generic.go:334] "Generic (PLEG): container finished" podID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerID="4558c33f4de2c378dedbd4356bb40a205875151331a5b77dc10d84ca3825805f" exitCode=0 Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.442742 5136 
generic.go:334] "Generic (PLEG): container finished" podID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerID="c4592edf336fdf2500951f3bf6021b40ab54c666d60df8a359a5c1fd53d8e319" exitCode=0 Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.442741 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f27cf85-7e28-48aa-b93a-0647c59a7dcc","Type":"ContainerDied","Data":"900d4676bbda872999819d15b70a4cf6ea3c9fa4b3672549e4f5c0f54e67afed"} Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.442789 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f27cf85-7e28-48aa-b93a-0647c59a7dcc","Type":"ContainerDied","Data":"4558c33f4de2c378dedbd4356bb40a205875151331a5b77dc10d84ca3825805f"} Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.442803 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f27cf85-7e28-48aa-b93a-0647c59a7dcc","Type":"ContainerDied","Data":"c4592edf336fdf2500951f3bf6021b40ab54c666d60df8a359a5c1fd53d8e319"} Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.553831 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.560057 5136 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="85e488a7-477c-4368-a461-725ccdc6987e" podUID="9cefd58c-a889-4893-aa87-b106eae1c7ad" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.653481 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e488a7-477c-4368-a461-725ccdc6987e-combined-ca-bundle\") pod \"85e488a7-477c-4368-a461-725ccdc6987e\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.653558 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/85e488a7-477c-4368-a461-725ccdc6987e-openstack-config-secret\") pod \"85e488a7-477c-4368-a461-725ccdc6987e\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.653593 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgcpd\" (UniqueName: \"kubernetes.io/projected/85e488a7-477c-4368-a461-725ccdc6987e-kube-api-access-tgcpd\") pod \"85e488a7-477c-4368-a461-725ccdc6987e\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.653739 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/85e488a7-477c-4368-a461-725ccdc6987e-openstack-config\") pod \"85e488a7-477c-4368-a461-725ccdc6987e\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.700586 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/85e488a7-477c-4368-a461-725ccdc6987e-kube-api-access-tgcpd" (OuterVolumeSpecName: "kube-api-access-tgcpd") pod "85e488a7-477c-4368-a461-725ccdc6987e" (UID: "85e488a7-477c-4368-a461-725ccdc6987e"). InnerVolumeSpecName "kube-api-access-tgcpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.782456 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgcpd\" (UniqueName: \"kubernetes.io/projected/85e488a7-477c-4368-a461-725ccdc6987e-kube-api-access-tgcpd\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.803631 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85e488a7-477c-4368-a461-725ccdc6987e-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "85e488a7-477c-4368-a461-725ccdc6987e" (UID: "85e488a7-477c-4368-a461-725ccdc6987e"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.832030 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e488a7-477c-4368-a461-725ccdc6987e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85e488a7-477c-4368-a461-725ccdc6987e" (UID: "85e488a7-477c-4368-a461-725ccdc6987e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.870740 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e488a7-477c-4368-a461-725ccdc6987e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "85e488a7-477c-4368-a461-725ccdc6987e" (UID: "85e488a7-477c-4368-a461-725ccdc6987e"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.887725 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e488a7-477c-4368-a461-725ccdc6987e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.887765 5136 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/85e488a7-477c-4368-a461-725ccdc6987e-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.887787 5136 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/85e488a7-477c-4368-a461-725ccdc6987e-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.940607 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.988705 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-tls-assets\") pod \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.988769 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-config\") pod \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.989003 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") pod \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.989029 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-1\") pod \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.989080 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-thanos-prometheus-http-client-file\") pod \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.989112 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-config-out\") pod \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.989177 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-web-config\") pod \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.989221 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdd45\" (UniqueName: \"kubernetes.io/projected/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-kube-api-access-mdd45\") pod \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\" (UID: 
\"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.989251 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-2\") pod \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.989322 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-0\") pod \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.990843 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "9f27cf85-7e28-48aa-b93a-0647c59a7dcc" (UID: "9f27cf85-7e28-48aa-b93a-0647c59a7dcc"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.994303 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "9f27cf85-7e28-48aa-b93a-0647c59a7dcc" (UID: "9f27cf85-7e28-48aa-b93a-0647c59a7dcc"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.995701 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "9f27cf85-7e28-48aa-b93a-0647c59a7dcc" (UID: "9f27cf85-7e28-48aa-b93a-0647c59a7dcc"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.996153 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-config-out" (OuterVolumeSpecName: "config-out") pod "9f27cf85-7e28-48aa-b93a-0647c59a7dcc" (UID: "9f27cf85-7e28-48aa-b93a-0647c59a7dcc"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.996289 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-config" (OuterVolumeSpecName: "config") pod "9f27cf85-7e28-48aa-b93a-0647c59a7dcc" (UID: "9f27cf85-7e28-48aa-b93a-0647c59a7dcc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.996677 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-kube-api-access-mdd45" (OuterVolumeSpecName: "kube-api-access-mdd45") pod "9f27cf85-7e28-48aa-b93a-0647c59a7dcc" (UID: "9f27cf85-7e28-48aa-b93a-0647c59a7dcc"). InnerVolumeSpecName "kube-api-access-mdd45". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.996793 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "9f27cf85-7e28-48aa-b93a-0647c59a7dcc" (UID: "9f27cf85-7e28-48aa-b93a-0647c59a7dcc"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.996875 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "9f27cf85-7e28-48aa-b93a-0647c59a7dcc" (UID: "9f27cf85-7e28-48aa-b93a-0647c59a7dcc"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.024156 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-web-config" (OuterVolumeSpecName: "web-config") pod "9f27cf85-7e28-48aa-b93a-0647c59a7dcc" (UID: "9f27cf85-7e28-48aa-b93a-0647c59a7dcc"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.024196 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "9f27cf85-7e28-48aa-b93a-0647c59a7dcc" (UID: "9f27cf85-7e28-48aa-b93a-0647c59a7dcc"). InnerVolumeSpecName "pvc-d394ecb7-8fc1-4066-a753-5896ab167a34". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.095085 5136 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-tls-assets\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.095123 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.095177 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") on node \"crc\" " Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.095194 5136 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.095212 5136 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.104912 5136 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-config-out\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.104926 5136 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-web-config\") on node \"crc\" 
DevicePath \"\"" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.104936 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdd45\" (UniqueName: \"kubernetes.io/projected/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-kube-api-access-mdd45\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.104947 5136 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.104956 5136 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.124027 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.124373 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d394ecb7-8fc1-4066-a753-5896ab167a34" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34") on node "crc" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.206724 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.453398 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.453403 5136 scope.go:117] "RemoveContainer" containerID="c0b4485ed3a40b504bf79337010ee963ceceae4d07ffa4c5f67bc76669810e3e" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.458155 5136 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="85e488a7-477c-4368-a461-725ccdc6987e" podUID="9cefd58c-a889-4893-aa87-b106eae1c7ad" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.461312 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f27cf85-7e28-48aa-b93a-0647c59a7dcc","Type":"ContainerDied","Data":"4ab763507cd2c9da05be42fe8e977f93c3eb5200fdec5c434fb058393a4e3e30"} Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.461427 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.535104 5136 scope.go:117] "RemoveContainer" containerID="900d4676bbda872999819d15b70a4cf6ea3c9fa4b3672549e4f5c0f54e67afed" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.543206 5136 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="85e488a7-477c-4368-a461-725ccdc6987e" podUID="9cefd58c-a889-4893-aa87-b106eae1c7ad" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.543484 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.550138 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.578107 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 08:58:31 crc kubenswrapper[5136]: 
E0320 08:58:31.578625 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="init-config-reloader" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.578638 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="init-config-reloader" Mar 20 08:58:31 crc kubenswrapper[5136]: E0320 08:58:31.578699 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="thanos-sidecar" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.578709 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="thanos-sidecar" Mar 20 08:58:31 crc kubenswrapper[5136]: E0320 08:58:31.578739 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="prometheus" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.578746 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="prometheus" Mar 20 08:58:31 crc kubenswrapper[5136]: E0320 08:58:31.578785 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="config-reloader" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.578792 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="config-reloader" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.579038 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="thanos-sidecar" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.579090 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="config-reloader" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 
08:58:31.579104 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="prometheus" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.579776 5136 scope.go:117] "RemoveContainer" containerID="4558c33f4de2c378dedbd4356bb40a205875151331a5b77dc10d84ca3825805f" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.581366 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.589184 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.591121 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.591286 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.591313 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.591424 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.591609 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-jt99d" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.591763 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.592461 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Mar 20 
08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.607461 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.612439 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.638349 5136 scope.go:117] "RemoveContainer" containerID="c4592edf336fdf2500951f3bf6021b40ab54c666d60df8a359a5c1fd53d8e319" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.685906 5136 scope.go:117] "RemoveContainer" containerID="0f81420fdec828e5dce02cec8a66a0c6cd8324009373b1662fde78097754c972" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.741139 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.741360 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.741409 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: 
\"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.741449 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.741473 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.741500 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.741782 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62847\" (UniqueName: \"kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-kube-api-access-62847\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.741875 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.741952 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.742691 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.742750 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.742784 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 
08:58:31.742875 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.844692 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62847\" (UniqueName: \"kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-kube-api-access-62847\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.844732 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.844761 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.844798 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") 
" pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.844857 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.844879 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.844901 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.844932 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.844956 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.845003 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.845044 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.845070 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.845095 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.846659 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.847053 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.847271 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.851362 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.851891 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " 
pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.852054 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.852300 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.852333 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ad804f31e72686a367ca365b9ecb0de79de25c176ca5187be7f65bd43ec38926/globalmount\"" pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.861297 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.861233 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.862520 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.863187 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.863991 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62847\" (UniqueName: \"kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-kube-api-access-62847\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.867283 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.889966 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") pod \"prometheus-metric-storage-0\" (UID: 
\"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.957241 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:32 crc kubenswrapper[5136]: I0320 08:58:32.407071 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85e488a7-477c-4368-a461-725ccdc6987e" path="/var/lib/kubelet/pods/85e488a7-477c-4368-a461-725ccdc6987e/volumes" Mar 20 08:58:32 crc kubenswrapper[5136]: I0320 08:58:32.408169 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" path="/var/lib/kubelet/pods/9f27cf85-7e28-48aa-b93a-0647c59a7dcc/volumes" Mar 20 08:58:32 crc kubenswrapper[5136]: I0320 08:58:32.439240 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 08:58:32 crc kubenswrapper[5136]: I0320 08:58:32.483471 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5271b0d-ac1b-480c-b4b8-3b634246ae62","Type":"ContainerStarted","Data":"441fbbe54f0f16d0c91d190250a9aea863086641a75258b3146f19278093050a"} Mar 20 08:58:33 crc kubenswrapper[5136]: I0320 08:58:33.396402 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:58:33 crc kubenswrapper[5136]: E0320 08:58:33.397130 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:58:34 crc kubenswrapper[5136]: I0320 08:58:34.054800 5136 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-vgnkq"] Mar 20 08:58:34 crc kubenswrapper[5136]: I0320 08:58:34.063493 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b8c9-account-create-update-8v2dt"] Mar 20 08:58:34 crc kubenswrapper[5136]: I0320 08:58:34.072385 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-vgnkq"] Mar 20 08:58:34 crc kubenswrapper[5136]: I0320 08:58:34.083116 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b8c9-account-create-update-8v2dt"] Mar 20 08:58:34 crc kubenswrapper[5136]: I0320 08:58:34.415935 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63d0704b-80fd-44fe-9007-2971cc8a6cf6" path="/var/lib/kubelet/pods/63d0704b-80fd-44fe-9007-2971cc8a6cf6/volumes" Mar 20 08:58:34 crc kubenswrapper[5136]: I0320 08:58:34.416730 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0863275-620b-4bea-a747-135c323ebb6f" path="/var/lib/kubelet/pods/f0863275-620b-4bea-a747-135c323ebb6f/volumes" Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.143612 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.145833 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.148866 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.163117 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.169428 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.239710 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0" Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.239833 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc92a2e9-70bb-400b-ab37-3e17b334a8de-run-httpd\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0" Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.239874 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-config-data\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0" Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.239932 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " 
pod="openstack/ceilometer-0" Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.239961 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-scripts\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0" Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.239994 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc92a2e9-70bb-400b-ab37-3e17b334a8de-log-httpd\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0" Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.240068 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gxrt\" (UniqueName: \"kubernetes.io/projected/dc92a2e9-70bb-400b-ab37-3e17b334a8de-kube-api-access-8gxrt\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0" Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.341267 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-scripts\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0" Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.342214 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc92a2e9-70bb-400b-ab37-3e17b334a8de-log-httpd\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0" Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.342319 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8gxrt\" (UniqueName: \"kubernetes.io/projected/dc92a2e9-70bb-400b-ab37-3e17b334a8de-kube-api-access-8gxrt\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0" Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.342369 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0" Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.342464 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc92a2e9-70bb-400b-ab37-3e17b334a8de-run-httpd\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0" Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.342506 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-config-data\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0" Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.342571 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0" Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.342735 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc92a2e9-70bb-400b-ab37-3e17b334a8de-log-httpd\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0" Mar 20 08:58:35 
crc kubenswrapper[5136]: I0320 08:58:35.343268 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc92a2e9-70bb-400b-ab37-3e17b334a8de-run-httpd\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0" Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.347394 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0" Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.347678 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-scripts\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0" Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.348054 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-config-data\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0" Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.348408 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0" Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.361583 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gxrt\" (UniqueName: \"kubernetes.io/projected/dc92a2e9-70bb-400b-ab37-3e17b334a8de-kube-api-access-8gxrt\") pod \"ceilometer-0\" (UID: 
\"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0" Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.466706 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.521754 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5271b0d-ac1b-480c-b4b8-3b634246ae62","Type":"ContainerStarted","Data":"9a1c12a00febf49412d459684283c5a7557c491ef07270676850c0c1b6e79a69"} Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.986141 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:58:35 crc kubenswrapper[5136]: W0320 08:58:35.996124 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc92a2e9_70bb_400b_ab37_3e17b334a8de.slice/crio-e06e56d6f6d6b7a4d88b163ff2bdce81afa10713d484b4f149a44013355b9963 WatchSource:0}: Error finding container e06e56d6f6d6b7a4d88b163ff2bdce81afa10713d484b4f149a44013355b9963: Status 404 returned error can't find the container with id e06e56d6f6d6b7a4d88b163ff2bdce81afa10713d484b4f149a44013355b9963 Mar 20 08:58:36 crc kubenswrapper[5136]: I0320 08:58:36.533764 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc92a2e9-70bb-400b-ab37-3e17b334a8de","Type":"ContainerStarted","Data":"e06e56d6f6d6b7a4d88b163ff2bdce81afa10713d484b4f149a44013355b9963"} Mar 20 08:58:40 crc kubenswrapper[5136]: I0320 08:58:40.528326 5136 scope.go:117] "RemoveContainer" containerID="ba829091226a089834672fdb8aaa0264ffcab6218d4874fe20d15ed41e821de5" Mar 20 08:58:40 crc kubenswrapper[5136]: I0320 08:58:40.574066 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"dc92a2e9-70bb-400b-ab37-3e17b334a8de","Type":"ContainerStarted","Data":"58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9"} Mar 20 08:58:40 crc kubenswrapper[5136]: I0320 08:58:40.594948 5136 scope.go:117] "RemoveContainer" containerID="02bf2fddb0787ba56f7a7d4d2929f25e0b16aff46d2b34aac1bc69f87f328612" Mar 20 08:58:40 crc kubenswrapper[5136]: I0320 08:58:40.715192 5136 scope.go:117] "RemoveContainer" containerID="6d85db0ede2cb37b721e22824a2dda96a152a59cfb86afea2b68c0eedbe79e58" Mar 20 08:58:41 crc kubenswrapper[5136]: I0320 08:58:41.584921 5136 generic.go:334] "Generic (PLEG): container finished" podID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerID="9a1c12a00febf49412d459684283c5a7557c491ef07270676850c0c1b6e79a69" exitCode=0 Mar 20 08:58:41 crc kubenswrapper[5136]: I0320 08:58:41.585006 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5271b0d-ac1b-480c-b4b8-3b634246ae62","Type":"ContainerDied","Data":"9a1c12a00febf49412d459684283c5a7557c491ef07270676850c0c1b6e79a69"} Mar 20 08:58:41 crc kubenswrapper[5136]: I0320 08:58:41.591278 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc92a2e9-70bb-400b-ab37-3e17b334a8de","Type":"ContainerStarted","Data":"7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192"} Mar 20 08:58:42 crc kubenswrapper[5136]: I0320 08:58:42.601972 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5271b0d-ac1b-480c-b4b8-3b634246ae62","Type":"ContainerStarted","Data":"10b1e7850200894bb4a977e72087deccfe8ba698c99b51dc2f4eca430f875e7b"} Mar 20 08:58:42 crc kubenswrapper[5136]: I0320 08:58:42.604512 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc92a2e9-70bb-400b-ab37-3e17b334a8de","Type":"ContainerStarted","Data":"3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59"} Mar 20 
08:58:44 crc kubenswrapper[5136]: I0320 08:58:44.643167 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc92a2e9-70bb-400b-ab37-3e17b334a8de","Type":"ContainerStarted","Data":"64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3"} Mar 20 08:58:44 crc kubenswrapper[5136]: I0320 08:58:44.643788 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 20 08:58:44 crc kubenswrapper[5136]: I0320 08:58:44.673365 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9891667210000001 podStartE2EDuration="9.673339005s" podCreationTimestamp="2026-03-20 08:58:35 +0000 UTC" firstStartedPulling="2026-03-20 08:58:36.001542124 +0000 UTC m=+7748.260853265" lastFinishedPulling="2026-03-20 08:58:43.685714398 +0000 UTC m=+7755.945025549" observedRunningTime="2026-03-20 08:58:44.672130238 +0000 UTC m=+7756.931441399" watchObservedRunningTime="2026-03-20 08:58:44.673339005 +0000 UTC m=+7756.932650156" Mar 20 08:58:45 crc kubenswrapper[5136]: I0320 08:58:45.659007 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5271b0d-ac1b-480c-b4b8-3b634246ae62","Type":"ContainerStarted","Data":"58ad47341d36382588ada0b35a93996f0bd176fcc8c2283488644a65072e6d6a"} Mar 20 08:58:45 crc kubenswrapper[5136]: I0320 08:58:45.659337 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5271b0d-ac1b-480c-b4b8-3b634246ae62","Type":"ContainerStarted","Data":"22f88ba3ba68ee1437a03f43cb79cd30dcc82808fe8518d87eaa412f688babdc"} Mar 20 08:58:45 crc kubenswrapper[5136]: I0320 08:58:45.704320 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=14.704299894 podStartE2EDuration="14.704299894s" podCreationTimestamp="2026-03-20 08:58:31 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:58:45.694101468 +0000 UTC m=+7757.953412629" watchObservedRunningTime="2026-03-20 08:58:45.704299894 +0000 UTC m=+7757.963611055" Mar 20 08:58:46 crc kubenswrapper[5136]: I0320 08:58:46.397574 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:58:46 crc kubenswrapper[5136]: E0320 08:58:46.397986 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:58:46 crc kubenswrapper[5136]: I0320 08:58:46.958351 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:46 crc kubenswrapper[5136]: I0320 08:58:46.959062 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:46 crc kubenswrapper[5136]: I0320 08:58:46.974377 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:47 crc kubenswrapper[5136]: I0320 08:58:47.690990 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:48 crc kubenswrapper[5136]: I0320 08:58:48.843796 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-w7sqw"] Mar 20 08:58:48 crc kubenswrapper[5136]: I0320 08:58:48.845665 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-w7sqw" Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.606486 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70745a35-fe6f-4248-ac87-970763afe00e-operator-scripts\") pod \"aodh-db-create-w7sqw\" (UID: \"70745a35-fe6f-4248-ac87-970763afe00e\") " pod="openstack/aodh-db-create-w7sqw" Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.606543 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gzxx\" (UniqueName: \"kubernetes.io/projected/70745a35-fe6f-4248-ac87-970763afe00e-kube-api-access-9gzxx\") pod \"aodh-db-create-w7sqw\" (UID: \"70745a35-fe6f-4248-ac87-970763afe00e\") " pod="openstack/aodh-db-create-w7sqw" Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.682667 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-35ea-account-create-update-6jb9f"] Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.697431 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-35ea-account-create-update-6jb9f" Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.700578 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.711642 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzthn\" (UniqueName: \"kubernetes.io/projected/a9b2a0f1-96d1-4edc-a219-60194a2bf4b9-kube-api-access-bzthn\") pod \"aodh-35ea-account-create-update-6jb9f\" (UID: \"a9b2a0f1-96d1-4edc-a219-60194a2bf4b9\") " pod="openstack/aodh-35ea-account-create-update-6jb9f" Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.711758 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70745a35-fe6f-4248-ac87-970763afe00e-operator-scripts\") pod \"aodh-db-create-w7sqw\" (UID: \"70745a35-fe6f-4248-ac87-970763afe00e\") " pod="openstack/aodh-db-create-w7sqw" Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.711793 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gzxx\" (UniqueName: \"kubernetes.io/projected/70745a35-fe6f-4248-ac87-970763afe00e-kube-api-access-9gzxx\") pod \"aodh-db-create-w7sqw\" (UID: \"70745a35-fe6f-4248-ac87-970763afe00e\") " pod="openstack/aodh-db-create-w7sqw" Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.711873 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9b2a0f1-96d1-4edc-a219-60194a2bf4b9-operator-scripts\") pod \"aodh-35ea-account-create-update-6jb9f\" (UID: \"a9b2a0f1-96d1-4edc-a219-60194a2bf4b9\") " pod="openstack/aodh-35ea-account-create-update-6jb9f" Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.739648 5136 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70745a35-fe6f-4248-ac87-970763afe00e-operator-scripts\") pod \"aodh-db-create-w7sqw\" (UID: \"70745a35-fe6f-4248-ac87-970763afe00e\") " pod="openstack/aodh-db-create-w7sqw" Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.762687 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gzxx\" (UniqueName: \"kubernetes.io/projected/70745a35-fe6f-4248-ac87-970763afe00e-kube-api-access-9gzxx\") pod \"aodh-db-create-w7sqw\" (UID: \"70745a35-fe6f-4248-ac87-970763afe00e\") " pod="openstack/aodh-db-create-w7sqw" Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.766393 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-35ea-account-create-update-6jb9f"] Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.773958 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-w7sqw" Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.780025 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-w7sqw"] Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.813954 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzthn\" (UniqueName: \"kubernetes.io/projected/a9b2a0f1-96d1-4edc-a219-60194a2bf4b9-kube-api-access-bzthn\") pod \"aodh-35ea-account-create-update-6jb9f\" (UID: \"a9b2a0f1-96d1-4edc-a219-60194a2bf4b9\") " pod="openstack/aodh-35ea-account-create-update-6jb9f" Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.814086 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9b2a0f1-96d1-4edc-a219-60194a2bf4b9-operator-scripts\") pod \"aodh-35ea-account-create-update-6jb9f\" (UID: \"a9b2a0f1-96d1-4edc-a219-60194a2bf4b9\") " pod="openstack/aodh-35ea-account-create-update-6jb9f" Mar 20 08:58:49 crc 
kubenswrapper[5136]: I0320 08:58:49.814745 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9b2a0f1-96d1-4edc-a219-60194a2bf4b9-operator-scripts\") pod \"aodh-35ea-account-create-update-6jb9f\" (UID: \"a9b2a0f1-96d1-4edc-a219-60194a2bf4b9\") " pod="openstack/aodh-35ea-account-create-update-6jb9f" Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.834765 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzthn\" (UniqueName: \"kubernetes.io/projected/a9b2a0f1-96d1-4edc-a219-60194a2bf4b9-kube-api-access-bzthn\") pod \"aodh-35ea-account-create-update-6jb9f\" (UID: \"a9b2a0f1-96d1-4edc-a219-60194a2bf4b9\") " pod="openstack/aodh-35ea-account-create-update-6jb9f" Mar 20 08:58:50 crc kubenswrapper[5136]: I0320 08:58:50.055836 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-35ea-account-create-update-6jb9f" Mar 20 08:58:50 crc kubenswrapper[5136]: I0320 08:58:50.354597 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-w7sqw"] Mar 20 08:58:50 crc kubenswrapper[5136]: W0320 08:58:50.712628 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9b2a0f1_96d1_4edc_a219_60194a2bf4b9.slice/crio-4270655960bcff0b37aaf25e5248214690bc52a37037dbd7f629795491d07270 WatchSource:0}: Error finding container 4270655960bcff0b37aaf25e5248214690bc52a37037dbd7f629795491d07270: Status 404 returned error can't find the container with id 4270655960bcff0b37aaf25e5248214690bc52a37037dbd7f629795491d07270 Mar 20 08:58:50 crc kubenswrapper[5136]: I0320 08:58:50.715982 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-35ea-account-create-update-6jb9f"] Mar 20 08:58:50 crc kubenswrapper[5136]: I0320 08:58:50.723924 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/aodh-db-create-w7sqw" event={"ID":"70745a35-fe6f-4248-ac87-970763afe00e","Type":"ContainerStarted","Data":"3a4f7e90e6acf592c84321f18597eeea1a3c43546eaea8fecf69996c5f79ba99"} Mar 20 08:58:50 crc kubenswrapper[5136]: I0320 08:58:50.723977 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-w7sqw" event={"ID":"70745a35-fe6f-4248-ac87-970763afe00e","Type":"ContainerStarted","Data":"552f2828cf75b55f8bdebd9a56db3e613c667737eda344a9f955eb21c4de5edf"} Mar 20 08:58:50 crc kubenswrapper[5136]: I0320 08:58:50.746673 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-w7sqw" podStartSLOduration=2.746651677 podStartE2EDuration="2.746651677s" podCreationTimestamp="2026-03-20 08:58:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:58:50.739342471 +0000 UTC m=+7762.998653622" watchObservedRunningTime="2026-03-20 08:58:50.746651677 +0000 UTC m=+7763.005962828" Mar 20 08:58:51 crc kubenswrapper[5136]: I0320 08:58:51.734451 5136 generic.go:334] "Generic (PLEG): container finished" podID="70745a35-fe6f-4248-ac87-970763afe00e" containerID="3a4f7e90e6acf592c84321f18597eeea1a3c43546eaea8fecf69996c5f79ba99" exitCode=0 Mar 20 08:58:51 crc kubenswrapper[5136]: I0320 08:58:51.734811 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-w7sqw" event={"ID":"70745a35-fe6f-4248-ac87-970763afe00e","Type":"ContainerDied","Data":"3a4f7e90e6acf592c84321f18597eeea1a3c43546eaea8fecf69996c5f79ba99"} Mar 20 08:58:51 crc kubenswrapper[5136]: I0320 08:58:51.736526 5136 generic.go:334] "Generic (PLEG): container finished" podID="a9b2a0f1-96d1-4edc-a219-60194a2bf4b9" containerID="bbe438fbefca46d6264b55b57938c854859588f624d630a720b3f84f596f758f" exitCode=0 Mar 20 08:58:51 crc kubenswrapper[5136]: I0320 08:58:51.736568 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/aodh-35ea-account-create-update-6jb9f" event={"ID":"a9b2a0f1-96d1-4edc-a219-60194a2bf4b9","Type":"ContainerDied","Data":"bbe438fbefca46d6264b55b57938c854859588f624d630a720b3f84f596f758f"} Mar 20 08:58:51 crc kubenswrapper[5136]: I0320 08:58:51.736593 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-35ea-account-create-update-6jb9f" event={"ID":"a9b2a0f1-96d1-4edc-a219-60194a2bf4b9","Type":"ContainerStarted","Data":"4270655960bcff0b37aaf25e5248214690bc52a37037dbd7f629795491d07270"} Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.156962 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-w7sqw" Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.160576 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-35ea-account-create-update-6jb9f" Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.297778 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gzxx\" (UniqueName: \"kubernetes.io/projected/70745a35-fe6f-4248-ac87-970763afe00e-kube-api-access-9gzxx\") pod \"70745a35-fe6f-4248-ac87-970763afe00e\" (UID: \"70745a35-fe6f-4248-ac87-970763afe00e\") " Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.297905 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzthn\" (UniqueName: \"kubernetes.io/projected/a9b2a0f1-96d1-4edc-a219-60194a2bf4b9-kube-api-access-bzthn\") pod \"a9b2a0f1-96d1-4edc-a219-60194a2bf4b9\" (UID: \"a9b2a0f1-96d1-4edc-a219-60194a2bf4b9\") " Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.298023 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9b2a0f1-96d1-4edc-a219-60194a2bf4b9-operator-scripts\") pod \"a9b2a0f1-96d1-4edc-a219-60194a2bf4b9\" (UID: \"a9b2a0f1-96d1-4edc-a219-60194a2bf4b9\") 
" Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.298060 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70745a35-fe6f-4248-ac87-970763afe00e-operator-scripts\") pod \"70745a35-fe6f-4248-ac87-970763afe00e\" (UID: \"70745a35-fe6f-4248-ac87-970763afe00e\") " Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.298970 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70745a35-fe6f-4248-ac87-970763afe00e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "70745a35-fe6f-4248-ac87-970763afe00e" (UID: "70745a35-fe6f-4248-ac87-970763afe00e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.299177 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9b2a0f1-96d1-4edc-a219-60194a2bf4b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9b2a0f1-96d1-4edc-a219-60194a2bf4b9" (UID: "a9b2a0f1-96d1-4edc-a219-60194a2bf4b9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.303245 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9b2a0f1-96d1-4edc-a219-60194a2bf4b9-kube-api-access-bzthn" (OuterVolumeSpecName: "kube-api-access-bzthn") pod "a9b2a0f1-96d1-4edc-a219-60194a2bf4b9" (UID: "a9b2a0f1-96d1-4edc-a219-60194a2bf4b9"). InnerVolumeSpecName "kube-api-access-bzthn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.303790 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70745a35-fe6f-4248-ac87-970763afe00e-kube-api-access-9gzxx" (OuterVolumeSpecName: "kube-api-access-9gzxx") pod "70745a35-fe6f-4248-ac87-970763afe00e" (UID: "70745a35-fe6f-4248-ac87-970763afe00e"). InnerVolumeSpecName "kube-api-access-9gzxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.400614 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9b2a0f1-96d1-4edc-a219-60194a2bf4b9-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.400650 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70745a35-fe6f-4248-ac87-970763afe00e-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.400660 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gzxx\" (UniqueName: \"kubernetes.io/projected/70745a35-fe6f-4248-ac87-970763afe00e-kube-api-access-9gzxx\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.400670 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzthn\" (UniqueName: \"kubernetes.io/projected/a9b2a0f1-96d1-4edc-a219-60194a2bf4b9-kube-api-access-bzthn\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.760664 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-35ea-account-create-update-6jb9f" Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.760659 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-35ea-account-create-update-6jb9f" event={"ID":"a9b2a0f1-96d1-4edc-a219-60194a2bf4b9","Type":"ContainerDied","Data":"4270655960bcff0b37aaf25e5248214690bc52a37037dbd7f629795491d07270"} Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.760881 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4270655960bcff0b37aaf25e5248214690bc52a37037dbd7f629795491d07270" Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.763413 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-w7sqw" event={"ID":"70745a35-fe6f-4248-ac87-970763afe00e","Type":"ContainerDied","Data":"552f2828cf75b55f8bdebd9a56db3e613c667737eda344a9f955eb21c4de5edf"} Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.763476 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-w7sqw" Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.763477 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="552f2828cf75b55f8bdebd9a56db3e613c667737eda344a9f955eb21c4de5edf" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.280082 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-wjckm"] Mar 20 08:58:59 crc kubenswrapper[5136]: E0320 08:58:59.280986 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b2a0f1-96d1-4edc-a219-60194a2bf4b9" containerName="mariadb-account-create-update" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.281006 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b2a0f1-96d1-4edc-a219-60194a2bf4b9" containerName="mariadb-account-create-update" Mar 20 08:58:59 crc kubenswrapper[5136]: E0320 08:58:59.281027 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70745a35-fe6f-4248-ac87-970763afe00e" containerName="mariadb-database-create" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.281035 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="70745a35-fe6f-4248-ac87-970763afe00e" containerName="mariadb-database-create" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.281252 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="70745a35-fe6f-4248-ac87-970763afe00e" containerName="mariadb-database-create" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.281276 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9b2a0f1-96d1-4edc-a219-60194a2bf4b9" containerName="mariadb-account-create-update" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.282122 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-wjckm" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.283678 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.284529 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-4k8x9" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.284577 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.284719 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.289184 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-wjckm"] Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.433066 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvzx8\" (UniqueName: \"kubernetes.io/projected/f9eddce1-1338-489a-b0e9-f008c33fea0f-kube-api-access-vvzx8\") pod \"aodh-db-sync-wjckm\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " pod="openstack/aodh-db-sync-wjckm" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.433160 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-config-data\") pod \"aodh-db-sync-wjckm\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " pod="openstack/aodh-db-sync-wjckm" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.433228 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-combined-ca-bundle\") pod \"aodh-db-sync-wjckm\" (UID: 
\"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " pod="openstack/aodh-db-sync-wjckm" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.433333 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-scripts\") pod \"aodh-db-sync-wjckm\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " pod="openstack/aodh-db-sync-wjckm" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.534894 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-scripts\") pod \"aodh-db-sync-wjckm\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " pod="openstack/aodh-db-sync-wjckm" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.534986 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvzx8\" (UniqueName: \"kubernetes.io/projected/f9eddce1-1338-489a-b0e9-f008c33fea0f-kube-api-access-vvzx8\") pod \"aodh-db-sync-wjckm\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " pod="openstack/aodh-db-sync-wjckm" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.535036 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-config-data\") pod \"aodh-db-sync-wjckm\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " pod="openstack/aodh-db-sync-wjckm" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.535126 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-combined-ca-bundle\") pod \"aodh-db-sync-wjckm\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " pod="openstack/aodh-db-sync-wjckm" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.542604 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-combined-ca-bundle\") pod \"aodh-db-sync-wjckm\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " pod="openstack/aodh-db-sync-wjckm" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.542938 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-config-data\") pod \"aodh-db-sync-wjckm\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " pod="openstack/aodh-db-sync-wjckm" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.545416 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-scripts\") pod \"aodh-db-sync-wjckm\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " pod="openstack/aodh-db-sync-wjckm" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.552121 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvzx8\" (UniqueName: \"kubernetes.io/projected/f9eddce1-1338-489a-b0e9-f008c33fea0f-kube-api-access-vvzx8\") pod \"aodh-db-sync-wjckm\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " pod="openstack/aodh-db-sync-wjckm" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.603232 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-wjckm" Mar 20 08:59:00 crc kubenswrapper[5136]: W0320 08:59:00.036344 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9eddce1_1338_489a_b0e9_f008c33fea0f.slice/crio-4b2049f62c5cb346cd5e6e89c80b8c597ae5f2242aeb69abd7feebc13a61232d WatchSource:0}: Error finding container 4b2049f62c5cb346cd5e6e89c80b8c597ae5f2242aeb69abd7feebc13a61232d: Status 404 returned error can't find the container with id 4b2049f62c5cb346cd5e6e89c80b8c597ae5f2242aeb69abd7feebc13a61232d Mar 20 08:59:00 crc kubenswrapper[5136]: I0320 08:59:00.045913 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-wjckm"] Mar 20 08:59:00 crc kubenswrapper[5136]: I0320 08:59:00.055172 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-mwt5p"] Mar 20 08:59:00 crc kubenswrapper[5136]: I0320 08:59:00.065551 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-mwt5p"] Mar 20 08:59:00 crc kubenswrapper[5136]: I0320 08:59:00.404099 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:59:00 crc kubenswrapper[5136]: E0320 08:59:00.406654 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:59:00 crc kubenswrapper[5136]: I0320 08:59:00.411711 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="695202be-4633-411e-9afe-fd706e1cfbe6" path="/var/lib/kubelet/pods/695202be-4633-411e-9afe-fd706e1cfbe6/volumes" Mar 20 
08:59:00 crc kubenswrapper[5136]: I0320 08:59:00.832435 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-wjckm" event={"ID":"f9eddce1-1338-489a-b0e9-f008c33fea0f","Type":"ContainerStarted","Data":"4b2049f62c5cb346cd5e6e89c80b8c597ae5f2242aeb69abd7feebc13a61232d"} Mar 20 08:59:05 crc kubenswrapper[5136]: I0320 08:59:05.475560 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 20 08:59:08 crc kubenswrapper[5136]: I0320 08:59:08.988854 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 08:59:08 crc kubenswrapper[5136]: I0320 08:59:08.989433 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac" containerName="kube-state-metrics" containerID="cri-o://95977db38ee1f44693cb25b914c16cc8f59c4c6bf618255fd7fb1f5a495e2a76" gracePeriod=30 Mar 20 08:59:09 crc kubenswrapper[5136]: I0320 08:59:09.499647 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 08:59:09 crc kubenswrapper[5136]: I0320 08:59:09.622167 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fc9n\" (UniqueName: \"kubernetes.io/projected/ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac-kube-api-access-6fc9n\") pod \"ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac\" (UID: \"ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac\") " Mar 20 08:59:09 crc kubenswrapper[5136]: I0320 08:59:09.634119 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac-kube-api-access-6fc9n" (OuterVolumeSpecName: "kube-api-access-6fc9n") pod "ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac" (UID: "ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac"). InnerVolumeSpecName "kube-api-access-6fc9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:59:09 crc kubenswrapper[5136]: I0320 08:59:09.724337 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fc9n\" (UniqueName: \"kubernetes.io/projected/ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac-kube-api-access-6fc9n\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.134071 5136 generic.go:334] "Generic (PLEG): container finished" podID="ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac" containerID="95977db38ee1f44693cb25b914c16cc8f59c4c6bf618255fd7fb1f5a495e2a76" exitCode=2 Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.134121 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac","Type":"ContainerDied","Data":"95977db38ee1f44693cb25b914c16cc8f59c4c6bf618255fd7fb1f5a495e2a76"} Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.134156 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac","Type":"ContainerDied","Data":"6ff5903a5cc26ba52c8004bed71ec6d792235b4e0088d5c041ffe70d1e0d7e6a"} Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.134165 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.134177 5136 scope.go:117] "RemoveContainer" containerID="95977db38ee1f44693cb25b914c16cc8f59c4c6bf618255fd7fb1f5a495e2a76" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.159189 5136 scope.go:117] "RemoveContainer" containerID="95977db38ee1f44693cb25b914c16cc8f59c4c6bf618255fd7fb1f5a495e2a76" Mar 20 08:59:10 crc kubenswrapper[5136]: E0320 08:59:10.159670 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95977db38ee1f44693cb25b914c16cc8f59c4c6bf618255fd7fb1f5a495e2a76\": container with ID starting with 95977db38ee1f44693cb25b914c16cc8f59c4c6bf618255fd7fb1f5a495e2a76 not found: ID does not exist" containerID="95977db38ee1f44693cb25b914c16cc8f59c4c6bf618255fd7fb1f5a495e2a76" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.159709 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95977db38ee1f44693cb25b914c16cc8f59c4c6bf618255fd7fb1f5a495e2a76"} err="failed to get container status \"95977db38ee1f44693cb25b914c16cc8f59c4c6bf618255fd7fb1f5a495e2a76\": rpc error: code = NotFound desc = could not find container \"95977db38ee1f44693cb25b914c16cc8f59c4c6bf618255fd7fb1f5a495e2a76\": container with ID starting with 95977db38ee1f44693cb25b914c16cc8f59c4c6bf618255fd7fb1f5a495e2a76 not found: ID does not exist" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.186971 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.203511 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.214179 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 08:59:10 crc kubenswrapper[5136]: E0320 
08:59:10.214676 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac" containerName="kube-state-metrics" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.214692 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac" containerName="kube-state-metrics" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.214966 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac" containerName="kube-state-metrics" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.215806 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.223267 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.249580 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.249741 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.352896 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgdmf\" (UniqueName: \"kubernetes.io/projected/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-api-access-hgdmf\") pod \"kube-state-metrics-0\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.352976 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: 
\"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.353008 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.353337 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.412468 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac" path="/var/lib/kubelet/pods/ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac/volumes" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.455463 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.455523 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 
08:59:10.455621 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.455689 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgdmf\" (UniqueName: \"kubernetes.io/projected/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-api-access-hgdmf\") pod \"kube-state-metrics-0\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.471899 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.472074 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.475914 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.477608 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hgdmf\" (UniqueName: \"kubernetes.io/projected/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-api-access-hgdmf\") pod \"kube-state-metrics-0\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.571457 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.954201 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.956304 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="ceilometer-central-agent" containerID="cri-o://58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9" gracePeriod=30 Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.957258 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="proxy-httpd" containerID="cri-o://64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3" gracePeriod=30 Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.957370 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="sg-core" containerID="cri-o://3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59" gracePeriod=30 Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.957481 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="ceilometer-notification-agent" containerID="cri-o://7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192" 
gracePeriod=30 Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.068246 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.146753 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b","Type":"ContainerStarted","Data":"5287b5546ce2593541455a64057c115d269b5e5f8d4df65c154feacababa85d9"} Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.149440 5136 generic.go:334] "Generic (PLEG): container finished" podID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerID="64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3" exitCode=0 Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.149469 5136 generic.go:334] "Generic (PLEG): container finished" podID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerID="3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59" exitCode=2 Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.149519 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc92a2e9-70bb-400b-ab37-3e17b334a8de","Type":"ContainerDied","Data":"64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3"} Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.149547 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc92a2e9-70bb-400b-ab37-3e17b334a8de","Type":"ContainerDied","Data":"3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59"} Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.844555 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.985781 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gxrt\" (UniqueName: \"kubernetes.io/projected/dc92a2e9-70bb-400b-ab37-3e17b334a8de-kube-api-access-8gxrt\") pod \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.986271 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-config-data\") pod \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.986347 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-combined-ca-bundle\") pod \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.986402 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-scripts\") pod \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.986490 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-sg-core-conf-yaml\") pod \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.986574 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/dc92a2e9-70bb-400b-ab37-3e17b334a8de-log-httpd\") pod \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.986613 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc92a2e9-70bb-400b-ab37-3e17b334a8de-run-httpd\") pod \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.988162 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc92a2e9-70bb-400b-ab37-3e17b334a8de-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dc92a2e9-70bb-400b-ab37-3e17b334a8de" (UID: "dc92a2e9-70bb-400b-ab37-3e17b334a8de"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.988805 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc92a2e9-70bb-400b-ab37-3e17b334a8de-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dc92a2e9-70bb-400b-ab37-3e17b334a8de" (UID: "dc92a2e9-70bb-400b-ab37-3e17b334a8de"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.998845 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-scripts" (OuterVolumeSpecName: "scripts") pod "dc92a2e9-70bb-400b-ab37-3e17b334a8de" (UID: "dc92a2e9-70bb-400b-ab37-3e17b334a8de"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.998921 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc92a2e9-70bb-400b-ab37-3e17b334a8de-kube-api-access-8gxrt" (OuterVolumeSpecName: "kube-api-access-8gxrt") pod "dc92a2e9-70bb-400b-ab37-3e17b334a8de" (UID: "dc92a2e9-70bb-400b-ab37-3e17b334a8de"). InnerVolumeSpecName "kube-api-access-8gxrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.025597 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dc92a2e9-70bb-400b-ab37-3e17b334a8de" (UID: "dc92a2e9-70bb-400b-ab37-3e17b334a8de"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.068774 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc92a2e9-70bb-400b-ab37-3e17b334a8de" (UID: "dc92a2e9-70bb-400b-ab37-3e17b334a8de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.083812 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-config-data" (OuterVolumeSpecName: "config-data") pod "dc92a2e9-70bb-400b-ab37-3e17b334a8de" (UID: "dc92a2e9-70bb-400b-ab37-3e17b334a8de"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.088597 5136 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.088629 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc92a2e9-70bb-400b-ab37-3e17b334a8de-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.088638 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc92a2e9-70bb-400b-ab37-3e17b334a8de-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.088648 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gxrt\" (UniqueName: \"kubernetes.io/projected/dc92a2e9-70bb-400b-ab37-3e17b334a8de-kube-api-access-8gxrt\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.088657 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.088665 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.088673 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.168524 5136 generic.go:334] "Generic 
(PLEG): container finished" podID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerID="7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192" exitCode=0 Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.168563 5136 generic.go:334] "Generic (PLEG): container finished" podID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerID="58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9" exitCode=0 Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.168594 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.168627 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc92a2e9-70bb-400b-ab37-3e17b334a8de","Type":"ContainerDied","Data":"7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192"} Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.168658 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc92a2e9-70bb-400b-ab37-3e17b334a8de","Type":"ContainerDied","Data":"58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9"} Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.168672 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc92a2e9-70bb-400b-ab37-3e17b334a8de","Type":"ContainerDied","Data":"e06e56d6f6d6b7a4d88b163ff2bdce81afa10713d484b4f149a44013355b9963"} Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.168690 5136 scope.go:117] "RemoveContainer" containerID="64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.172744 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b","Type":"ContainerStarted","Data":"aa45ed833f1221b9bb131eddde949a0bc22ff821678a36dab8182db02d897f83"} Mar 20 08:59:12 crc 
kubenswrapper[5136]: I0320 08:59:12.173122 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.193276 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.7664246860000001 podStartE2EDuration="2.19325369s" podCreationTimestamp="2026-03-20 08:59:10 +0000 UTC" firstStartedPulling="2026-03-20 08:59:11.086518626 +0000 UTC m=+7783.345829777" lastFinishedPulling="2026-03-20 08:59:11.51334763 +0000 UTC m=+7783.772658781" observedRunningTime="2026-03-20 08:59:12.191440924 +0000 UTC m=+7784.450752075" watchObservedRunningTime="2026-03-20 08:59:12.19325369 +0000 UTC m=+7784.452564841" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.196267 5136 scope.go:117] "RemoveContainer" containerID="3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.226072 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.240417 5136 scope.go:117] "RemoveContainer" containerID="7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.253125 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.286929 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:12 crc kubenswrapper[5136]: E0320 08:59:12.287549 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="ceilometer-central-agent" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.287642 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="ceilometer-central-agent" Mar 20 
08:59:12 crc kubenswrapper[5136]: E0320 08:59:12.287707 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="ceilometer-notification-agent" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.287757 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="ceilometer-notification-agent" Mar 20 08:59:12 crc kubenswrapper[5136]: E0320 08:59:12.287968 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="sg-core" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.288099 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="sg-core" Mar 20 08:59:12 crc kubenswrapper[5136]: E0320 08:59:12.288190 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="proxy-httpd" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.288268 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="proxy-httpd" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.288548 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="proxy-httpd" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.288752 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="ceilometer-notification-agent" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.288961 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="sg-core" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.289098 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" 
containerName="ceilometer-central-agent" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.288792 5136 scope.go:117] "RemoveContainer" containerID="58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.291540 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.293224 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.296722 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.297018 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.298563 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.317101 5136 scope.go:117] "RemoveContainer" containerID="64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3" Mar 20 08:59:12 crc kubenswrapper[5136]: E0320 08:59:12.317431 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3\": container with ID starting with 64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3 not found: ID does not exist" containerID="64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.317459 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3"} err="failed to get container status 
\"64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3\": rpc error: code = NotFound desc = could not find container \"64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3\": container with ID starting with 64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3 not found: ID does not exist" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.317479 5136 scope.go:117] "RemoveContainer" containerID="3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59" Mar 20 08:59:12 crc kubenswrapper[5136]: E0320 08:59:12.317661 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59\": container with ID starting with 3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59 not found: ID does not exist" containerID="3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.317676 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59"} err="failed to get container status \"3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59\": rpc error: code = NotFound desc = could not find container \"3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59\": container with ID starting with 3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59 not found: ID does not exist" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.317687 5136 scope.go:117] "RemoveContainer" containerID="7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192" Mar 20 08:59:12 crc kubenswrapper[5136]: E0320 08:59:12.317861 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192\": container with ID starting with 7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192 not found: ID does not exist" containerID="7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.317884 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192"} err="failed to get container status \"7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192\": rpc error: code = NotFound desc = could not find container \"7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192\": container with ID starting with 7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192 not found: ID does not exist" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.317899 5136 scope.go:117] "RemoveContainer" containerID="58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9" Mar 20 08:59:12 crc kubenswrapper[5136]: E0320 08:59:12.318067 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9\": container with ID starting with 58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9 not found: ID does not exist" containerID="58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.318088 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9"} err="failed to get container status \"58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9\": rpc error: code = NotFound desc = could not find container \"58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9\": container with ID 
starting with 58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9 not found: ID does not exist" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.318103 5136 scope.go:117] "RemoveContainer" containerID="64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.318258 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3"} err="failed to get container status \"64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3\": rpc error: code = NotFound desc = could not find container \"64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3\": container with ID starting with 64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3 not found: ID does not exist" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.318273 5136 scope.go:117] "RemoveContainer" containerID="3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.318680 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59"} err="failed to get container status \"3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59\": rpc error: code = NotFound desc = could not find container \"3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59\": container with ID starting with 3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59 not found: ID does not exist" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.318707 5136 scope.go:117] "RemoveContainer" containerID="7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.319086 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192"} err="failed to get container status \"7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192\": rpc error: code = NotFound desc = could not find container \"7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192\": container with ID starting with 7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192 not found: ID does not exist" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.319100 5136 scope.go:117] "RemoveContainer" containerID="58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.319254 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9"} err="failed to get container status \"58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9\": rpc error: code = NotFound desc = could not find container \"58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9\": container with ID starting with 58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9 not found: ID does not exist" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.397437 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.397543 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 
20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.397765 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-config-data\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.397884 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4790c11-3203-4f22-958f-a67c1242beb0-log-httpd\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.398004 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-scripts\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.398071 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.398152 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qhqq\" (UniqueName: \"kubernetes.io/projected/d4790c11-3203-4f22-958f-a67c1242beb0-kube-api-access-2qhqq\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.398278 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4790c11-3203-4f22-958f-a67c1242beb0-run-httpd\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.408023 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" path="/var/lib/kubelet/pods/dc92a2e9-70bb-400b-ab37-3e17b334a8de/volumes" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.500663 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4790c11-3203-4f22-958f-a67c1242beb0-run-httpd\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.500728 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.500802 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.500916 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-config-data\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 
08:59:12.500983 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4790c11-3203-4f22-958f-a67c1242beb0-log-httpd\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.501052 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-scripts\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.501097 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.501134 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qhqq\" (UniqueName: \"kubernetes.io/projected/d4790c11-3203-4f22-958f-a67c1242beb0-kube-api-access-2qhqq\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.501557 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4790c11-3203-4f22-958f-a67c1242beb0-run-httpd\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.501588 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4790c11-3203-4f22-958f-a67c1242beb0-log-httpd\") pod \"ceilometer-0\" (UID: 
\"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.506950 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.507265 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-scripts\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.507275 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.507266 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.515561 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-config-data\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.520261 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qhqq\" (UniqueName: 
\"kubernetes.io/projected/d4790c11-3203-4f22-958f-a67c1242beb0-kube-api-access-2qhqq\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.617380 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 08:59:15 crc kubenswrapper[5136]: I0320 08:59:15.203463 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-wjckm" event={"ID":"f9eddce1-1338-489a-b0e9-f008c33fea0f","Type":"ContainerStarted","Data":"456904b2c9b2b9b900c280b5851fc80a26454d886df3722f5e23e7c54d551d62"} Mar 20 08:59:15 crc kubenswrapper[5136]: W0320 08:59:15.230338 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4790c11_3203_4f22_958f_a67c1242beb0.slice/crio-ee0fc0df5610cea45ffcc78247a58835bb3ee685c340c2aef6349fea9aee9ded WatchSource:0}: Error finding container ee0fc0df5610cea45ffcc78247a58835bb3ee685c340c2aef6349fea9aee9ded: Status 404 returned error can't find the container with id ee0fc0df5610cea45ffcc78247a58835bb3ee685c340c2aef6349fea9aee9ded Mar 20 08:59:15 crc kubenswrapper[5136]: I0320 08:59:15.232861 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:15 crc kubenswrapper[5136]: I0320 08:59:15.245768 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-wjckm" podStartSLOduration=1.450293445 podStartE2EDuration="16.245747067s" podCreationTimestamp="2026-03-20 08:58:59 +0000 UTC" firstStartedPulling="2026-03-20 08:59:00.039035422 +0000 UTC m=+7772.298346573" lastFinishedPulling="2026-03-20 08:59:14.834489044 +0000 UTC m=+7787.093800195" observedRunningTime="2026-03-20 08:59:15.220084132 +0000 UTC m=+7787.479395283" watchObservedRunningTime="2026-03-20 08:59:15.245747067 +0000 UTC m=+7787.505058238" Mar 20 08:59:15 crc 
kubenswrapper[5136]: I0320 08:59:15.398368 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:59:15 crc kubenswrapper[5136]: E0320 08:59:15.398932 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:59:16 crc kubenswrapper[5136]: I0320 08:59:16.217347 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4790c11-3203-4f22-958f-a67c1242beb0","Type":"ContainerStarted","Data":"dfb994bb52968a72e193beb61fdb0d2ba31b06ee0870ee3288671db16099e8d8"} Mar 20 08:59:16 crc kubenswrapper[5136]: I0320 08:59:16.218232 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4790c11-3203-4f22-958f-a67c1242beb0","Type":"ContainerStarted","Data":"94bbbcf0d712721420b13672832b679990ab864a6cb813e885e5d906666682e2"} Mar 20 08:59:16 crc kubenswrapper[5136]: I0320 08:59:16.218297 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4790c11-3203-4f22-958f-a67c1242beb0","Type":"ContainerStarted","Data":"ee0fc0df5610cea45ffcc78247a58835bb3ee685c340c2aef6349fea9aee9ded"} Mar 20 08:59:17 crc kubenswrapper[5136]: I0320 08:59:17.227040 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4790c11-3203-4f22-958f-a67c1242beb0","Type":"ContainerStarted","Data":"64db95b7d486d98993a1930605f414bffad7d70821f1b278f87f8d8dfcb81903"} Mar 20 08:59:17 crc kubenswrapper[5136]: I0320 08:59:17.229953 5136 generic.go:334] "Generic (PLEG): container finished" 
podID="f9eddce1-1338-489a-b0e9-f008c33fea0f" containerID="456904b2c9b2b9b900c280b5851fc80a26454d886df3722f5e23e7c54d551d62" exitCode=0 Mar 20 08:59:17 crc kubenswrapper[5136]: I0320 08:59:17.229983 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-wjckm" event={"ID":"f9eddce1-1338-489a-b0e9-f008c33fea0f","Type":"ContainerDied","Data":"456904b2c9b2b9b900c280b5851fc80a26454d886df3722f5e23e7c54d551d62"} Mar 20 08:59:18 crc kubenswrapper[5136]: I0320 08:59:18.593085 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-wjckm" Mar 20 08:59:18 crc kubenswrapper[5136]: I0320 08:59:18.727672 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvzx8\" (UniqueName: \"kubernetes.io/projected/f9eddce1-1338-489a-b0e9-f008c33fea0f-kube-api-access-vvzx8\") pod \"f9eddce1-1338-489a-b0e9-f008c33fea0f\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " Mar 20 08:59:18 crc kubenswrapper[5136]: I0320 08:59:18.728052 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-scripts\") pod \"f9eddce1-1338-489a-b0e9-f008c33fea0f\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " Mar 20 08:59:18 crc kubenswrapper[5136]: I0320 08:59:18.728124 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-config-data\") pod \"f9eddce1-1338-489a-b0e9-f008c33fea0f\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " Mar 20 08:59:18 crc kubenswrapper[5136]: I0320 08:59:18.728217 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-combined-ca-bundle\") pod \"f9eddce1-1338-489a-b0e9-f008c33fea0f\" (UID: 
\"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " Mar 20 08:59:18 crc kubenswrapper[5136]: I0320 08:59:18.733386 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-scripts" (OuterVolumeSpecName: "scripts") pod "f9eddce1-1338-489a-b0e9-f008c33fea0f" (UID: "f9eddce1-1338-489a-b0e9-f008c33fea0f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:18 crc kubenswrapper[5136]: I0320 08:59:18.747092 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9eddce1-1338-489a-b0e9-f008c33fea0f-kube-api-access-vvzx8" (OuterVolumeSpecName: "kube-api-access-vvzx8") pod "f9eddce1-1338-489a-b0e9-f008c33fea0f" (UID: "f9eddce1-1338-489a-b0e9-f008c33fea0f"). InnerVolumeSpecName "kube-api-access-vvzx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:59:18 crc kubenswrapper[5136]: I0320 08:59:18.752830 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-config-data" (OuterVolumeSpecName: "config-data") pod "f9eddce1-1338-489a-b0e9-f008c33fea0f" (UID: "f9eddce1-1338-489a-b0e9-f008c33fea0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:18 crc kubenswrapper[5136]: I0320 08:59:18.762033 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9eddce1-1338-489a-b0e9-f008c33fea0f" (UID: "f9eddce1-1338-489a-b0e9-f008c33fea0f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:18 crc kubenswrapper[5136]: I0320 08:59:18.830398 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvzx8\" (UniqueName: \"kubernetes.io/projected/f9eddce1-1338-489a-b0e9-f008c33fea0f-kube-api-access-vvzx8\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:18 crc kubenswrapper[5136]: I0320 08:59:18.830432 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:18 crc kubenswrapper[5136]: I0320 08:59:18.830441 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:18 crc kubenswrapper[5136]: I0320 08:59:18.830450 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:19 crc kubenswrapper[5136]: I0320 08:59:19.250737 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-wjckm" Mar 20 08:59:19 crc kubenswrapper[5136]: I0320 08:59:19.250754 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-wjckm" event={"ID":"f9eddce1-1338-489a-b0e9-f008c33fea0f","Type":"ContainerDied","Data":"4b2049f62c5cb346cd5e6e89c80b8c597ae5f2242aeb69abd7feebc13a61232d"} Mar 20 08:59:19 crc kubenswrapper[5136]: I0320 08:59:19.251203 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b2049f62c5cb346cd5e6e89c80b8c597ae5f2242aeb69abd7feebc13a61232d" Mar 20 08:59:19 crc kubenswrapper[5136]: I0320 08:59:19.255850 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4790c11-3203-4f22-958f-a67c1242beb0","Type":"ContainerStarted","Data":"4cb2f797386081467747b2d3643bb044441191902352455ca9624fcb6ce13111"} Mar 20 08:59:19 crc kubenswrapper[5136]: I0320 08:59:19.255994 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 20 08:59:19 crc kubenswrapper[5136]: I0320 08:59:19.292044 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.243443995 podStartE2EDuration="7.292025711s" podCreationTimestamp="2026-03-20 08:59:12 +0000 UTC" firstStartedPulling="2026-03-20 08:59:15.232585899 +0000 UTC m=+7787.491897060" lastFinishedPulling="2026-03-20 08:59:18.281167625 +0000 UTC m=+7790.540478776" observedRunningTime="2026-03-20 08:59:19.286776779 +0000 UTC m=+7791.546087940" watchObservedRunningTime="2026-03-20 08:59:19.292025711 +0000 UTC m=+7791.551336862" Mar 20 08:59:20 crc kubenswrapper[5136]: I0320 08:59:20.584644 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.257707 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Mar 20 08:59:24 crc 
kubenswrapper[5136]: E0320 08:59:24.258570 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9eddce1-1338-489a-b0e9-f008c33fea0f" containerName="aodh-db-sync" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.258586 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9eddce1-1338-489a-b0e9-f008c33fea0f" containerName="aodh-db-sync" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.258886 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9eddce1-1338-489a-b0e9-f008c33fea0f" containerName="aodh-db-sync" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.261724 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.266608 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-4k8x9" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.266948 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.267204 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.275151 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.351500 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-scripts\") pod \"aodh-0\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " pod="openstack/aodh-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.351630 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpx8b\" (UniqueName: 
\"kubernetes.io/projected/b2e63488-a737-4c5d-8ec1-12df36065d97-kube-api-access-lpx8b\") pod \"aodh-0\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " pod="openstack/aodh-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.351691 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-config-data\") pod \"aodh-0\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " pod="openstack/aodh-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.351716 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " pod="openstack/aodh-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.453868 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-config-data\") pod \"aodh-0\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " pod="openstack/aodh-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.453942 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " pod="openstack/aodh-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.454120 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-scripts\") pod \"aodh-0\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " pod="openstack/aodh-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.454150 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpx8b\" (UniqueName: \"kubernetes.io/projected/b2e63488-a737-4c5d-8ec1-12df36065d97-kube-api-access-lpx8b\") pod \"aodh-0\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " pod="openstack/aodh-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.459132 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-scripts\") pod \"aodh-0\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " pod="openstack/aodh-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.460363 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " pod="openstack/aodh-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.465152 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-config-data\") pod \"aodh-0\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " pod="openstack/aodh-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.474734 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpx8b\" (UniqueName: \"kubernetes.io/projected/b2e63488-a737-4c5d-8ec1-12df36065d97-kube-api-access-lpx8b\") pod \"aodh-0\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " pod="openstack/aodh-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.591978 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 20 08:59:25 crc kubenswrapper[5136]: I0320 08:59:25.098590 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 20 08:59:25 crc kubenswrapper[5136]: I0320 08:59:25.325750 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b2e63488-a737-4c5d-8ec1-12df36065d97","Type":"ContainerStarted","Data":"d8bdfeca9bd1597fa3d2bc3b892eb75e23fce5575693634908f1e50575aa3005"} Mar 20 08:59:26 crc kubenswrapper[5136]: I0320 08:59:26.303317 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:26 crc kubenswrapper[5136]: I0320 08:59:26.303949 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="proxy-httpd" containerID="cri-o://4cb2f797386081467747b2d3643bb044441191902352455ca9624fcb6ce13111" gracePeriod=30 Mar 20 08:59:26 crc kubenswrapper[5136]: I0320 08:59:26.304068 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="ceilometer-notification-agent" containerID="cri-o://dfb994bb52968a72e193beb61fdb0d2ba31b06ee0870ee3288671db16099e8d8" gracePeriod=30 Mar 20 08:59:26 crc kubenswrapper[5136]: I0320 08:59:26.304065 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="sg-core" containerID="cri-o://64db95b7d486d98993a1930605f414bffad7d70821f1b278f87f8d8dfcb81903" gracePeriod=30 Mar 20 08:59:26 crc kubenswrapper[5136]: I0320 08:59:26.304796 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="ceilometer-central-agent" 
containerID="cri-o://94bbbcf0d712721420b13672832b679990ab864a6cb813e885e5d906666682e2" gracePeriod=30 Mar 20 08:59:26 crc kubenswrapper[5136]: I0320 08:59:26.338303 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b2e63488-a737-4c5d-8ec1-12df36065d97","Type":"ContainerStarted","Data":"92ea8495e9dfbb89165f77881cd9b84fab88074bc3bb11952d375d651c22c915"} Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.364579 5136 generic.go:334] "Generic (PLEG): container finished" podID="d4790c11-3203-4f22-958f-a67c1242beb0" containerID="4cb2f797386081467747b2d3643bb044441191902352455ca9624fcb6ce13111" exitCode=0 Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.366536 5136 generic.go:334] "Generic (PLEG): container finished" podID="d4790c11-3203-4f22-958f-a67c1242beb0" containerID="64db95b7d486d98993a1930605f414bffad7d70821f1b278f87f8d8dfcb81903" exitCode=2 Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.366605 5136 generic.go:334] "Generic (PLEG): container finished" podID="d4790c11-3203-4f22-958f-a67c1242beb0" containerID="dfb994bb52968a72e193beb61fdb0d2ba31b06ee0870ee3288671db16099e8d8" exitCode=0 Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.366678 5136 generic.go:334] "Generic (PLEG): container finished" podID="d4790c11-3203-4f22-958f-a67c1242beb0" containerID="94bbbcf0d712721420b13672832b679990ab864a6cb813e885e5d906666682e2" exitCode=0 Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.364633 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4790c11-3203-4f22-958f-a67c1242beb0","Type":"ContainerDied","Data":"4cb2f797386081467747b2d3643bb044441191902352455ca9624fcb6ce13111"} Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.366879 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d4790c11-3203-4f22-958f-a67c1242beb0","Type":"ContainerDied","Data":"64db95b7d486d98993a1930605f414bffad7d70821f1b278f87f8d8dfcb81903"} Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.366959 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4790c11-3203-4f22-958f-a67c1242beb0","Type":"ContainerDied","Data":"dfb994bb52968a72e193beb61fdb0d2ba31b06ee0870ee3288671db16099e8d8"} Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.367021 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4790c11-3203-4f22-958f-a67c1242beb0","Type":"ContainerDied","Data":"94bbbcf0d712721420b13672832b679990ab864a6cb813e885e5d906666682e2"} Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.439598 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.618033 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qhqq\" (UniqueName: \"kubernetes.io/projected/d4790c11-3203-4f22-958f-a67c1242beb0-kube-api-access-2qhqq\") pod \"d4790c11-3203-4f22-958f-a67c1242beb0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.618098 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4790c11-3203-4f22-958f-a67c1242beb0-log-httpd\") pod \"d4790c11-3203-4f22-958f-a67c1242beb0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.618121 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4790c11-3203-4f22-958f-a67c1242beb0-run-httpd\") pod \"d4790c11-3203-4f22-958f-a67c1242beb0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " Mar 20 08:59:27 crc 
kubenswrapper[5136]: I0320 08:59:27.618190 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-sg-core-conf-yaml\") pod \"d4790c11-3203-4f22-958f-a67c1242beb0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.618242 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-scripts\") pod \"d4790c11-3203-4f22-958f-a67c1242beb0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.618288 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-combined-ca-bundle\") pod \"d4790c11-3203-4f22-958f-a67c1242beb0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.618505 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4790c11-3203-4f22-958f-a67c1242beb0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d4790c11-3203-4f22-958f-a67c1242beb0" (UID: "d4790c11-3203-4f22-958f-a67c1242beb0"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.618351 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-config-data\") pod \"d4790c11-3203-4f22-958f-a67c1242beb0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.618788 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4790c11-3203-4f22-958f-a67c1242beb0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d4790c11-3203-4f22-958f-a67c1242beb0" (UID: "d4790c11-3203-4f22-958f-a67c1242beb0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.619023 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-ceilometer-tls-certs\") pod \"d4790c11-3203-4f22-958f-a67c1242beb0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.619596 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4790c11-3203-4f22-958f-a67c1242beb0-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.619619 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4790c11-3203-4f22-958f-a67c1242beb0-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.622620 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-scripts" (OuterVolumeSpecName: "scripts") pod "d4790c11-3203-4f22-958f-a67c1242beb0" (UID: 
"d4790c11-3203-4f22-958f-a67c1242beb0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.626949 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4790c11-3203-4f22-958f-a67c1242beb0-kube-api-access-2qhqq" (OuterVolumeSpecName: "kube-api-access-2qhqq") pod "d4790c11-3203-4f22-958f-a67c1242beb0" (UID: "d4790c11-3203-4f22-958f-a67c1242beb0"). InnerVolumeSpecName "kube-api-access-2qhqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.651805 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d4790c11-3203-4f22-958f-a67c1242beb0" (UID: "d4790c11-3203-4f22-958f-a67c1242beb0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.708166 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d4790c11-3203-4f22-958f-a67c1242beb0" (UID: "d4790c11-3203-4f22-958f-a67c1242beb0"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.722572 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.722616 5136 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.722630 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qhqq\" (UniqueName: \"kubernetes.io/projected/d4790c11-3203-4f22-958f-a67c1242beb0-kube-api-access-2qhqq\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.722641 5136 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.725993 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4790c11-3203-4f22-958f-a67c1242beb0" (UID: "d4790c11-3203-4f22-958f-a67c1242beb0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.760096 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-config-data" (OuterVolumeSpecName: "config-data") pod "d4790c11-3203-4f22-958f-a67c1242beb0" (UID: "d4790c11-3203-4f22-958f-a67c1242beb0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.824293 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.824336 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.283447 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.381624 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.381634 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4790c11-3203-4f22-958f-a67c1242beb0","Type":"ContainerDied","Data":"ee0fc0df5610cea45ffcc78247a58835bb3ee685c340c2aef6349fea9aee9ded"} Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.381679 5136 scope.go:117] "RemoveContainer" containerID="4cb2f797386081467747b2d3643bb044441191902352455ca9624fcb6ce13111" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.388564 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b2e63488-a737-4c5d-8ec1-12df36065d97","Type":"ContainerStarted","Data":"42609fe8611cecf81531dc42f446b934dedc3a17199479496c8429f2b36967fa"} Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.406093 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:59:28 crc kubenswrapper[5136]: E0320 08:59:28.406457 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.546449 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.564006 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.573716 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:28 crc kubenswrapper[5136]: E0320 08:59:28.574183 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="proxy-httpd" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.574195 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="proxy-httpd" Mar 20 08:59:28 crc kubenswrapper[5136]: E0320 08:59:28.574213 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="sg-core" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.574220 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="sg-core" Mar 20 08:59:28 crc kubenswrapper[5136]: E0320 08:59:28.574231 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="ceilometer-notification-agent" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.574238 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="ceilometer-notification-agent" Mar 
20 08:59:28 crc kubenswrapper[5136]: E0320 08:59:28.574253 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="ceilometer-central-agent" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.574259 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="ceilometer-central-agent" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.574455 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="ceilometer-central-agent" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.574467 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="sg-core" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.574659 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="ceilometer-notification-agent" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.574677 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="proxy-httpd" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.583011 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.585920 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.586017 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.586100 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.591401 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.623117 5136 scope.go:117] "RemoveContainer" containerID="64db95b7d486d98993a1930605f414bffad7d70821f1b278f87f8d8dfcb81903" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.673246 5136 scope.go:117] "RemoveContainer" containerID="dfb994bb52968a72e193beb61fdb0d2ba31b06ee0870ee3288671db16099e8d8" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.694377 5136 scope.go:117] "RemoveContainer" containerID="94bbbcf0d712721420b13672832b679990ab864a6cb813e885e5d906666682e2" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.748177 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.748210 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2srg\" (UniqueName: \"kubernetes.io/projected/a5ddc272-9064-4e30-ba27-01b92989b459-kube-api-access-f2srg\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " 
pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.748291 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5ddc272-9064-4e30-ba27-01b92989b459-run-httpd\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.748318 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.748350 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5ddc272-9064-4e30-ba27-01b92989b459-log-httpd\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.748375 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-config-data\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.748410 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-scripts\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.748450 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.850395 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5ddc272-9064-4e30-ba27-01b92989b459-run-httpd\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.850459 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.850490 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5ddc272-9064-4e30-ba27-01b92989b459-log-httpd\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.850516 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-config-data\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.850556 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-scripts\") pod \"ceilometer-0\" (UID: 
\"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.850610 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.850686 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.850707 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2srg\" (UniqueName: \"kubernetes.io/projected/a5ddc272-9064-4e30-ba27-01b92989b459-kube-api-access-f2srg\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.850829 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5ddc272-9064-4e30-ba27-01b92989b459-run-httpd\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.851332 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5ddc272-9064-4e30-ba27-01b92989b459-log-httpd\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.854371 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.854529 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.856273 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.859760 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-scripts\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.868203 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-config-data\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.868864 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2srg\" (UniqueName: \"kubernetes.io/projected/a5ddc272-9064-4e30-ba27-01b92989b459-kube-api-access-f2srg\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc 
kubenswrapper[5136]: I0320 08:59:28.911567 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 08:59:29 crc kubenswrapper[5136]: I0320 08:59:29.397027 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:29 crc kubenswrapper[5136]: I0320 08:59:29.436987 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b2e63488-a737-4c5d-8ec1-12df36065d97","Type":"ContainerStarted","Data":"e4d81d5cd609468f5967185a1aceb869c881231de4dfdcd090b8f09483f137cc"} Mar 20 08:59:30 crc kubenswrapper[5136]: I0320 08:59:30.406994 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" path="/var/lib/kubelet/pods/d4790c11-3203-4f22-958f-a67c1242beb0/volumes" Mar 20 08:59:30 crc kubenswrapper[5136]: I0320 08:59:30.446125 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b2e63488-a737-4c5d-8ec1-12df36065d97","Type":"ContainerStarted","Data":"15e06db9315f6411797d0f9aa67f1b07e5684d9702a9fc4442dbbc0814ef18e3"} Mar 20 08:59:30 crc kubenswrapper[5136]: I0320 08:59:30.446268 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-api" containerID="cri-o://92ea8495e9dfbb89165f77881cd9b84fab88074bc3bb11952d375d651c22c915" gracePeriod=30 Mar 20 08:59:30 crc kubenswrapper[5136]: I0320 08:59:30.446687 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-listener" containerID="cri-o://15e06db9315f6411797d0f9aa67f1b07e5684d9702a9fc4442dbbc0814ef18e3" gracePeriod=30 Mar 20 08:59:30 crc kubenswrapper[5136]: I0320 08:59:30.446739 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" 
podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-notifier" containerID="cri-o://e4d81d5cd609468f5967185a1aceb869c881231de4dfdcd090b8f09483f137cc" gracePeriod=30 Mar 20 08:59:30 crc kubenswrapper[5136]: I0320 08:59:30.446773 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-evaluator" containerID="cri-o://42609fe8611cecf81531dc42f446b934dedc3a17199479496c8429f2b36967fa" gracePeriod=30 Mar 20 08:59:30 crc kubenswrapper[5136]: I0320 08:59:30.449231 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5ddc272-9064-4e30-ba27-01b92989b459","Type":"ContainerStarted","Data":"229d174f8cce06c18c633b67a2c0beb98cae390b874747fba2acbb9d39bda1b0"} Mar 20 08:59:30 crc kubenswrapper[5136]: I0320 08:59:30.449256 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5ddc272-9064-4e30-ba27-01b92989b459","Type":"ContainerStarted","Data":"40f72fb3e6c0e5affb24c292b9e3430449f93e14ef08ad4d32955ae47cb5c29a"} Mar 20 08:59:30 crc kubenswrapper[5136]: I0320 08:59:30.480195 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:31 crc kubenswrapper[5136]: I0320 08:59:31.459672 5136 generic.go:334] "Generic (PLEG): container finished" podID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerID="42609fe8611cecf81531dc42f446b934dedc3a17199479496c8429f2b36967fa" exitCode=0 Mar 20 08:59:31 crc kubenswrapper[5136]: I0320 08:59:31.460221 5136 generic.go:334] "Generic (PLEG): container finished" podID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerID="92ea8495e9dfbb89165f77881cd9b84fab88074bc3bb11952d375d651c22c915" exitCode=0 Mar 20 08:59:31 crc kubenswrapper[5136]: I0320 08:59:31.459767 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"b2e63488-a737-4c5d-8ec1-12df36065d97","Type":"ContainerDied","Data":"42609fe8611cecf81531dc42f446b934dedc3a17199479496c8429f2b36967fa"} Mar 20 08:59:31 crc kubenswrapper[5136]: I0320 08:59:31.460267 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b2e63488-a737-4c5d-8ec1-12df36065d97","Type":"ContainerDied","Data":"92ea8495e9dfbb89165f77881cd9b84fab88074bc3bb11952d375d651c22c915"} Mar 20 08:59:31 crc kubenswrapper[5136]: I0320 08:59:31.463230 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5ddc272-9064-4e30-ba27-01b92989b459","Type":"ContainerStarted","Data":"335d2c6de4b56db30d29b2c07294a71c35bb6dbf0a8c30f1c4534a2ae9f8c768"} Mar 20 08:59:32 crc kubenswrapper[5136]: I0320 08:59:32.471914 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5ddc272-9064-4e30-ba27-01b92989b459","Type":"ContainerStarted","Data":"c8e154d7ff08c3f1f41112b4757e881ca999f907d35b6404b526712d35258475"} Mar 20 08:59:34 crc kubenswrapper[5136]: I0320 08:59:34.489963 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5ddc272-9064-4e30-ba27-01b92989b459","Type":"ContainerStarted","Data":"58219639c56c354edcb3d456f8a0695240b041537f19bac312faaaf530f1d833"} Mar 20 08:59:34 crc kubenswrapper[5136]: I0320 08:59:34.490294 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="ceilometer-central-agent" containerID="cri-o://229d174f8cce06c18c633b67a2c0beb98cae390b874747fba2acbb9d39bda1b0" gracePeriod=30 Mar 20 08:59:34 crc kubenswrapper[5136]: I0320 08:59:34.490571 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 20 08:59:34 crc kubenswrapper[5136]: I0320 08:59:34.490639 5136 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="proxy-httpd" containerID="cri-o://58219639c56c354edcb3d456f8a0695240b041537f19bac312faaaf530f1d833" gracePeriod=30 Mar 20 08:59:34 crc kubenswrapper[5136]: I0320 08:59:34.490779 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="ceilometer-notification-agent" containerID="cri-o://335d2c6de4b56db30d29b2c07294a71c35bb6dbf0a8c30f1c4534a2ae9f8c768" gracePeriod=30 Mar 20 08:59:34 crc kubenswrapper[5136]: I0320 08:59:34.490890 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="sg-core" containerID="cri-o://c8e154d7ff08c3f1f41112b4757e881ca999f907d35b6404b526712d35258475" gracePeriod=30 Mar 20 08:59:34 crc kubenswrapper[5136]: I0320 08:59:34.517432 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.7189996130000003 podStartE2EDuration="6.517409344s" podCreationTimestamp="2026-03-20 08:59:28 +0000 UTC" firstStartedPulling="2026-03-20 08:59:29.427174068 +0000 UTC m=+7801.686485219" lastFinishedPulling="2026-03-20 08:59:33.225583799 +0000 UTC m=+7805.484894950" observedRunningTime="2026-03-20 08:59:34.511678086 +0000 UTC m=+7806.770989247" watchObservedRunningTime="2026-03-20 08:59:34.517409344 +0000 UTC m=+7806.776720505" Mar 20 08:59:34 crc kubenswrapper[5136]: I0320 08:59:34.527793 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=5.5670325080000005 podStartE2EDuration="10.527768034s" podCreationTimestamp="2026-03-20 08:59:24 +0000 UTC" firstStartedPulling="2026-03-20 08:59:25.09395475 +0000 UTC m=+7797.353265901" lastFinishedPulling="2026-03-20 08:59:30.054690276 +0000 UTC m=+7802.314001427" observedRunningTime="2026-03-20 
08:59:30.49567888 +0000 UTC m=+7802.754990021" watchObservedRunningTime="2026-03-20 08:59:34.527768034 +0000 UTC m=+7806.787079195" Mar 20 08:59:35 crc kubenswrapper[5136]: I0320 08:59:35.502370 5136 generic.go:334] "Generic (PLEG): container finished" podID="a5ddc272-9064-4e30-ba27-01b92989b459" containerID="58219639c56c354edcb3d456f8a0695240b041537f19bac312faaaf530f1d833" exitCode=0 Mar 20 08:59:35 crc kubenswrapper[5136]: I0320 08:59:35.502701 5136 generic.go:334] "Generic (PLEG): container finished" podID="a5ddc272-9064-4e30-ba27-01b92989b459" containerID="c8e154d7ff08c3f1f41112b4757e881ca999f907d35b6404b526712d35258475" exitCode=2 Mar 20 08:59:35 crc kubenswrapper[5136]: I0320 08:59:35.502712 5136 generic.go:334] "Generic (PLEG): container finished" podID="a5ddc272-9064-4e30-ba27-01b92989b459" containerID="335d2c6de4b56db30d29b2c07294a71c35bb6dbf0a8c30f1c4534a2ae9f8c768" exitCode=0 Mar 20 08:59:35 crc kubenswrapper[5136]: I0320 08:59:35.502416 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5ddc272-9064-4e30-ba27-01b92989b459","Type":"ContainerDied","Data":"58219639c56c354edcb3d456f8a0695240b041537f19bac312faaaf530f1d833"} Mar 20 08:59:35 crc kubenswrapper[5136]: I0320 08:59:35.502748 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5ddc272-9064-4e30-ba27-01b92989b459","Type":"ContainerDied","Data":"c8e154d7ff08c3f1f41112b4757e881ca999f907d35b6404b526712d35258475"} Mar 20 08:59:35 crc kubenswrapper[5136]: I0320 08:59:35.502762 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5ddc272-9064-4e30-ba27-01b92989b459","Type":"ContainerDied","Data":"335d2c6de4b56db30d29b2c07294a71c35bb6dbf0a8c30f1c4534a2ae9f8c768"} Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.166922 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.327364 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-combined-ca-bundle\") pod \"a5ddc272-9064-4e30-ba27-01b92989b459\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.327434 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-scripts\") pod \"a5ddc272-9064-4e30-ba27-01b92989b459\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.327481 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-config-data\") pod \"a5ddc272-9064-4e30-ba27-01b92989b459\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.327578 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-ceilometer-tls-certs\") pod \"a5ddc272-9064-4e30-ba27-01b92989b459\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.327632 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5ddc272-9064-4e30-ba27-01b92989b459-run-httpd\") pod \"a5ddc272-9064-4e30-ba27-01b92989b459\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.327667 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-sg-core-conf-yaml\") pod \"a5ddc272-9064-4e30-ba27-01b92989b459\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.328164 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5ddc272-9064-4e30-ba27-01b92989b459-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a5ddc272-9064-4e30-ba27-01b92989b459" (UID: "a5ddc272-9064-4e30-ba27-01b92989b459"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.328374 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5ddc272-9064-4e30-ba27-01b92989b459-log-httpd\") pod \"a5ddc272-9064-4e30-ba27-01b92989b459\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.328904 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5ddc272-9064-4e30-ba27-01b92989b459-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a5ddc272-9064-4e30-ba27-01b92989b459" (UID: "a5ddc272-9064-4e30-ba27-01b92989b459"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.328993 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2srg\" (UniqueName: \"kubernetes.io/projected/a5ddc272-9064-4e30-ba27-01b92989b459-kube-api-access-f2srg\") pod \"a5ddc272-9064-4e30-ba27-01b92989b459\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.330298 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5ddc272-9064-4e30-ba27-01b92989b459-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.330330 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5ddc272-9064-4e30-ba27-01b92989b459-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.333314 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-scripts" (OuterVolumeSpecName: "scripts") pod "a5ddc272-9064-4e30-ba27-01b92989b459" (UID: "a5ddc272-9064-4e30-ba27-01b92989b459"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.336007 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ddc272-9064-4e30-ba27-01b92989b459-kube-api-access-f2srg" (OuterVolumeSpecName: "kube-api-access-f2srg") pod "a5ddc272-9064-4e30-ba27-01b92989b459" (UID: "a5ddc272-9064-4e30-ba27-01b92989b459"). InnerVolumeSpecName "kube-api-access-f2srg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.356040 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a5ddc272-9064-4e30-ba27-01b92989b459" (UID: "a5ddc272-9064-4e30-ba27-01b92989b459"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.390940 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a5ddc272-9064-4e30-ba27-01b92989b459" (UID: "a5ddc272-9064-4e30-ba27-01b92989b459"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.428857 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5ddc272-9064-4e30-ba27-01b92989b459" (UID: "a5ddc272-9064-4e30-ba27-01b92989b459"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.432143 5136 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.432215 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2srg\" (UniqueName: \"kubernetes.io/projected/a5ddc272-9064-4e30-ba27-01b92989b459-kube-api-access-f2srg\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.432235 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.432246 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.432287 5136 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.441966 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-config-data" (OuterVolumeSpecName: "config-data") pod "a5ddc272-9064-4e30-ba27-01b92989b459" (UID: "a5ddc272-9064-4e30-ba27-01b92989b459"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.512982 5136 generic.go:334] "Generic (PLEG): container finished" podID="a5ddc272-9064-4e30-ba27-01b92989b459" containerID="229d174f8cce06c18c633b67a2c0beb98cae390b874747fba2acbb9d39bda1b0" exitCode=0 Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.513037 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5ddc272-9064-4e30-ba27-01b92989b459","Type":"ContainerDied","Data":"229d174f8cce06c18c633b67a2c0beb98cae390b874747fba2acbb9d39bda1b0"} Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.513065 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.513092 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5ddc272-9064-4e30-ba27-01b92989b459","Type":"ContainerDied","Data":"40f72fb3e6c0e5affb24c292b9e3430449f93e14ef08ad4d32955ae47cb5c29a"} Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.513115 5136 scope.go:117] "RemoveContainer" containerID="58219639c56c354edcb3d456f8a0695240b041537f19bac312faaaf530f1d833" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.534387 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.545649 5136 scope.go:117] "RemoveContainer" containerID="c8e154d7ff08c3f1f41112b4757e881ca999f907d35b6404b526712d35258475" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.555772 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.567390 5136 scope.go:117] "RemoveContainer" 
containerID="335d2c6de4b56db30d29b2c07294a71c35bb6dbf0a8c30f1c4534a2ae9f8c768" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.583421 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.602871 5136 scope.go:117] "RemoveContainer" containerID="229d174f8cce06c18c633b67a2c0beb98cae390b874747fba2acbb9d39bda1b0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.603170 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:36 crc kubenswrapper[5136]: E0320 08:59:36.603655 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="proxy-httpd" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.603728 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="proxy-httpd" Mar 20 08:59:36 crc kubenswrapper[5136]: E0320 08:59:36.603846 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="ceilometer-notification-agent" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.603928 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="ceilometer-notification-agent" Mar 20 08:59:36 crc kubenswrapper[5136]: E0320 08:59:36.603990 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="sg-core" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.604041 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="sg-core" Mar 20 08:59:36 crc kubenswrapper[5136]: E0320 08:59:36.604112 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="ceilometer-central-agent" Mar 20 08:59:36 crc 
kubenswrapper[5136]: I0320 08:59:36.604171 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="ceilometer-central-agent" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.604401 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="proxy-httpd" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.604474 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="ceilometer-central-agent" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.604531 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="ceilometer-notification-agent" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.604613 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="sg-core" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.606655 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.609077 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.609499 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.609753 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.618711 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.638747 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj2tp\" (UniqueName: \"kubernetes.io/projected/7dbff142-083b-40b7-a0d7-3f17fa9810e3-kube-api-access-lj2tp\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.638868 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.639026 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.639058 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-config-data\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.639130 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-scripts\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.639162 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dbff142-083b-40b7-a0d7-3f17fa9810e3-run-httpd\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.639244 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.639345 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dbff142-083b-40b7-a0d7-3f17fa9810e3-log-httpd\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.659520 5136 scope.go:117] "RemoveContainer" containerID="58219639c56c354edcb3d456f8a0695240b041537f19bac312faaaf530f1d833" Mar 20 08:59:36 crc kubenswrapper[5136]: E0320 08:59:36.661019 5136 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"58219639c56c354edcb3d456f8a0695240b041537f19bac312faaaf530f1d833\": container with ID starting with 58219639c56c354edcb3d456f8a0695240b041537f19bac312faaaf530f1d833 not found: ID does not exist" containerID="58219639c56c354edcb3d456f8a0695240b041537f19bac312faaaf530f1d833" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.661169 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58219639c56c354edcb3d456f8a0695240b041537f19bac312faaaf530f1d833"} err="failed to get container status \"58219639c56c354edcb3d456f8a0695240b041537f19bac312faaaf530f1d833\": rpc error: code = NotFound desc = could not find container \"58219639c56c354edcb3d456f8a0695240b041537f19bac312faaaf530f1d833\": container with ID starting with 58219639c56c354edcb3d456f8a0695240b041537f19bac312faaaf530f1d833 not found: ID does not exist" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.661285 5136 scope.go:117] "RemoveContainer" containerID="c8e154d7ff08c3f1f41112b4757e881ca999f907d35b6404b526712d35258475" Mar 20 08:59:36 crc kubenswrapper[5136]: E0320 08:59:36.663310 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8e154d7ff08c3f1f41112b4757e881ca999f907d35b6404b526712d35258475\": container with ID starting with c8e154d7ff08c3f1f41112b4757e881ca999f907d35b6404b526712d35258475 not found: ID does not exist" containerID="c8e154d7ff08c3f1f41112b4757e881ca999f907d35b6404b526712d35258475" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.663413 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8e154d7ff08c3f1f41112b4757e881ca999f907d35b6404b526712d35258475"} err="failed to get container status \"c8e154d7ff08c3f1f41112b4757e881ca999f907d35b6404b526712d35258475\": rpc error: code = NotFound desc = could not find container 
\"c8e154d7ff08c3f1f41112b4757e881ca999f907d35b6404b526712d35258475\": container with ID starting with c8e154d7ff08c3f1f41112b4757e881ca999f907d35b6404b526712d35258475 not found: ID does not exist" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.663504 5136 scope.go:117] "RemoveContainer" containerID="335d2c6de4b56db30d29b2c07294a71c35bb6dbf0a8c30f1c4534a2ae9f8c768" Mar 20 08:59:36 crc kubenswrapper[5136]: E0320 08:59:36.664653 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"335d2c6de4b56db30d29b2c07294a71c35bb6dbf0a8c30f1c4534a2ae9f8c768\": container with ID starting with 335d2c6de4b56db30d29b2c07294a71c35bb6dbf0a8c30f1c4534a2ae9f8c768 not found: ID does not exist" containerID="335d2c6de4b56db30d29b2c07294a71c35bb6dbf0a8c30f1c4534a2ae9f8c768" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.664705 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"335d2c6de4b56db30d29b2c07294a71c35bb6dbf0a8c30f1c4534a2ae9f8c768"} err="failed to get container status \"335d2c6de4b56db30d29b2c07294a71c35bb6dbf0a8c30f1c4534a2ae9f8c768\": rpc error: code = NotFound desc = could not find container \"335d2c6de4b56db30d29b2c07294a71c35bb6dbf0a8c30f1c4534a2ae9f8c768\": container with ID starting with 335d2c6de4b56db30d29b2c07294a71c35bb6dbf0a8c30f1c4534a2ae9f8c768 not found: ID does not exist" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.664735 5136 scope.go:117] "RemoveContainer" containerID="229d174f8cce06c18c633b67a2c0beb98cae390b874747fba2acbb9d39bda1b0" Mar 20 08:59:36 crc kubenswrapper[5136]: E0320 08:59:36.665190 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"229d174f8cce06c18c633b67a2c0beb98cae390b874747fba2acbb9d39bda1b0\": container with ID starting with 229d174f8cce06c18c633b67a2c0beb98cae390b874747fba2acbb9d39bda1b0 not found: ID does not exist" 
containerID="229d174f8cce06c18c633b67a2c0beb98cae390b874747fba2acbb9d39bda1b0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.665225 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"229d174f8cce06c18c633b67a2c0beb98cae390b874747fba2acbb9d39bda1b0"} err="failed to get container status \"229d174f8cce06c18c633b67a2c0beb98cae390b874747fba2acbb9d39bda1b0\": rpc error: code = NotFound desc = could not find container \"229d174f8cce06c18c633b67a2c0beb98cae390b874747fba2acbb9d39bda1b0\": container with ID starting with 229d174f8cce06c18c633b67a2c0beb98cae390b874747fba2acbb9d39bda1b0 not found: ID does not exist" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.740990 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.741038 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-config-data\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.741091 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-scripts\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.741117 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dbff142-083b-40b7-a0d7-3f17fa9810e3-run-httpd\") pod \"ceilometer-0\" (UID: 
\"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.741165 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.741211 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dbff142-083b-40b7-a0d7-3f17fa9810e3-log-httpd\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.741296 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj2tp\" (UniqueName: \"kubernetes.io/projected/7dbff142-083b-40b7-a0d7-3f17fa9810e3-kube-api-access-lj2tp\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.741342 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.742600 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dbff142-083b-40b7-a0d7-3f17fa9810e3-log-httpd\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.742773 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dbff142-083b-40b7-a0d7-3f17fa9810e3-run-httpd\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.745776 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-scripts\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.745859 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.746126 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.746229 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-config-data\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.749502 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 
08:59:36.758453 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj2tp\" (UniqueName: \"kubernetes.io/projected/7dbff142-083b-40b7-a0d7-3f17fa9810e3-kube-api-access-lj2tp\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.933474 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.370738 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c75wp"] Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.373582 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c75wp" Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.385950 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c75wp"] Mar 20 08:59:37 crc kubenswrapper[5136]: W0320 08:59:37.440068 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dbff142_083b_40b7_a0d7_3f17fa9810e3.slice/crio-5dafeee7b076cbe9bc559d7bf9a4dfd78115dffe5bc845019a4034a5c5f12b09 WatchSource:0}: Error finding container 5dafeee7b076cbe9bc559d7bf9a4dfd78115dffe5bc845019a4034a5c5f12b09: Status 404 returned error can't find the container with id 5dafeee7b076cbe9bc559d7bf9a4dfd78115dffe5bc845019a4034a5c5f12b09 Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.442320 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.534873 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7dbff142-083b-40b7-a0d7-3f17fa9810e3","Type":"ContainerStarted","Data":"5dafeee7b076cbe9bc559d7bf9a4dfd78115dffe5bc845019a4034a5c5f12b09"} Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.561479 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6tcz\" (UniqueName: \"kubernetes.io/projected/87f36e33-74ba-42e9-82e7-229e00db3895-kube-api-access-t6tcz\") pod \"certified-operators-c75wp\" (UID: \"87f36e33-74ba-42e9-82e7-229e00db3895\") " pod="openshift-marketplace/certified-operators-c75wp" Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.561582 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f36e33-74ba-42e9-82e7-229e00db3895-catalog-content\") pod \"certified-operators-c75wp\" (UID: \"87f36e33-74ba-42e9-82e7-229e00db3895\") " pod="openshift-marketplace/certified-operators-c75wp" Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.561976 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f36e33-74ba-42e9-82e7-229e00db3895-utilities\") pod \"certified-operators-c75wp\" (UID: \"87f36e33-74ba-42e9-82e7-229e00db3895\") " pod="openshift-marketplace/certified-operators-c75wp" Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.664827 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f36e33-74ba-42e9-82e7-229e00db3895-utilities\") pod \"certified-operators-c75wp\" (UID: \"87f36e33-74ba-42e9-82e7-229e00db3895\") " pod="openshift-marketplace/certified-operators-c75wp" Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.665052 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6tcz\" (UniqueName: 
\"kubernetes.io/projected/87f36e33-74ba-42e9-82e7-229e00db3895-kube-api-access-t6tcz\") pod \"certified-operators-c75wp\" (UID: \"87f36e33-74ba-42e9-82e7-229e00db3895\") " pod="openshift-marketplace/certified-operators-c75wp" Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.665097 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f36e33-74ba-42e9-82e7-229e00db3895-catalog-content\") pod \"certified-operators-c75wp\" (UID: \"87f36e33-74ba-42e9-82e7-229e00db3895\") " pod="openshift-marketplace/certified-operators-c75wp" Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.665410 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f36e33-74ba-42e9-82e7-229e00db3895-utilities\") pod \"certified-operators-c75wp\" (UID: \"87f36e33-74ba-42e9-82e7-229e00db3895\") " pod="openshift-marketplace/certified-operators-c75wp" Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.665643 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f36e33-74ba-42e9-82e7-229e00db3895-catalog-content\") pod \"certified-operators-c75wp\" (UID: \"87f36e33-74ba-42e9-82e7-229e00db3895\") " pod="openshift-marketplace/certified-operators-c75wp" Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.682167 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6tcz\" (UniqueName: \"kubernetes.io/projected/87f36e33-74ba-42e9-82e7-229e00db3895-kube-api-access-t6tcz\") pod \"certified-operators-c75wp\" (UID: \"87f36e33-74ba-42e9-82e7-229e00db3895\") " pod="openshift-marketplace/certified-operators-c75wp" Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.703148 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c75wp" Mar 20 08:59:38 crc kubenswrapper[5136]: I0320 08:59:38.205343 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c75wp"] Mar 20 08:59:38 crc kubenswrapper[5136]: W0320 08:59:38.221337 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87f36e33_74ba_42e9_82e7_229e00db3895.slice/crio-2810ab930e57f507f42644732b5541dac7657e6fae6a56d60c682ec4715b179f WatchSource:0}: Error finding container 2810ab930e57f507f42644732b5541dac7657e6fae6a56d60c682ec4715b179f: Status 404 returned error can't find the container with id 2810ab930e57f507f42644732b5541dac7657e6fae6a56d60c682ec4715b179f Mar 20 08:59:38 crc kubenswrapper[5136]: I0320 08:59:38.418089 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" path="/var/lib/kubelet/pods/a5ddc272-9064-4e30-ba27-01b92989b459/volumes" Mar 20 08:59:38 crc kubenswrapper[5136]: I0320 08:59:38.544285 5136 generic.go:334] "Generic (PLEG): container finished" podID="87f36e33-74ba-42e9-82e7-229e00db3895" containerID="e15c2ed57fd06f3e4c968e87c666df2b037c989af254b85bf1d37b8922f813d3" exitCode=0 Mar 20 08:59:38 crc kubenswrapper[5136]: I0320 08:59:38.544358 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c75wp" event={"ID":"87f36e33-74ba-42e9-82e7-229e00db3895","Type":"ContainerDied","Data":"e15c2ed57fd06f3e4c968e87c666df2b037c989af254b85bf1d37b8922f813d3"} Mar 20 08:59:38 crc kubenswrapper[5136]: I0320 08:59:38.544390 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c75wp" event={"ID":"87f36e33-74ba-42e9-82e7-229e00db3895","Type":"ContainerStarted","Data":"2810ab930e57f507f42644732b5541dac7657e6fae6a56d60c682ec4715b179f"} Mar 20 08:59:38 crc kubenswrapper[5136]: I0320 
08:59:38.547956 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dbff142-083b-40b7-a0d7-3f17fa9810e3","Type":"ContainerStarted","Data":"d9e0d70b1ab5d2268043ec21cc179228d03e79f0c594fbabbe78f8b02d15cad9"} Mar 20 08:59:38 crc kubenswrapper[5136]: I0320 08:59:38.548019 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dbff142-083b-40b7-a0d7-3f17fa9810e3","Type":"ContainerStarted","Data":"bdb39aa61401fd83441079957dca9d830d4f38dae3c0e327bcfc878794649036"} Mar 20 08:59:39 crc kubenswrapper[5136]: I0320 08:59:39.559378 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c75wp" event={"ID":"87f36e33-74ba-42e9-82e7-229e00db3895","Type":"ContainerStarted","Data":"290bb66209b0236e6b39348ed4358049c6230a50c466c14eeec8be13fa21a4fe"} Mar 20 08:59:39 crc kubenswrapper[5136]: I0320 08:59:39.561894 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dbff142-083b-40b7-a0d7-3f17fa9810e3","Type":"ContainerStarted","Data":"22214e3addd0a5c3b338ef171790692102833676b69426afd997304cb1243d2d"} Mar 20 08:59:40 crc kubenswrapper[5136]: I0320 08:59:40.401056 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:59:40 crc kubenswrapper[5136]: E0320 08:59:40.402002 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:59:41 crc kubenswrapper[5136]: I0320 08:59:41.117460 5136 scope.go:117] "RemoveContainer" 
containerID="997cb1e45862f8242bc1ed8efcbcfbaf8f8a8c5b47fc4938e5bd8fa980e12e82" Mar 20 08:59:41 crc kubenswrapper[5136]: I0320 08:59:41.165874 5136 scope.go:117] "RemoveContainer" containerID="429dc8682418b5dd369adad65e15254f5660e5ac47e728d920acf519227996a5" Mar 20 08:59:41 crc kubenswrapper[5136]: I0320 08:59:41.192190 5136 scope.go:117] "RemoveContainer" containerID="f82be34aae682e8c29705602a5f53ed7c575b686a407aea8e3a4986b123ff8de" Mar 20 08:59:41 crc kubenswrapper[5136]: I0320 08:59:41.218081 5136 scope.go:117] "RemoveContainer" containerID="44fe55ac22ffb29c918ccdbeaa595a57a885f7bfeba75321b48d4b09c0926e19" Mar 20 08:59:41 crc kubenswrapper[5136]: I0320 08:59:41.592850 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dbff142-083b-40b7-a0d7-3f17fa9810e3","Type":"ContainerStarted","Data":"65f0fcd421f6ec548cf9de08170a35a1209f40b76c2fe57dae5b8d4eb78f76fb"} Mar 20 08:59:41 crc kubenswrapper[5136]: I0320 08:59:41.593108 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 20 08:59:41 crc kubenswrapper[5136]: I0320 08:59:41.598577 5136 generic.go:334] "Generic (PLEG): container finished" podID="87f36e33-74ba-42e9-82e7-229e00db3895" containerID="290bb66209b0236e6b39348ed4358049c6230a50c466c14eeec8be13fa21a4fe" exitCode=0 Mar 20 08:59:41 crc kubenswrapper[5136]: I0320 08:59:41.598635 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c75wp" event={"ID":"87f36e33-74ba-42e9-82e7-229e00db3895","Type":"ContainerDied","Data":"290bb66209b0236e6b39348ed4358049c6230a50c466c14eeec8be13fa21a4fe"} Mar 20 08:59:41 crc kubenswrapper[5136]: I0320 08:59:41.619600 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.216940152 podStartE2EDuration="5.619578939s" podCreationTimestamp="2026-03-20 08:59:36 +0000 UTC" firstStartedPulling="2026-03-20 08:59:37.442193836 +0000 UTC 
m=+7809.701504987" lastFinishedPulling="2026-03-20 08:59:40.844832613 +0000 UTC m=+7813.104143774" observedRunningTime="2026-03-20 08:59:41.614986587 +0000 UTC m=+7813.874297768" watchObservedRunningTime="2026-03-20 08:59:41.619578939 +0000 UTC m=+7813.878890090" Mar 20 08:59:42 crc kubenswrapper[5136]: I0320 08:59:42.623324 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c75wp" event={"ID":"87f36e33-74ba-42e9-82e7-229e00db3895","Type":"ContainerStarted","Data":"1c9496e48575b7096161d534027b4f4f5027e5a2513c3c45e1e17a643e2d0547"} Mar 20 08:59:42 crc kubenswrapper[5136]: I0320 08:59:42.647952 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c75wp" podStartSLOduration=2.1507553440000002 podStartE2EDuration="5.647927897s" podCreationTimestamp="2026-03-20 08:59:37 +0000 UTC" firstStartedPulling="2026-03-20 08:59:38.547407624 +0000 UTC m=+7810.806718775" lastFinishedPulling="2026-03-20 08:59:42.044580177 +0000 UTC m=+7814.303891328" observedRunningTime="2026-03-20 08:59:42.645014007 +0000 UTC m=+7814.904325198" watchObservedRunningTime="2026-03-20 08:59:42.647927897 +0000 UTC m=+7814.907239048" Mar 20 08:59:47 crc kubenswrapper[5136]: I0320 08:59:47.702883 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c75wp" Mar 20 08:59:47 crc kubenswrapper[5136]: I0320 08:59:47.703200 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c75wp" Mar 20 08:59:47 crc kubenswrapper[5136]: I0320 08:59:47.755657 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c75wp" Mar 20 08:59:48 crc kubenswrapper[5136]: I0320 08:59:48.730235 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c75wp" Mar 20 
08:59:48 crc kubenswrapper[5136]: I0320 08:59:48.788233 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c75wp"] Mar 20 08:59:50 crc kubenswrapper[5136]: I0320 08:59:50.708102 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c75wp" podUID="87f36e33-74ba-42e9-82e7-229e00db3895" containerName="registry-server" containerID="cri-o://1c9496e48575b7096161d534027b4f4f5027e5a2513c3c45e1e17a643e2d0547" gracePeriod=2 Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.205391 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c75wp" Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.265053 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f36e33-74ba-42e9-82e7-229e00db3895-utilities\") pod \"87f36e33-74ba-42e9-82e7-229e00db3895\" (UID: \"87f36e33-74ba-42e9-82e7-229e00db3895\") " Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.265132 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6tcz\" (UniqueName: \"kubernetes.io/projected/87f36e33-74ba-42e9-82e7-229e00db3895-kube-api-access-t6tcz\") pod \"87f36e33-74ba-42e9-82e7-229e00db3895\" (UID: \"87f36e33-74ba-42e9-82e7-229e00db3895\") " Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.266196 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f36e33-74ba-42e9-82e7-229e00db3895-utilities" (OuterVolumeSpecName: "utilities") pod "87f36e33-74ba-42e9-82e7-229e00db3895" (UID: "87f36e33-74ba-42e9-82e7-229e00db3895"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.271627 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f36e33-74ba-42e9-82e7-229e00db3895-kube-api-access-t6tcz" (OuterVolumeSpecName: "kube-api-access-t6tcz") pod "87f36e33-74ba-42e9-82e7-229e00db3895" (UID: "87f36e33-74ba-42e9-82e7-229e00db3895"). InnerVolumeSpecName "kube-api-access-t6tcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.366998 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f36e33-74ba-42e9-82e7-229e00db3895-catalog-content\") pod \"87f36e33-74ba-42e9-82e7-229e00db3895\" (UID: \"87f36e33-74ba-42e9-82e7-229e00db3895\") " Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.367405 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f36e33-74ba-42e9-82e7-229e00db3895-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.367428 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6tcz\" (UniqueName: \"kubernetes.io/projected/87f36e33-74ba-42e9-82e7-229e00db3895-kube-api-access-t6tcz\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.423495 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f36e33-74ba-42e9-82e7-229e00db3895-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87f36e33-74ba-42e9-82e7-229e00db3895" (UID: "87f36e33-74ba-42e9-82e7-229e00db3895"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.469503 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f36e33-74ba-42e9-82e7-229e00db3895-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.717093 5136 generic.go:334] "Generic (PLEG): container finished" podID="87f36e33-74ba-42e9-82e7-229e00db3895" containerID="1c9496e48575b7096161d534027b4f4f5027e5a2513c3c45e1e17a643e2d0547" exitCode=0 Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.717164 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c75wp" Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.717169 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c75wp" event={"ID":"87f36e33-74ba-42e9-82e7-229e00db3895","Type":"ContainerDied","Data":"1c9496e48575b7096161d534027b4f4f5027e5a2513c3c45e1e17a643e2d0547"} Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.717549 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c75wp" event={"ID":"87f36e33-74ba-42e9-82e7-229e00db3895","Type":"ContainerDied","Data":"2810ab930e57f507f42644732b5541dac7657e6fae6a56d60c682ec4715b179f"} Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.717567 5136 scope.go:117] "RemoveContainer" containerID="1c9496e48575b7096161d534027b4f4f5027e5a2513c3c45e1e17a643e2d0547" Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.753651 5136 scope.go:117] "RemoveContainer" containerID="290bb66209b0236e6b39348ed4358049c6230a50c466c14eeec8be13fa21a4fe" Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.758074 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c75wp"] Mar 20 08:59:51 crc kubenswrapper[5136]: 
I0320 08:59:51.771255 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c75wp"] Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.797884 5136 scope.go:117] "RemoveContainer" containerID="e15c2ed57fd06f3e4c968e87c666df2b037c989af254b85bf1d37b8922f813d3" Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.840783 5136 scope.go:117] "RemoveContainer" containerID="1c9496e48575b7096161d534027b4f4f5027e5a2513c3c45e1e17a643e2d0547" Mar 20 08:59:51 crc kubenswrapper[5136]: E0320 08:59:51.841857 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c9496e48575b7096161d534027b4f4f5027e5a2513c3c45e1e17a643e2d0547\": container with ID starting with 1c9496e48575b7096161d534027b4f4f5027e5a2513c3c45e1e17a643e2d0547 not found: ID does not exist" containerID="1c9496e48575b7096161d534027b4f4f5027e5a2513c3c45e1e17a643e2d0547" Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.841920 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c9496e48575b7096161d534027b4f4f5027e5a2513c3c45e1e17a643e2d0547"} err="failed to get container status \"1c9496e48575b7096161d534027b4f4f5027e5a2513c3c45e1e17a643e2d0547\": rpc error: code = NotFound desc = could not find container \"1c9496e48575b7096161d534027b4f4f5027e5a2513c3c45e1e17a643e2d0547\": container with ID starting with 1c9496e48575b7096161d534027b4f4f5027e5a2513c3c45e1e17a643e2d0547 not found: ID does not exist" Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.841954 5136 scope.go:117] "RemoveContainer" containerID="290bb66209b0236e6b39348ed4358049c6230a50c466c14eeec8be13fa21a4fe" Mar 20 08:59:51 crc kubenswrapper[5136]: E0320 08:59:51.842318 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"290bb66209b0236e6b39348ed4358049c6230a50c466c14eeec8be13fa21a4fe\": container 
with ID starting with 290bb66209b0236e6b39348ed4358049c6230a50c466c14eeec8be13fa21a4fe not found: ID does not exist" containerID="290bb66209b0236e6b39348ed4358049c6230a50c466c14eeec8be13fa21a4fe" Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.842342 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"290bb66209b0236e6b39348ed4358049c6230a50c466c14eeec8be13fa21a4fe"} err="failed to get container status \"290bb66209b0236e6b39348ed4358049c6230a50c466c14eeec8be13fa21a4fe\": rpc error: code = NotFound desc = could not find container \"290bb66209b0236e6b39348ed4358049c6230a50c466c14eeec8be13fa21a4fe\": container with ID starting with 290bb66209b0236e6b39348ed4358049c6230a50c466c14eeec8be13fa21a4fe not found: ID does not exist" Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.842360 5136 scope.go:117] "RemoveContainer" containerID="e15c2ed57fd06f3e4c968e87c666df2b037c989af254b85bf1d37b8922f813d3" Mar 20 08:59:51 crc kubenswrapper[5136]: E0320 08:59:51.842778 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e15c2ed57fd06f3e4c968e87c666df2b037c989af254b85bf1d37b8922f813d3\": container with ID starting with e15c2ed57fd06f3e4c968e87c666df2b037c989af254b85bf1d37b8922f813d3 not found: ID does not exist" containerID="e15c2ed57fd06f3e4c968e87c666df2b037c989af254b85bf1d37b8922f813d3" Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.842855 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e15c2ed57fd06f3e4c968e87c666df2b037c989af254b85bf1d37b8922f813d3"} err="failed to get container status \"e15c2ed57fd06f3e4c968e87c666df2b037c989af254b85bf1d37b8922f813d3\": rpc error: code = NotFound desc = could not find container \"e15c2ed57fd06f3e4c968e87c666df2b037c989af254b85bf1d37b8922f813d3\": container with ID starting with e15c2ed57fd06f3e4c968e87c666df2b037c989af254b85bf1d37b8922f813d3 not 
found: ID does not exist" Mar 20 08:59:52 crc kubenswrapper[5136]: I0320 08:59:52.407240 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f36e33-74ba-42e9-82e7-229e00db3895" path="/var/lib/kubelet/pods/87f36e33-74ba-42e9-82e7-229e00db3895/volumes" Mar 20 08:59:53 crc kubenswrapper[5136]: I0320 08:59:53.396925 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:59:53 crc kubenswrapper[5136]: E0320 08:59:53.397471 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.173215 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566620-sh7c8"] Mar 20 09:00:00 crc kubenswrapper[5136]: E0320 09:00:00.174641 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f36e33-74ba-42e9-82e7-229e00db3895" containerName="extract-content" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.174661 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f36e33-74ba-42e9-82e7-229e00db3895" containerName="extract-content" Mar 20 09:00:00 crc kubenswrapper[5136]: E0320 09:00:00.174695 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f36e33-74ba-42e9-82e7-229e00db3895" containerName="extract-utilities" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.174705 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f36e33-74ba-42e9-82e7-229e00db3895" containerName="extract-utilities" Mar 20 09:00:00 crc kubenswrapper[5136]: E0320 09:00:00.174722 5136 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f36e33-74ba-42e9-82e7-229e00db3895" containerName="registry-server" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.174729 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f36e33-74ba-42e9-82e7-229e00db3895" containerName="registry-server" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.174993 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f36e33-74ba-42e9-82e7-229e00db3895" containerName="registry-server" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.175774 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566620-sh7c8" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.179157 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.179415 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.180707 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.187742 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc"] Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.189009 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.193737 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.194294 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.201824 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566620-sh7c8"] Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.236901 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc"] Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.287847 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw2vk\" (UniqueName: \"kubernetes.io/projected/fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91-kube-api-access-mw2vk\") pod \"auto-csr-approver-29566620-sh7c8\" (UID: \"fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91\") " pod="openshift-infra/auto-csr-approver-29566620-sh7c8" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.288192 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kwcf\" (UniqueName: \"kubernetes.io/projected/927cb714-a185-49ad-a263-0d750b85ca34-kube-api-access-8kwcf\") pod \"collect-profiles-29566620-rtpfc\" (UID: \"927cb714-a185-49ad-a263-0d750b85ca34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.288282 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/927cb714-a185-49ad-a263-0d750b85ca34-secret-volume\") pod \"collect-profiles-29566620-rtpfc\" (UID: \"927cb714-a185-49ad-a263-0d750b85ca34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.288377 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/927cb714-a185-49ad-a263-0d750b85ca34-config-volume\") pod \"collect-profiles-29566620-rtpfc\" (UID: \"927cb714-a185-49ad-a263-0d750b85ca34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.389892 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/927cb714-a185-49ad-a263-0d750b85ca34-secret-volume\") pod \"collect-profiles-29566620-rtpfc\" (UID: \"927cb714-a185-49ad-a263-0d750b85ca34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.390034 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/927cb714-a185-49ad-a263-0d750b85ca34-config-volume\") pod \"collect-profiles-29566620-rtpfc\" (UID: \"927cb714-a185-49ad-a263-0d750b85ca34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.390306 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw2vk\" (UniqueName: \"kubernetes.io/projected/fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91-kube-api-access-mw2vk\") pod \"auto-csr-approver-29566620-sh7c8\" (UID: \"fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91\") " pod="openshift-infra/auto-csr-approver-29566620-sh7c8" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.390366 
5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kwcf\" (UniqueName: \"kubernetes.io/projected/927cb714-a185-49ad-a263-0d750b85ca34-kube-api-access-8kwcf\") pod \"collect-profiles-29566620-rtpfc\" (UID: \"927cb714-a185-49ad-a263-0d750b85ca34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.391146 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/927cb714-a185-49ad-a263-0d750b85ca34-config-volume\") pod \"collect-profiles-29566620-rtpfc\" (UID: \"927cb714-a185-49ad-a263-0d750b85ca34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.402915 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/927cb714-a185-49ad-a263-0d750b85ca34-secret-volume\") pod \"collect-profiles-29566620-rtpfc\" (UID: \"927cb714-a185-49ad-a263-0d750b85ca34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.406387 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw2vk\" (UniqueName: \"kubernetes.io/projected/fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91-kube-api-access-mw2vk\") pod \"auto-csr-approver-29566620-sh7c8\" (UID: \"fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91\") " pod="openshift-infra/auto-csr-approver-29566620-sh7c8" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.418934 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kwcf\" (UniqueName: \"kubernetes.io/projected/927cb714-a185-49ad-a263-0d750b85ca34-kube-api-access-8kwcf\") pod \"collect-profiles-29566620-rtpfc\" (UID: \"927cb714-a185-49ad-a263-0d750b85ca34\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.510066 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566620-sh7c8" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.524749 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.828019 5136 generic.go:334] "Generic (PLEG): container finished" podID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerID="15e06db9315f6411797d0f9aa67f1b07e5684d9702a9fc4442dbbc0814ef18e3" exitCode=137 Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.828284 5136 generic.go:334] "Generic (PLEG): container finished" podID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerID="e4d81d5cd609468f5967185a1aceb869c881231de4dfdcd090b8f09483f137cc" exitCode=137 Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.828305 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b2e63488-a737-4c5d-8ec1-12df36065d97","Type":"ContainerDied","Data":"15e06db9315f6411797d0f9aa67f1b07e5684d9702a9fc4442dbbc0814ef18e3"} Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.828328 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b2e63488-a737-4c5d-8ec1-12df36065d97","Type":"ContainerDied","Data":"e4d81d5cd609468f5967185a1aceb869c881231de4dfdcd090b8f09483f137cc"} Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.927456 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.003884 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-config-data\") pod \"b2e63488-a737-4c5d-8ec1-12df36065d97\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.004010 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpx8b\" (UniqueName: \"kubernetes.io/projected/b2e63488-a737-4c5d-8ec1-12df36065d97-kube-api-access-lpx8b\") pod \"b2e63488-a737-4c5d-8ec1-12df36065d97\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.004159 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-scripts\") pod \"b2e63488-a737-4c5d-8ec1-12df36065d97\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.004191 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-combined-ca-bundle\") pod \"b2e63488-a737-4c5d-8ec1-12df36065d97\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.010789 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2e63488-a737-4c5d-8ec1-12df36065d97-kube-api-access-lpx8b" (OuterVolumeSpecName: "kube-api-access-lpx8b") pod "b2e63488-a737-4c5d-8ec1-12df36065d97" (UID: "b2e63488-a737-4c5d-8ec1-12df36065d97"). InnerVolumeSpecName "kube-api-access-lpx8b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.011953 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-scripts" (OuterVolumeSpecName: "scripts") pod "b2e63488-a737-4c5d-8ec1-12df36065d97" (UID: "b2e63488-a737-4c5d-8ec1-12df36065d97"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.059879 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc"] Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.106504 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpx8b\" (UniqueName: \"kubernetes.io/projected/b2e63488-a737-4c5d-8ec1-12df36065d97-kube-api-access-lpx8b\") on node \"crc\" DevicePath \"\"" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.106810 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.139968 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2e63488-a737-4c5d-8ec1-12df36065d97" (UID: "b2e63488-a737-4c5d-8ec1-12df36065d97"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.176329 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566620-sh7c8"] Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.185615 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.188160 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-config-data" (OuterVolumeSpecName: "config-data") pod "b2e63488-a737-4c5d-8ec1-12df36065d97" (UID: "b2e63488-a737-4c5d-8ec1-12df36065d97"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.208483 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.208514 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.840421 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566620-sh7c8" event={"ID":"fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91","Type":"ContainerStarted","Data":"6d284284512c2c1cff2dd9339264f2f06fd3e69ba0decda4bba7cddddc09cd1e"} Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.843851 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b2e63488-a737-4c5d-8ec1-12df36065d97","Type":"ContainerDied","Data":"d8bdfeca9bd1597fa3d2bc3b892eb75e23fce5575693634908f1e50575aa3005"} Mar 20 09:00:01 crc 
kubenswrapper[5136]: I0320 09:00:01.843888 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.843917 5136 scope.go:117] "RemoveContainer" containerID="15e06db9315f6411797d0f9aa67f1b07e5684d9702a9fc4442dbbc0814ef18e3" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.845976 5136 generic.go:334] "Generic (PLEG): container finished" podID="927cb714-a185-49ad-a263-0d750b85ca34" containerID="cef340038d5981403f73c3d33f53d35230ad92fb789bb9c08b085b98f0a9ee81" exitCode=0 Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.846037 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" event={"ID":"927cb714-a185-49ad-a263-0d750b85ca34","Type":"ContainerDied","Data":"cef340038d5981403f73c3d33f53d35230ad92fb789bb9c08b085b98f0a9ee81"} Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.846074 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" event={"ID":"927cb714-a185-49ad-a263-0d750b85ca34","Type":"ContainerStarted","Data":"22e95ea38d66d8161e5b6e963b72527b10914b75e6e8d6825acfbe4f57546296"} Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.864580 5136 scope.go:117] "RemoveContainer" containerID="e4d81d5cd609468f5967185a1aceb869c881231de4dfdcd090b8f09483f137cc" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.887756 5136 scope.go:117] "RemoveContainer" containerID="42609fe8611cecf81531dc42f446b934dedc3a17199479496c8429f2b36967fa" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.890774 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.902424 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.911638 5136 scope.go:117] 
"RemoveContainer" containerID="92ea8495e9dfbb89165f77881cd9b84fab88074bc3bb11952d375d651c22c915" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.921530 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Mar 20 09:00:01 crc kubenswrapper[5136]: E0320 09:00:01.922047 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-notifier" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.922068 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-notifier" Mar 20 09:00:01 crc kubenswrapper[5136]: E0320 09:00:01.922086 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-api" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.922093 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-api" Mar 20 09:00:01 crc kubenswrapper[5136]: E0320 09:00:01.922111 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-listener" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.922122 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-listener" Mar 20 09:00:01 crc kubenswrapper[5136]: E0320 09:00:01.922131 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-evaluator" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.922137 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-evaluator" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.922312 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" 
containerName="aodh-evaluator" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.922331 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-api" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.922340 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-notifier" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.922350 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-listener" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.924376 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.929305 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.929532 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.929804 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.929889 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.932271 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-4k8x9" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.948501 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.024914 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-internal-tls-certs\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.025121 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-public-tls-certs\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.025282 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-combined-ca-bundle\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.025372 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-config-data\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.025413 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-scripts\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.025447 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf792\" (UniqueName: \"kubernetes.io/projected/040731fb-85ee-40ac-9ea2-3627a5f48766-kube-api-access-hf792\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 
crc kubenswrapper[5136]: I0320 09:00:02.127411 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-config-data\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.127474 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-scripts\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.127507 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf792\" (UniqueName: \"kubernetes.io/projected/040731fb-85ee-40ac-9ea2-3627a5f48766-kube-api-access-hf792\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.127651 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-internal-tls-certs\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.128293 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-public-tls-certs\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.128381 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-combined-ca-bundle\") pod \"aodh-0\" (UID: 
\"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.131728 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-public-tls-certs\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.132537 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-combined-ca-bundle\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.133388 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-config-data\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.135426 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-scripts\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.135924 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-internal-tls-certs\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.150902 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf792\" (UniqueName: \"kubernetes.io/projected/040731fb-85ee-40ac-9ea2-3627a5f48766-kube-api-access-hf792\") 
pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.242233 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.416438 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" path="/var/lib/kubelet/pods/b2e63488-a737-4c5d-8ec1-12df36065d97/volumes" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.693116 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.856298 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"040731fb-85ee-40ac-9ea2-3627a5f48766","Type":"ContainerStarted","Data":"c5f3a5b62a724af9b3292dfbea60cc84cb5ca65e111a8f8018f79664063a08d4"} Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.227190 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.359873 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/927cb714-a185-49ad-a263-0d750b85ca34-config-volume\") pod \"927cb714-a185-49ad-a263-0d750b85ca34\" (UID: \"927cb714-a185-49ad-a263-0d750b85ca34\") " Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.359952 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/927cb714-a185-49ad-a263-0d750b85ca34-secret-volume\") pod \"927cb714-a185-49ad-a263-0d750b85ca34\" (UID: \"927cb714-a185-49ad-a263-0d750b85ca34\") " Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.360037 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kwcf\" (UniqueName: \"kubernetes.io/projected/927cb714-a185-49ad-a263-0d750b85ca34-kube-api-access-8kwcf\") pod \"927cb714-a185-49ad-a263-0d750b85ca34\" (UID: \"927cb714-a185-49ad-a263-0d750b85ca34\") " Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.360870 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/927cb714-a185-49ad-a263-0d750b85ca34-config-volume" (OuterVolumeSpecName: "config-volume") pod "927cb714-a185-49ad-a263-0d750b85ca34" (UID: "927cb714-a185-49ad-a263-0d750b85ca34"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.365606 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/927cb714-a185-49ad-a263-0d750b85ca34-kube-api-access-8kwcf" (OuterVolumeSpecName: "kube-api-access-8kwcf") pod "927cb714-a185-49ad-a263-0d750b85ca34" (UID: "927cb714-a185-49ad-a263-0d750b85ca34"). 
InnerVolumeSpecName "kube-api-access-8kwcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.366475 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927cb714-a185-49ad-a263-0d750b85ca34-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "927cb714-a185-49ad-a263-0d750b85ca34" (UID: "927cb714-a185-49ad-a263-0d750b85ca34"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.462572 5136 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/927cb714-a185-49ad-a263-0d750b85ca34-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.462610 5136 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/927cb714-a185-49ad-a263-0d750b85ca34-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.462622 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kwcf\" (UniqueName: \"kubernetes.io/projected/927cb714-a185-49ad-a263-0d750b85ca34-kube-api-access-8kwcf\") on node \"crc\" DevicePath \"\"" Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.866317 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" event={"ID":"927cb714-a185-49ad-a263-0d750b85ca34","Type":"ContainerDied","Data":"22e95ea38d66d8161e5b6e963b72527b10914b75e6e8d6825acfbe4f57546296"} Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.866928 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22e95ea38d66d8161e5b6e963b72527b10914b75e6e8d6825acfbe4f57546296" Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.866594 5136 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.868253 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"040731fb-85ee-40ac-9ea2-3627a5f48766","Type":"ContainerStarted","Data":"7cef857842ff9d2b9ec6fba6fced2a4a47da1ca6826d17831d4379411662258d"} Mar 20 09:00:04 crc kubenswrapper[5136]: I0320 09:00:04.299529 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms"] Mar 20 09:00:04 crc kubenswrapper[5136]: I0320 09:00:04.308014 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms"] Mar 20 09:00:04 crc kubenswrapper[5136]: I0320 09:00:04.412986 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02161682-1526-46e0-aaa6-d09c6758943c" path="/var/lib/kubelet/pods/02161682-1526-46e0-aaa6-d09c6758943c/volumes" Mar 20 09:00:04 crc kubenswrapper[5136]: I0320 09:00:04.880470 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"040731fb-85ee-40ac-9ea2-3627a5f48766","Type":"ContainerStarted","Data":"dcd83e13ab91d1b3d212755c407d51e55f888a96ac55ba1a109bfc09166fa35d"} Mar 20 09:00:04 crc kubenswrapper[5136]: I0320 09:00:04.880704 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"040731fb-85ee-40ac-9ea2-3627a5f48766","Type":"ContainerStarted","Data":"2ead91f10403f4d804be86964d57e08ade2602b5155572d57113b707313fe0a4"} Mar 20 09:00:05 crc kubenswrapper[5136]: I0320 09:00:05.893490 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"040731fb-85ee-40ac-9ea2-3627a5f48766","Type":"ContainerStarted","Data":"7e50b645c14d1546ec6ea5f4cc398a09d244fd42ce33f42ece0b4ffe6904f5e2"} Mar 20 09:00:05 crc kubenswrapper[5136]: I0320 
09:00:05.895329 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566620-sh7c8" event={"ID":"fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91","Type":"ContainerStarted","Data":"d46abf622d038618ca2e56c8ba50c8df50e7f199364c722c3c72d53324ea811a"} Mar 20 09:00:05 crc kubenswrapper[5136]: I0320 09:00:05.918614 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.951500733 podStartE2EDuration="4.918590414s" podCreationTimestamp="2026-03-20 09:00:01 +0000 UTC" firstStartedPulling="2026-03-20 09:00:02.698040585 +0000 UTC m=+7834.957351726" lastFinishedPulling="2026-03-20 09:00:04.665130256 +0000 UTC m=+7836.924441407" observedRunningTime="2026-03-20 09:00:05.917016235 +0000 UTC m=+7838.176327386" watchObservedRunningTime="2026-03-20 09:00:05.918590414 +0000 UTC m=+7838.177901565" Mar 20 09:00:05 crc kubenswrapper[5136]: I0320 09:00:05.933297 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566620-sh7c8" podStartSLOduration=1.986246926 podStartE2EDuration="5.933281138s" podCreationTimestamp="2026-03-20 09:00:00 +0000 UTC" firstStartedPulling="2026-03-20 09:00:01.185254358 +0000 UTC m=+7833.444565509" lastFinishedPulling="2026-03-20 09:00:05.13228857 +0000 UTC m=+7837.391599721" observedRunningTime="2026-03-20 09:00:05.930145052 +0000 UTC m=+7838.189456203" watchObservedRunningTime="2026-03-20 09:00:05.933281138 +0000 UTC m=+7838.192592289" Mar 20 09:00:06 crc kubenswrapper[5136]: I0320 09:00:06.908405 5136 generic.go:334] "Generic (PLEG): container finished" podID="fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91" containerID="d46abf622d038618ca2e56c8ba50c8df50e7f199364c722c3c72d53324ea811a" exitCode=0 Mar 20 09:00:06 crc kubenswrapper[5136]: I0320 09:00:06.908494 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566620-sh7c8" 
event={"ID":"fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91","Type":"ContainerDied","Data":"d46abf622d038618ca2e56c8ba50c8df50e7f199364c722c3c72d53324ea811a"} Mar 20 09:00:06 crc kubenswrapper[5136]: I0320 09:00:06.953288 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.043631 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-tr2s5"] Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.054033 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-6249-account-create-update-mrh6x"] Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.065064 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-tr2s5"] Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.073376 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-6249-account-create-update-mrh6x"] Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.252349 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566620-sh7c8" Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.368454 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw2vk\" (UniqueName: \"kubernetes.io/projected/fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91-kube-api-access-mw2vk\") pod \"fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91\" (UID: \"fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91\") " Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.376122 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91-kube-api-access-mw2vk" (OuterVolumeSpecName: "kube-api-access-mw2vk") pod "fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91" (UID: "fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91"). InnerVolumeSpecName "kube-api-access-mw2vk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.405087 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 09:00:08 crc kubenswrapper[5136]: E0320 09:00:08.405740 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.411164 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="570ecd59-555d-4f55-aed1-6fe547da30b1" path="/var/lib/kubelet/pods/570ecd59-555d-4f55-aed1-6fe547da30b1/volumes" Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.411969 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd07221a-a5f4-4a47-a7bf-354b0d432b27" path="/var/lib/kubelet/pods/fd07221a-a5f4-4a47-a7bf-354b0d432b27/volumes" Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.471215 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw2vk\" (UniqueName: \"kubernetes.io/projected/fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91-kube-api-access-mw2vk\") on node \"crc\" DevicePath \"\"" Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.929445 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566620-sh7c8" event={"ID":"fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91","Type":"ContainerDied","Data":"6d284284512c2c1cff2dd9339264f2f06fd3e69ba0decda4bba7cddddc09cd1e"} Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.929467 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566620-sh7c8" Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.929487 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d284284512c2c1cff2dd9339264f2f06fd3e69ba0decda4bba7cddddc09cd1e" Mar 20 09:00:09 crc kubenswrapper[5136]: I0320 09:00:09.307615 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566614-wvfxr"] Mar 20 09:00:09 crc kubenswrapper[5136]: I0320 09:00:09.317589 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566614-wvfxr"] Mar 20 09:00:10 crc kubenswrapper[5136]: I0320 09:00:10.406772 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="746f2ae5-dabf-431a-b344-011a75049862" path="/var/lib/kubelet/pods/746f2ae5-dabf-431a-b344-011a75049862/volumes" Mar 20 09:00:20 crc kubenswrapper[5136]: I0320 09:00:20.397849 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 09:00:21 crc kubenswrapper[5136]: I0320 09:00:21.066177 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"052911170bf346d7ceda8571bf74edeeb05f27214bc5f82c24d971afe343a42b"} Mar 20 09:00:32 crc kubenswrapper[5136]: I0320 09:00:32.029207 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-dlmp5"] Mar 20 09:00:32 crc kubenswrapper[5136]: I0320 09:00:32.040759 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-dlmp5"] Mar 20 09:00:32 crc kubenswrapper[5136]: I0320 09:00:32.408488 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0757343-a168-444b-ab9f-eb32dc3e416a" path="/var/lib/kubelet/pods/d0757343-a168-444b-ab9f-eb32dc3e416a/volumes" Mar 20 09:00:41 crc 
kubenswrapper[5136]: I0320 09:00:41.421116 5136 scope.go:117] "RemoveContainer" containerID="39d097d4e3a8458b775ea906bb0dd550fdd83b3369518a3cd12d9c26c24a8a02" Mar 20 09:00:41 crc kubenswrapper[5136]: I0320 09:00:41.467995 5136 scope.go:117] "RemoveContainer" containerID="d7c966f182c94b6eabaca701ac9e2f115b1d66510a14ffb108fa112317b9c2d8" Mar 20 09:00:41 crc kubenswrapper[5136]: I0320 09:00:41.537903 5136 scope.go:117] "RemoveContainer" containerID="ea4b393a20ea1ece97f36d015f4602f5e94b839ad564485cfca078d956bd0138" Mar 20 09:00:41 crc kubenswrapper[5136]: I0320 09:00:41.629710 5136 scope.go:117] "RemoveContainer" containerID="62b91ae766226b0da7fe114136196e5dea194bad90be0b48d6f9d8c6e4102b25" Mar 20 09:00:41 crc kubenswrapper[5136]: I0320 09:00:41.648446 5136 scope.go:117] "RemoveContainer" containerID="b8341630a66939232813fa3ca2eab063f076fc3ac4ee1803ba6693cd8bb7a98d" Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.152140 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29566621-n7g7j"] Mar 20 09:01:00 crc kubenswrapper[5136]: E0320 09:01:00.159955 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="927cb714-a185-49ad-a263-0d750b85ca34" containerName="collect-profiles" Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.159983 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="927cb714-a185-49ad-a263-0d750b85ca34" containerName="collect-profiles" Mar 20 09:01:00 crc kubenswrapper[5136]: E0320 09:01:00.160014 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91" containerName="oc" Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.160021 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91" containerName="oc" Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.160252 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91" containerName="oc" Mar 
20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.160274 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="927cb714-a185-49ad-a263-0d750b85ca34" containerName="collect-profiles" Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.161075 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29566621-n7g7j" Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.169889 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29566621-n7g7j"] Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.262752 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-config-data\") pod \"keystone-cron-29566621-n7g7j\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " pod="openstack/keystone-cron-29566621-n7g7j" Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.262833 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx99t\" (UniqueName: \"kubernetes.io/projected/8494da27-4688-4c23-b4bd-77a8cac9ae31-kube-api-access-hx99t\") pod \"keystone-cron-29566621-n7g7j\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " pod="openstack/keystone-cron-29566621-n7g7j" Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.262913 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-fernet-keys\") pod \"keystone-cron-29566621-n7g7j\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " pod="openstack/keystone-cron-29566621-n7g7j" Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.262946 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-combined-ca-bundle\") pod \"keystone-cron-29566621-n7g7j\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " pod="openstack/keystone-cron-29566621-n7g7j" Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.365460 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-config-data\") pod \"keystone-cron-29566621-n7g7j\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " pod="openstack/keystone-cron-29566621-n7g7j" Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.365576 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx99t\" (UniqueName: \"kubernetes.io/projected/8494da27-4688-4c23-b4bd-77a8cac9ae31-kube-api-access-hx99t\") pod \"keystone-cron-29566621-n7g7j\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " pod="openstack/keystone-cron-29566621-n7g7j" Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.365657 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-fernet-keys\") pod \"keystone-cron-29566621-n7g7j\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " pod="openstack/keystone-cron-29566621-n7g7j" Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.365698 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-combined-ca-bundle\") pod \"keystone-cron-29566621-n7g7j\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " pod="openstack/keystone-cron-29566621-n7g7j" Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.376864 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-config-data\") pod \"keystone-cron-29566621-n7g7j\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " pod="openstack/keystone-cron-29566621-n7g7j" Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.383829 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-combined-ca-bundle\") pod \"keystone-cron-29566621-n7g7j\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " pod="openstack/keystone-cron-29566621-n7g7j" Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.388568 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-fernet-keys\") pod \"keystone-cron-29566621-n7g7j\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " pod="openstack/keystone-cron-29566621-n7g7j" Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.397430 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx99t\" (UniqueName: \"kubernetes.io/projected/8494da27-4688-4c23-b4bd-77a8cac9ae31-kube-api-access-hx99t\") pod \"keystone-cron-29566621-n7g7j\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " pod="openstack/keystone-cron-29566621-n7g7j" Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.496784 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29566621-n7g7j" Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.974070 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29566621-n7g7j"] Mar 20 09:01:01 crc kubenswrapper[5136]: I0320 09:01:01.426171 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29566621-n7g7j" event={"ID":"8494da27-4688-4c23-b4bd-77a8cac9ae31","Type":"ContainerStarted","Data":"e9acc33cb6ef33f971afc8e98aee5abda02ae0be42cbb3e0b4beb36ffafb1e4d"} Mar 20 09:01:01 crc kubenswrapper[5136]: I0320 09:01:01.426516 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29566621-n7g7j" event={"ID":"8494da27-4688-4c23-b4bd-77a8cac9ae31","Type":"ContainerStarted","Data":"94a7dd9b06f4da0a623eb78e6e371bb29bfd8600d2b285db8fb2d4c3213a82d4"} Mar 20 09:01:01 crc kubenswrapper[5136]: I0320 09:01:01.452839 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29566621-n7g7j" podStartSLOduration=1.452805858 podStartE2EDuration="1.452805858s" podCreationTimestamp="2026-03-20 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 09:01:01.445479992 +0000 UTC m=+7893.704791153" watchObservedRunningTime="2026-03-20 09:01:01.452805858 +0000 UTC m=+7893.712116999" Mar 20 09:01:03 crc kubenswrapper[5136]: I0320 09:01:03.056183 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-85wqc"] Mar 20 09:01:03 crc kubenswrapper[5136]: I0320 09:01:03.068280 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-85wqc"] Mar 20 09:01:04 crc kubenswrapper[5136]: I0320 09:01:04.043038 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-e4e3-account-create-update-htnkq"] Mar 20 09:01:04 crc kubenswrapper[5136]: I0320 09:01:04.053142 
5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-e4e3-account-create-update-htnkq"] Mar 20 09:01:04 crc kubenswrapper[5136]: I0320 09:01:04.408963 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2204982c-c8aa-4b18-a455-71915264f644" path="/var/lib/kubelet/pods/2204982c-c8aa-4b18-a455-71915264f644/volumes" Mar 20 09:01:04 crc kubenswrapper[5136]: I0320 09:01:04.409491 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b48a8f95-9236-458f-a8ab-fb15f6878172" path="/var/lib/kubelet/pods/b48a8f95-9236-458f-a8ab-fb15f6878172/volumes" Mar 20 09:01:04 crc kubenswrapper[5136]: I0320 09:01:04.453680 5136 generic.go:334] "Generic (PLEG): container finished" podID="8494da27-4688-4c23-b4bd-77a8cac9ae31" containerID="e9acc33cb6ef33f971afc8e98aee5abda02ae0be42cbb3e0b4beb36ffafb1e4d" exitCode=0 Mar 20 09:01:04 crc kubenswrapper[5136]: I0320 09:01:04.453719 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29566621-n7g7j" event={"ID":"8494da27-4688-4c23-b4bd-77a8cac9ae31","Type":"ContainerDied","Data":"e9acc33cb6ef33f971afc8e98aee5abda02ae0be42cbb3e0b4beb36ffafb1e4d"} Mar 20 09:01:05 crc kubenswrapper[5136]: I0320 09:01:05.842129 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29566621-n7g7j" Mar 20 09:01:05 crc kubenswrapper[5136]: I0320 09:01:05.978826 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx99t\" (UniqueName: \"kubernetes.io/projected/8494da27-4688-4c23-b4bd-77a8cac9ae31-kube-api-access-hx99t\") pod \"8494da27-4688-4c23-b4bd-77a8cac9ae31\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " Mar 20 09:01:05 crc kubenswrapper[5136]: I0320 09:01:05.978876 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-config-data\") pod \"8494da27-4688-4c23-b4bd-77a8cac9ae31\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " Mar 20 09:01:05 crc kubenswrapper[5136]: I0320 09:01:05.978959 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-combined-ca-bundle\") pod \"8494da27-4688-4c23-b4bd-77a8cac9ae31\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " Mar 20 09:01:05 crc kubenswrapper[5136]: I0320 09:01:05.979072 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-fernet-keys\") pod \"8494da27-4688-4c23-b4bd-77a8cac9ae31\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " Mar 20 09:01:05 crc kubenswrapper[5136]: I0320 09:01:05.986065 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8494da27-4688-4c23-b4bd-77a8cac9ae31" (UID: "8494da27-4688-4c23-b4bd-77a8cac9ae31"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:01:05 crc kubenswrapper[5136]: I0320 09:01:05.987201 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8494da27-4688-4c23-b4bd-77a8cac9ae31-kube-api-access-hx99t" (OuterVolumeSpecName: "kube-api-access-hx99t") pod "8494da27-4688-4c23-b4bd-77a8cac9ae31" (UID: "8494da27-4688-4c23-b4bd-77a8cac9ae31"). InnerVolumeSpecName "kube-api-access-hx99t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:01:06 crc kubenswrapper[5136]: I0320 09:01:06.008111 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8494da27-4688-4c23-b4bd-77a8cac9ae31" (UID: "8494da27-4688-4c23-b4bd-77a8cac9ae31"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:01:06 crc kubenswrapper[5136]: I0320 09:01:06.029672 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-config-data" (OuterVolumeSpecName: "config-data") pod "8494da27-4688-4c23-b4bd-77a8cac9ae31" (UID: "8494da27-4688-4c23-b4bd-77a8cac9ae31"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:01:06 crc kubenswrapper[5136]: I0320 09:01:06.082021 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx99t\" (UniqueName: \"kubernetes.io/projected/8494da27-4688-4c23-b4bd-77a8cac9ae31-kube-api-access-hx99t\") on node \"crc\" DevicePath \"\"" Mar 20 09:01:06 crc kubenswrapper[5136]: I0320 09:01:06.082060 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:01:06 crc kubenswrapper[5136]: I0320 09:01:06.082070 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:01:06 crc kubenswrapper[5136]: I0320 09:01:06.082078 5136 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 20 09:01:06 crc kubenswrapper[5136]: I0320 09:01:06.472267 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29566621-n7g7j" event={"ID":"8494da27-4688-4c23-b4bd-77a8cac9ae31","Type":"ContainerDied","Data":"94a7dd9b06f4da0a623eb78e6e371bb29bfd8600d2b285db8fb2d4c3213a82d4"} Mar 20 09:01:06 crc kubenswrapper[5136]: I0320 09:01:06.472314 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94a7dd9b06f4da0a623eb78e6e371bb29bfd8600d2b285db8fb2d4c3213a82d4" Mar 20 09:01:06 crc kubenswrapper[5136]: I0320 09:01:06.472386 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29566621-n7g7j" Mar 20 09:01:14 crc kubenswrapper[5136]: I0320 09:01:14.029684 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-2d5zx"] Mar 20 09:01:14 crc kubenswrapper[5136]: I0320 09:01:14.038837 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-2d5zx"] Mar 20 09:01:14 crc kubenswrapper[5136]: I0320 09:01:14.424050 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a2341fa-02fc-4b08-a2a4-2272078db5d9" path="/var/lib/kubelet/pods/6a2341fa-02fc-4b08-a2a4-2272078db5d9/volumes" Mar 20 09:01:41 crc kubenswrapper[5136]: I0320 09:01:41.847133 5136 scope.go:117] "RemoveContainer" containerID="ab11099fc5fbb9ac6e0c34feae9a40a9addc504e685cccc5bc8ac39fbfb3793c" Mar 20 09:01:41 crc kubenswrapper[5136]: I0320 09:01:41.881238 5136 scope.go:117] "RemoveContainer" containerID="f2ac24f272d6a9df55f1b17c9f403e8fc1875096d56818de2641768b249208a8" Mar 20 09:01:41 crc kubenswrapper[5136]: I0320 09:01:41.927798 5136 scope.go:117] "RemoveContainer" containerID="97846ae11696236889350b3c9161e329e32ca1f71469f4c6bc5cd1b32b64434b" Mar 20 09:02:00 crc kubenswrapper[5136]: I0320 09:02:00.184115 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566622-tbs2b"] Mar 20 09:02:00 crc kubenswrapper[5136]: E0320 09:02:00.186075 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8494da27-4688-4c23-b4bd-77a8cac9ae31" containerName="keystone-cron" Mar 20 09:02:00 crc kubenswrapper[5136]: I0320 09:02:00.186093 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8494da27-4688-4c23-b4bd-77a8cac9ae31" containerName="keystone-cron" Mar 20 09:02:00 crc kubenswrapper[5136]: I0320 09:02:00.186403 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="8494da27-4688-4c23-b4bd-77a8cac9ae31" containerName="keystone-cron" Mar 20 09:02:00 crc kubenswrapper[5136]: I0320 
09:02:00.187375 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566622-tbs2b" Mar 20 09:02:00 crc kubenswrapper[5136]: I0320 09:02:00.193804 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 09:02:00 crc kubenswrapper[5136]: I0320 09:02:00.194033 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 09:02:00 crc kubenswrapper[5136]: I0320 09:02:00.194061 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 09:02:00 crc kubenswrapper[5136]: I0320 09:02:00.221687 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566622-tbs2b"] Mar 20 09:02:00 crc kubenswrapper[5136]: I0320 09:02:00.306918 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqxfn\" (UniqueName: \"kubernetes.io/projected/eeb9dd63-3112-441b-961e-b61a752527d8-kube-api-access-zqxfn\") pod \"auto-csr-approver-29566622-tbs2b\" (UID: \"eeb9dd63-3112-441b-961e-b61a752527d8\") " pod="openshift-infra/auto-csr-approver-29566622-tbs2b" Mar 20 09:02:00 crc kubenswrapper[5136]: I0320 09:02:00.409204 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqxfn\" (UniqueName: \"kubernetes.io/projected/eeb9dd63-3112-441b-961e-b61a752527d8-kube-api-access-zqxfn\") pod \"auto-csr-approver-29566622-tbs2b\" (UID: \"eeb9dd63-3112-441b-961e-b61a752527d8\") " pod="openshift-infra/auto-csr-approver-29566622-tbs2b" Mar 20 09:02:00 crc kubenswrapper[5136]: I0320 09:02:00.429869 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqxfn\" (UniqueName: \"kubernetes.io/projected/eeb9dd63-3112-441b-961e-b61a752527d8-kube-api-access-zqxfn\") pod 
\"auto-csr-approver-29566622-tbs2b\" (UID: \"eeb9dd63-3112-441b-961e-b61a752527d8\") " pod="openshift-infra/auto-csr-approver-29566622-tbs2b" Mar 20 09:02:00 crc kubenswrapper[5136]: I0320 09:02:00.524404 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566622-tbs2b" Mar 20 09:02:00 crc kubenswrapper[5136]: I0320 09:02:00.989693 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566622-tbs2b"] Mar 20 09:02:02 crc kubenswrapper[5136]: I0320 09:02:02.047185 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566622-tbs2b" event={"ID":"eeb9dd63-3112-441b-961e-b61a752527d8","Type":"ContainerStarted","Data":"d159fe0e121c1ae70e920c78ea84976d7cd9710b606eec4bbf2fef223e223281"} Mar 20 09:02:03 crc kubenswrapper[5136]: I0320 09:02:03.058999 5136 generic.go:334] "Generic (PLEG): container finished" podID="eeb9dd63-3112-441b-961e-b61a752527d8" containerID="d110e85766974db9b00f23e4ec0b43a5d95e3bc9caa9f95ded6497351baab885" exitCode=0 Mar 20 09:02:03 crc kubenswrapper[5136]: I0320 09:02:03.059151 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566622-tbs2b" event={"ID":"eeb9dd63-3112-441b-961e-b61a752527d8","Type":"ContainerDied","Data":"d110e85766974db9b00f23e4ec0b43a5d95e3bc9caa9f95ded6497351baab885"} Mar 20 09:02:04 crc kubenswrapper[5136]: I0320 09:02:04.425400 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566622-tbs2b" Mar 20 09:02:04 crc kubenswrapper[5136]: I0320 09:02:04.595008 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqxfn\" (UniqueName: \"kubernetes.io/projected/eeb9dd63-3112-441b-961e-b61a752527d8-kube-api-access-zqxfn\") pod \"eeb9dd63-3112-441b-961e-b61a752527d8\" (UID: \"eeb9dd63-3112-441b-961e-b61a752527d8\") " Mar 20 09:02:04 crc kubenswrapper[5136]: I0320 09:02:04.602228 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeb9dd63-3112-441b-961e-b61a752527d8-kube-api-access-zqxfn" (OuterVolumeSpecName: "kube-api-access-zqxfn") pod "eeb9dd63-3112-441b-961e-b61a752527d8" (UID: "eeb9dd63-3112-441b-961e-b61a752527d8"). InnerVolumeSpecName "kube-api-access-zqxfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:04 crc kubenswrapper[5136]: I0320 09:02:04.697758 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqxfn\" (UniqueName: \"kubernetes.io/projected/eeb9dd63-3112-441b-961e-b61a752527d8-kube-api-access-zqxfn\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:05 crc kubenswrapper[5136]: I0320 09:02:05.077861 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566622-tbs2b" event={"ID":"eeb9dd63-3112-441b-961e-b61a752527d8","Type":"ContainerDied","Data":"d159fe0e121c1ae70e920c78ea84976d7cd9710b606eec4bbf2fef223e223281"} Mar 20 09:02:05 crc kubenswrapper[5136]: I0320 09:02:05.077908 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d159fe0e121c1ae70e920c78ea84976d7cd9710b606eec4bbf2fef223e223281" Mar 20 09:02:05 crc kubenswrapper[5136]: I0320 09:02:05.077951 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566622-tbs2b" Mar 20 09:02:05 crc kubenswrapper[5136]: I0320 09:02:05.508791 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566616-x4rw6"] Mar 20 09:02:05 crc kubenswrapper[5136]: I0320 09:02:05.520995 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566616-x4rw6"] Mar 20 09:02:06 crc kubenswrapper[5136]: I0320 09:02:06.408058 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c085dee-ef7e-47eb-93aa-6ecf4d45030c" path="/var/lib/kubelet/pods/0c085dee-ef7e-47eb-93aa-6ecf4d45030c/volumes" Mar 20 09:02:12 crc kubenswrapper[5136]: I0320 09:02:12.052392 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-plxtl"] Mar 20 09:02:12 crc kubenswrapper[5136]: I0320 09:02:12.069959 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-plxtl"] Mar 20 09:02:12 crc kubenswrapper[5136]: I0320 09:02:12.408360 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="901ef065-f425-4ab7-b726-7d98704a58f8" path="/var/lib/kubelet/pods/901ef065-f425-4ab7-b726-7d98704a58f8/volumes" Mar 20 09:02:13 crc kubenswrapper[5136]: I0320 09:02:13.055202 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-hkzk7"] Mar 20 09:02:13 crc kubenswrapper[5136]: I0320 09:02:13.071763 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-m289f"] Mar 20 09:02:13 crc kubenswrapper[5136]: I0320 09:02:13.080193 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-e664-account-create-update-42278"] Mar 20 09:02:13 crc kubenswrapper[5136]: I0320 09:02:13.088503 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-hkzk7"] Mar 20 09:02:13 crc kubenswrapper[5136]: I0320 09:02:13.096588 5136 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-api-c7dc-account-create-update-6rchx"] Mar 20 09:02:13 crc kubenswrapper[5136]: I0320 09:02:13.105693 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-adbe-account-create-update-bp9vg"] Mar 20 09:02:13 crc kubenswrapper[5136]: I0320 09:02:13.114079 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-m289f"] Mar 20 09:02:13 crc kubenswrapper[5136]: I0320 09:02:13.122001 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-c7dc-account-create-update-6rchx"] Mar 20 09:02:13 crc kubenswrapper[5136]: I0320 09:02:13.130146 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-e664-account-create-update-42278"] Mar 20 09:02:13 crc kubenswrapper[5136]: I0320 09:02:13.139598 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-adbe-account-create-update-bp9vg"] Mar 20 09:02:14 crc kubenswrapper[5136]: I0320 09:02:14.410913 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60ddf395-2544-4ebe-b1e2-37321af6438e" path="/var/lib/kubelet/pods/60ddf395-2544-4ebe-b1e2-37321af6438e/volumes" Mar 20 09:02:14 crc kubenswrapper[5136]: I0320 09:02:14.411972 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d18b334-bb20-43b9-8322-c2e847b74703" path="/var/lib/kubelet/pods/7d18b334-bb20-43b9-8322-c2e847b74703/volumes" Mar 20 09:02:14 crc kubenswrapper[5136]: I0320 09:02:14.412683 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4" path="/var/lib/kubelet/pods/a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4/volumes" Mar 20 09:02:14 crc kubenswrapper[5136]: I0320 09:02:14.413369 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a725d785-3630-4adc-8417-15fceaecb250" path="/var/lib/kubelet/pods/a725d785-3630-4adc-8417-15fceaecb250/volumes" Mar 20 09:02:14 crc 
kubenswrapper[5136]: I0320 09:02:14.414633 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d573f1ae-c37f-487a-a059-5200647084d4" path="/var/lib/kubelet/pods/d573f1ae-c37f-487a-a059-5200647084d4/volumes" Mar 20 09:02:31 crc kubenswrapper[5136]: I0320 09:02:31.032226 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4m6bk"] Mar 20 09:02:31 crc kubenswrapper[5136]: I0320 09:02:31.043713 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4m6bk"] Mar 20 09:02:32 crc kubenswrapper[5136]: I0320 09:02:32.436286 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2" path="/var/lib/kubelet/pods/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2/volumes" Mar 20 09:02:42 crc kubenswrapper[5136]: I0320 09:02:42.021389 5136 scope.go:117] "RemoveContainer" containerID="62941df7329d036b75c1f4c804a7915f68955eff793a634ef29d9182d34a9d9d" Mar 20 09:02:42 crc kubenswrapper[5136]: I0320 09:02:42.051040 5136 scope.go:117] "RemoveContainer" containerID="f77e438e3702b6de098fbd305814d9a4eb3df2f7161e741a8bd1bf247fc8becb" Mar 20 09:02:42 crc kubenswrapper[5136]: I0320 09:02:42.100942 5136 scope.go:117] "RemoveContainer" containerID="cee664fd2e2a84523a4d0f3b3405435f0b03db0425ff048065d98c5612016681" Mar 20 09:02:42 crc kubenswrapper[5136]: I0320 09:02:42.141247 5136 scope.go:117] "RemoveContainer" containerID="6de711a276196e50b3e83c58fdab583ad6f7407fccb722557535f82c9abd51a7" Mar 20 09:02:42 crc kubenswrapper[5136]: I0320 09:02:42.193039 5136 scope.go:117] "RemoveContainer" containerID="c40db219321d83dc20c2ac8a7868d46f48eda3e65baae9eafe26d50e1df17298" Mar 20 09:02:42 crc kubenswrapper[5136]: I0320 09:02:42.243379 5136 scope.go:117] "RemoveContainer" containerID="d695b9c2dbcf5b99f4e58724aa314335827d63b932809de7ba7a6c3af214ccca" Mar 20 09:02:42 crc kubenswrapper[5136]: I0320 09:02:42.302647 5136 scope.go:117] "RemoveContainer" 
containerID="fd6a9d8c42b14afc4c799021ad9e86afa50559ab2987d12455e53497c38a9c98" Mar 20 09:02:42 crc kubenswrapper[5136]: I0320 09:02:42.343778 5136 scope.go:117] "RemoveContainer" containerID="7b5e974893ba339d7d465bcbfaf4888d7a35fa993cb39d96df7bf3535b3e030c" Mar 20 09:02:45 crc kubenswrapper[5136]: I0320 09:02:45.047986 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bknwr"] Mar 20 09:02:45 crc kubenswrapper[5136]: I0320 09:02:45.057868 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bknwr"] Mar 20 09:02:45 crc kubenswrapper[5136]: I0320 09:02:45.822622 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 09:02:45 crc kubenswrapper[5136]: I0320 09:02:45.822709 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 09:02:46 crc kubenswrapper[5136]: I0320 09:02:46.043087 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-mdczc"] Mar 20 09:02:46 crc kubenswrapper[5136]: I0320 09:02:46.057322 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-mdczc"] Mar 20 09:02:46 crc kubenswrapper[5136]: I0320 09:02:46.411261 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0869b44d-0a1b-47ae-9836-8940a31bfcf3" path="/var/lib/kubelet/pods/0869b44d-0a1b-47ae-9836-8940a31bfcf3/volumes" Mar 20 09:02:46 crc kubenswrapper[5136]: I0320 09:02:46.428203 5136 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c10383e2-004c-458c-922b-dd13574f12ff" path="/var/lib/kubelet/pods/c10383e2-004c-458c-922b-dd13574f12ff/volumes" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.299530 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-dvqsp"] Mar 20 09:02:49 crc kubenswrapper[5136]: E0320 09:02:49.300651 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb9dd63-3112-441b-961e-b61a752527d8" containerName="oc" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.300672 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb9dd63-3112-441b-961e-b61a752527d8" containerName="oc" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.300947 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeb9dd63-3112-441b-961e-b61a752527d8" containerName="oc" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.301898 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dvqsp" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.323933 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.329631 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-24c6-account-create-update-48knr"] Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.331311 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-24c6-account-create-update-48knr" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.335926 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.371526 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-24c6-account-create-update-48knr"] Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.382649 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6756\" (UniqueName: \"kubernetes.io/projected/fe703c94-1aec-47a6-81a7-8510ed330866-kube-api-access-b6756\") pod \"neutron-24c6-account-create-update-48knr\" (UID: \"fe703c94-1aec-47a6-81a7-8510ed330866\") " pod="openstack/neutron-24c6-account-create-update-48knr" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.382769 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c948k\" (UniqueName: \"kubernetes.io/projected/7660b6b5-094d-4da5-9d34-fe85c863d887-kube-api-access-c948k\") pod \"root-account-create-update-dvqsp\" (UID: \"7660b6b5-094d-4da5-9d34-fe85c863d887\") " pod="openstack/root-account-create-update-dvqsp" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.382838 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe703c94-1aec-47a6-81a7-8510ed330866-operator-scripts\") pod \"neutron-24c6-account-create-update-48knr\" (UID: \"fe703c94-1aec-47a6-81a7-8510ed330866\") " pod="openstack/neutron-24c6-account-create-update-48knr" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.382892 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts\") pod \"root-account-create-update-dvqsp\" (UID: \"7660b6b5-094d-4da5-9d34-fe85c863d887\") " pod="openstack/root-account-create-update-dvqsp" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.437495 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dvqsp"] Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.486209 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6756\" (UniqueName: \"kubernetes.io/projected/fe703c94-1aec-47a6-81a7-8510ed330866-kube-api-access-b6756\") pod \"neutron-24c6-account-create-update-48knr\" (UID: \"fe703c94-1aec-47a6-81a7-8510ed330866\") " pod="openstack/neutron-24c6-account-create-update-48knr" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.486373 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c948k\" (UniqueName: \"kubernetes.io/projected/7660b6b5-094d-4da5-9d34-fe85c863d887-kube-api-access-c948k\") pod \"root-account-create-update-dvqsp\" (UID: \"7660b6b5-094d-4da5-9d34-fe85c863d887\") " pod="openstack/root-account-create-update-dvqsp" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.486457 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe703c94-1aec-47a6-81a7-8510ed330866-operator-scripts\") pod \"neutron-24c6-account-create-update-48knr\" (UID: \"fe703c94-1aec-47a6-81a7-8510ed330866\") " pod="openstack/neutron-24c6-account-create-update-48knr" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.486553 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts\") pod \"root-account-create-update-dvqsp\" (UID: \"7660b6b5-094d-4da5-9d34-fe85c863d887\") " 
pod="openstack/root-account-create-update-dvqsp" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.487575 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts\") pod \"root-account-create-update-dvqsp\" (UID: \"7660b6b5-094d-4da5-9d34-fe85c863d887\") " pod="openstack/root-account-create-update-dvqsp" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.489011 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe703c94-1aec-47a6-81a7-8510ed330866-operator-scripts\") pod \"neutron-24c6-account-create-update-48knr\" (UID: \"fe703c94-1aec-47a6-81a7-8510ed330866\") " pod="openstack/neutron-24c6-account-create-update-48knr" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.541410 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b8c9-account-create-update-hh6pb"] Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.543377 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b8c9-account-create-update-hh6pb" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.544605 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6756\" (UniqueName: \"kubernetes.io/projected/fe703c94-1aec-47a6-81a7-8510ed330866-kube-api-access-b6756\") pod \"neutron-24c6-account-create-update-48knr\" (UID: \"fe703c94-1aec-47a6-81a7-8510ed330866\") " pod="openstack/neutron-24c6-account-create-update-48knr" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.551158 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.648994 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.649245 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="9cefd58c-a889-4893-aa87-b106eae1c7ad" containerName="openstackclient" containerID="cri-o://80cd4447da5c15081099a2322e91b181cd5f6d15be6bc9e16f91a567a1b6f423" gracePeriod=2 Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.690554 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c948k\" (UniqueName: \"kubernetes.io/projected/7660b6b5-094d-4da5-9d34-fe85c863d887-kube-api-access-c948k\") pod \"root-account-create-update-dvqsp\" (UID: \"7660b6b5-094d-4da5-9d34-fe85c863d887\") " pod="openstack/root-account-create-update-dvqsp" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.697146 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-475lv\" (UniqueName: \"kubernetes.io/projected/536a487a-ae23-4eed-9bc8-221a9b85bed4-kube-api-access-475lv\") pod \"cinder-b8c9-account-create-update-hh6pb\" (UID: \"536a487a-ae23-4eed-9bc8-221a9b85bed4\") " 
pod="openstack/cinder-b8c9-account-create-update-hh6pb" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.697254 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/536a487a-ae23-4eed-9bc8-221a9b85bed4-operator-scripts\") pod \"cinder-b8c9-account-create-update-hh6pb\" (UID: \"536a487a-ae23-4eed-9bc8-221a9b85bed4\") " pod="openstack/cinder-b8c9-account-create-update-hh6pb" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.708745 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-24c6-account-create-update-48knr" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.801696 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-475lv\" (UniqueName: \"kubernetes.io/projected/536a487a-ae23-4eed-9bc8-221a9b85bed4-kube-api-access-475lv\") pod \"cinder-b8c9-account-create-update-hh6pb\" (UID: \"536a487a-ae23-4eed-9bc8-221a9b85bed4\") " pod="openstack/cinder-b8c9-account-create-update-hh6pb" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.801806 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/536a487a-ae23-4eed-9bc8-221a9b85bed4-operator-scripts\") pod \"cinder-b8c9-account-create-update-hh6pb\" (UID: \"536a487a-ae23-4eed-9bc8-221a9b85bed4\") " pod="openstack/cinder-b8c9-account-create-update-hh6pb" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.802717 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/536a487a-ae23-4eed-9bc8-221a9b85bed4-operator-scripts\") pod \"cinder-b8c9-account-create-update-hh6pb\" (UID: \"536a487a-ae23-4eed-9bc8-221a9b85bed4\") " pod="openstack/cinder-b8c9-account-create-update-hh6pb" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.859085 
5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b8c9-account-create-update-hh6pb"] Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.879340 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-475lv\" (UniqueName: \"kubernetes.io/projected/536a487a-ae23-4eed-9bc8-221a9b85bed4-kube-api-access-475lv\") pod \"cinder-b8c9-account-create-update-hh6pb\" (UID: \"536a487a-ae23-4eed-9bc8-221a9b85bed4\") " pod="openstack/cinder-b8c9-account-create-update-hh6pb" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.897905 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.925501 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dvqsp" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.002521 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-6249-account-create-update-mtgp6"] Mar 20 09:02:50 crc kubenswrapper[5136]: E0320 09:02:50.003169 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cefd58c-a889-4893-aa87-b106eae1c7ad" containerName="openstackclient" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.003194 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cefd58c-a889-4893-aa87-b106eae1c7ad" containerName="openstackclient" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.003429 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cefd58c-a889-4893-aa87-b106eae1c7ad" containerName="openstackclient" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.004426 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-6249-account-create-update-mtgp6" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.019226 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.020637 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b8c9-account-create-update-hh6pb" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.051019 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6249-account-create-update-mtgp6"] Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.096558 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-c7dc-account-create-update-bslnf"] Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.098626 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c7dc-account-create-update-bslnf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.109970 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcj6l\" (UniqueName: \"kubernetes.io/projected/bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1-kube-api-access-hcj6l\") pod \"glance-6249-account-create-update-mtgp6\" (UID: \"bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1\") " pod="openstack/glance-6249-account-create-update-mtgp6" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.110100 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1-operator-scripts\") pod \"glance-6249-account-create-update-mtgp6\" (UID: \"bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1\") " pod="openstack/glance-6249-account-create-update-mtgp6" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.110551 5136 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"nova-api-db-secret" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.147866 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c7dc-account-create-update-bslnf"] Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.195099 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-adbe-account-create-update-b276s"] Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.196737 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-adbe-account-create-update-b276s" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.207305 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.211754 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/254505fd-2596-4c4a-bf0a-2565e8b3ae5c-operator-scripts\") pod \"nova-api-c7dc-account-create-update-bslnf\" (UID: \"254505fd-2596-4c4a-bf0a-2565e8b3ae5c\") " pod="openstack/nova-api-c7dc-account-create-update-bslnf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.211833 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1-operator-scripts\") pod \"glance-6249-account-create-update-mtgp6\" (UID: \"bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1\") " pod="openstack/glance-6249-account-create-update-mtgp6" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.211913 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfzdq\" (UniqueName: \"kubernetes.io/projected/254505fd-2596-4c4a-bf0a-2565e8b3ae5c-kube-api-access-rfzdq\") pod \"nova-api-c7dc-account-create-update-bslnf\" (UID: \"254505fd-2596-4c4a-bf0a-2565e8b3ae5c\") " 
pod="openstack/nova-api-c7dc-account-create-update-bslnf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.211959 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcj6l\" (UniqueName: \"kubernetes.io/projected/bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1-kube-api-access-hcj6l\") pod \"glance-6249-account-create-update-mtgp6\" (UID: \"bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1\") " pod="openstack/glance-6249-account-create-update-mtgp6" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.215292 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1-operator-scripts\") pod \"glance-6249-account-create-update-mtgp6\" (UID: \"bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1\") " pod="openstack/glance-6249-account-create-update-mtgp6" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.238174 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-adbe-account-create-update-b276s"] Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.266442 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcj6l\" (UniqueName: \"kubernetes.io/projected/bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1-kube-api-access-hcj6l\") pod \"glance-6249-account-create-update-mtgp6\" (UID: \"bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1\") " pod="openstack/glance-6249-account-create-update-mtgp6" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.275512 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-e664-account-create-update-fb9gm"] Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.277065 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e664-account-create-update-fb9gm" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.286501 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.315671 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d5ec1f6-0809-4582-902e-00638e6e4580-operator-scripts\") pod \"nova-cell1-e664-account-create-update-fb9gm\" (UID: \"6d5ec1f6-0809-4582-902e-00638e6e4580\") " pod="openstack/nova-cell1-e664-account-create-update-fb9gm" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.315721 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj8b7\" (UniqueName: \"kubernetes.io/projected/6d5ec1f6-0809-4582-902e-00638e6e4580-kube-api-access-jj8b7\") pod \"nova-cell1-e664-account-create-update-fb9gm\" (UID: \"6d5ec1f6-0809-4582-902e-00638e6e4580\") " pod="openstack/nova-cell1-e664-account-create-update-fb9gm" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.315765 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/254505fd-2596-4c4a-bf0a-2565e8b3ae5c-operator-scripts\") pod \"nova-api-c7dc-account-create-update-bslnf\" (UID: \"254505fd-2596-4c4a-bf0a-2565e8b3ae5c\") " pod="openstack/nova-api-c7dc-account-create-update-bslnf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.315822 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgnb2\" (UniqueName: \"kubernetes.io/projected/c532fd14-6718-4c7d-9e38-c68bf7b2da6b-kube-api-access-sgnb2\") pod \"nova-cell0-adbe-account-create-update-b276s\" (UID: \"c532fd14-6718-4c7d-9e38-c68bf7b2da6b\") " 
pod="openstack/nova-cell0-adbe-account-create-update-b276s" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.315870 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c532fd14-6718-4c7d-9e38-c68bf7b2da6b-operator-scripts\") pod \"nova-cell0-adbe-account-create-update-b276s\" (UID: \"c532fd14-6718-4c7d-9e38-c68bf7b2da6b\") " pod="openstack/nova-cell0-adbe-account-create-update-b276s" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.315905 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfzdq\" (UniqueName: \"kubernetes.io/projected/254505fd-2596-4c4a-bf0a-2565e8b3ae5c-kube-api-access-rfzdq\") pod \"nova-api-c7dc-account-create-update-bslnf\" (UID: \"254505fd-2596-4c4a-bf0a-2565e8b3ae5c\") " pod="openstack/nova-api-c7dc-account-create-update-bslnf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.316770 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/254505fd-2596-4c4a-bf0a-2565e8b3ae5c-operator-scripts\") pod \"nova-api-c7dc-account-create-update-bslnf\" (UID: \"254505fd-2596-4c4a-bf0a-2565e8b3ae5c\") " pod="openstack/nova-api-c7dc-account-create-update-bslnf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.337313 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e664-account-create-update-fb9gm"] Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.392774 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-6249-account-create-update-mtgp6" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.410567 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfzdq\" (UniqueName: \"kubernetes.io/projected/254505fd-2596-4c4a-bf0a-2565e8b3ae5c-kube-api-access-rfzdq\") pod \"nova-api-c7dc-account-create-update-bslnf\" (UID: \"254505fd-2596-4c4a-bf0a-2565e8b3ae5c\") " pod="openstack/nova-api-c7dc-account-create-update-bslnf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.421004 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgnb2\" (UniqueName: \"kubernetes.io/projected/c532fd14-6718-4c7d-9e38-c68bf7b2da6b-kube-api-access-sgnb2\") pod \"nova-cell0-adbe-account-create-update-b276s\" (UID: \"c532fd14-6718-4c7d-9e38-c68bf7b2da6b\") " pod="openstack/nova-cell0-adbe-account-create-update-b276s" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.421077 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c532fd14-6718-4c7d-9e38-c68bf7b2da6b-operator-scripts\") pod \"nova-cell0-adbe-account-create-update-b276s\" (UID: \"c532fd14-6718-4c7d-9e38-c68bf7b2da6b\") " pod="openstack/nova-cell0-adbe-account-create-update-b276s" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.421188 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d5ec1f6-0809-4582-902e-00638e6e4580-operator-scripts\") pod \"nova-cell1-e664-account-create-update-fb9gm\" (UID: \"6d5ec1f6-0809-4582-902e-00638e6e4580\") " pod="openstack/nova-cell1-e664-account-create-update-fb9gm" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.421216 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj8b7\" (UniqueName: 
\"kubernetes.io/projected/6d5ec1f6-0809-4582-902e-00638e6e4580-kube-api-access-jj8b7\") pod \"nova-cell1-e664-account-create-update-fb9gm\" (UID: \"6d5ec1f6-0809-4582-902e-00638e6e4580\") " pod="openstack/nova-cell1-e664-account-create-update-fb9gm" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.422266 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c532fd14-6718-4c7d-9e38-c68bf7b2da6b-operator-scripts\") pod \"nova-cell0-adbe-account-create-update-b276s\" (UID: \"c532fd14-6718-4c7d-9e38-c68bf7b2da6b\") " pod="openstack/nova-cell0-adbe-account-create-update-b276s" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.422710 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d5ec1f6-0809-4582-902e-00638e6e4580-operator-scripts\") pod \"nova-cell1-e664-account-create-update-fb9gm\" (UID: \"6d5ec1f6-0809-4582-902e-00638e6e4580\") " pod="openstack/nova-cell1-e664-account-create-update-fb9gm" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.426985 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.457703 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj8b7\" (UniqueName: \"kubernetes.io/projected/6d5ec1f6-0809-4582-902e-00638e6e4580-kube-api-access-jj8b7\") pod \"nova-cell1-e664-account-create-update-fb9gm\" (UID: \"6d5ec1f6-0809-4582-902e-00638e6e4580\") " pod="openstack/nova-cell1-e664-account-create-update-fb9gm" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.461954 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c7dc-account-create-update-bslnf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.490551 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgnb2\" (UniqueName: \"kubernetes.io/projected/c532fd14-6718-4c7d-9e38-c68bf7b2da6b-kube-api-access-sgnb2\") pod \"nova-cell0-adbe-account-create-update-b276s\" (UID: \"c532fd14-6718-4c7d-9e38-c68bf7b2da6b\") " pod="openstack/nova-cell0-adbe-account-create-update-b276s" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.513970 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.514272 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="22659681-bc2b-4056-81d6-96b046e45712" containerName="ovn-northd" containerID="cri-o://491de9d04cd0e7b590564b9e33c6d2d10b2f1cc31960d95aac6f47504fb3ad0d" gracePeriod=30 Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.514642 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="22659681-bc2b-4056-81d6-96b046e45712" containerName="openstack-network-exporter" containerID="cri-o://43d8180654ac711b0a6c655f92be552ad6bb0d4e4426596385b695958afa2b74" gracePeriod=30 Mar 20 09:02:50 crc kubenswrapper[5136]: E0320 09:02:50.529042 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 20 09:02:50 crc kubenswrapper[5136]: E0320 09:02:50.554146 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data podName:e2c9ab46-3143-4472-a606-cd75def78f41 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:51.054114448 +0000 UTC m=+8003.313425599 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data") pod "rabbitmq-cell1-server-0" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41") : configmap "rabbitmq-cell1-config-data" not found Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.550375 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-adbe-account-create-update-b276s" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.549917 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-35ea-account-create-update-7d4sf"] Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.556542 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-35ea-account-create-update-7d4sf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.560962 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.633162 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e664-account-create-update-fb9gm" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.703362 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-35ea-account-create-update-7d4sf"] Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.736908 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdff16b6-0410-4448-a15c-3f22f5890d91-operator-scripts\") pod \"aodh-35ea-account-create-update-7d4sf\" (UID: \"bdff16b6-0410-4448-a15c-3f22f5890d91\") " pod="openstack/aodh-35ea-account-create-update-7d4sf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.736973 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfq5r\" (UniqueName: \"kubernetes.io/projected/bdff16b6-0410-4448-a15c-3f22f5890d91-kube-api-access-cfq5r\") pod \"aodh-35ea-account-create-update-7d4sf\" (UID: \"bdff16b6-0410-4448-a15c-3f22f5890d91\") " pod="openstack/aodh-35ea-account-create-update-7d4sf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.837175 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="d4bc380a-4852-40d3-b03d-67f762c778d3" containerName="galera" probeResult="failure" output="command timed out" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.852942 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfq5r\" (UniqueName: \"kubernetes.io/projected/bdff16b6-0410-4448-a15c-3f22f5890d91-kube-api-access-cfq5r\") pod \"aodh-35ea-account-create-update-7d4sf\" (UID: \"bdff16b6-0410-4448-a15c-3f22f5890d91\") " pod="openstack/aodh-35ea-account-create-update-7d4sf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.853455 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/bdff16b6-0410-4448-a15c-3f22f5890d91-operator-scripts\") pod \"aodh-35ea-account-create-update-7d4sf\" (UID: \"bdff16b6-0410-4448-a15c-3f22f5890d91\") " pod="openstack/aodh-35ea-account-create-update-7d4sf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.854283 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdff16b6-0410-4448-a15c-3f22f5890d91-operator-scripts\") pod \"aodh-35ea-account-create-update-7d4sf\" (UID: \"bdff16b6-0410-4448-a15c-3f22f5890d91\") " pod="openstack/aodh-35ea-account-create-update-7d4sf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.915796 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.937845 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="a276ba4e-bbab-4a83-8fd2-d77573782aa6" containerName="openstack-network-exporter" containerID="cri-o://adb0722e1140982d66b6bcc4b53d108b1a1da62c36d81757cff1dc3b6b31b52c" gracePeriod=300 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:50.963214 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-2"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:50.963647 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-2" podUID="5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" containerName="openstack-network-exporter" containerID="cri-o://8fc531019af740166b849284cf77209f3d16d2b70c219b2f80048bdb08d14be3" gracePeriod=300 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:50.981357 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfq5r\" (UniqueName: \"kubernetes.io/projected/bdff16b6-0410-4448-a15c-3f22f5890d91-kube-api-access-cfq5r\") pod \"aodh-35ea-account-create-update-7d4sf\" (UID: 
\"bdff16b6-0410-4448-a15c-3f22f5890d91\") " pod="openstack/aodh-35ea-account-create-update-7d4sf" Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.042712 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-1"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.043260 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-1" podUID="7e0c945f-6773-4bf8-872d-7eb5110de79f" containerName="openstack-network-exporter" containerID="cri-o://ff7aad450b5bf148e0d8e2a6a1a41eb2960ad7d591108755ada3cf41b5ab3619" gracePeriod=300 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.054954 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-35ea-account-create-update-7d4sf" Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.057676 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-35ea-account-create-update-6jb9f"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.069616 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-35ea-account-create-update-6jb9f"] Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.089700 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.089782 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data podName:e2c9ab46-3143-4472-a606-cd75def78f41 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:52.089745362 +0000 UTC m=+8004.349056513 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data") pod "rabbitmq-cell1-server-0" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41") : configmap "rabbitmq-cell1-config-data" not found Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.123411 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.127870 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="f4fd5c29-d308-41d0-9781-9b6d9625c19c" containerName="openstack-network-exporter" containerID="cri-o://2ffece60b271290211f6f3963d1642000676cfce31547f3f28dd8ecf96867815" gracePeriod=300 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.218020 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-1"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.218662 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-1" podUID="48418ecc-b768-4848-b663-1a84761f5b32" containerName="openstack-network-exporter" containerID="cri-o://6504065e281b8c5e6e76cf9517fba24d633b6c7805c447e42fbc49093a42beeb" gracePeriod=300 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.293209 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-2"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.294356 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-2" podUID="c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" containerName="openstack-network-exporter" containerID="cri-o://aa7d63dd9f5d69196ca03f337b7c8a99ee8f9a0db5fd272afae4861760ebba16" gracePeriod=300 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.311628 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" 
podUID="a276ba4e-bbab-4a83-8fd2-d77573782aa6" containerName="ovsdbserver-sb" containerID="cri-o://4df28a5a5a8e8e00719fc2794075f2ee6eccac836a858353622d063e1cd2beff" gracePeriod=300 Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.332715 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:51 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:51 crc kubenswrapper[5136]: Mar 20 09:02:51 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:51 crc kubenswrapper[5136]: Mar 20 09:02:51 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:51 crc kubenswrapper[5136]: Mar 20 09:02:51 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:51 crc kubenswrapper[5136]: Mar 20 09:02:51 crc kubenswrapper[5136]: if [ -n "" ]; then Mar 20 09:02:51 crc kubenswrapper[5136]: GRANT_DATABASE="" Mar 20 09:02:51 crc kubenswrapper[5136]: else Mar 20 09:02:51 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:51 crc kubenswrapper[5136]: fi Mar 20 09:02:51 crc kubenswrapper[5136]: Mar 20 09:02:51 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:51 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:51 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:51 crc kubenswrapper[5136]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:51 crc kubenswrapper[5136]: # support updates Mar 20 09:02:51 crc kubenswrapper[5136]: Mar 20 09:02:51 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.337615 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-dvqsp" podUID="7660b6b5-094d-4da5-9d34-fe85c863d887" Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.354996 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-dqx9d"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.399503 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-dqx9d"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.401181 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="f4fd5c29-d308-41d0-9781-9b6d9625c19c" containerName="ovsdbserver-nb" containerID="cri-o://6359501b6448986da36467b6a23a3fd5909f20740da745780317887a779e734a" gracePeriod=300 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.407054 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-2" podUID="5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" containerName="ovsdbserver-sb" containerID="cri-o://c38165dadbc17f4d62a65b6c603ad906516c2a8ffc342559c38f59e3a77ec1af" gracePeriod=300 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.436528 5136 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/nova-scheduler-0" secret="" err="secret \"nova-nova-dockercfg-nn865\" not found" Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.436671 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-55ffc4694-d4d2v"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.437088 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-55ffc4694-d4d2v" podUID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerName="horizon-log" containerID="cri-o://635dca94b151651e4c36b823760043aacfc192a7da55106aa1e62d819abeafaa" gracePeriod=30 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.437795 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-55ffc4694-d4d2v" podUID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerName="horizon" containerID="cri-o://5d097a32fad24f683773582349a5bd70dc46ca5a6f4845399b1376ac61baa60d" gracePeriod=30 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.453220 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dvqsp"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.538861 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7b494fbb57-cd7nw"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.539247 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7b494fbb57-cd7nw" podUID="305f3f22-2f38-44c5-8e63-1f028edce331" containerName="neutron-api" containerID="cri-o://293a5f06d0837fa0b5aa6b166b5c1bc91790dda04631163e44e07b267901142a" gracePeriod=30 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.539881 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7b494fbb57-cd7nw" podUID="305f3f22-2f38-44c5-8e63-1f028edce331" containerName="neutron-httpd" containerID="cri-o://b54ae1c896c24440630a7756d526255f3def96dbed5cb096fc4d77997e706367" gracePeriod=30 
Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.630885 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4df28a5a5a8e8e00719fc2794075f2ee6eccac836a858353622d063e1cd2beff is running failed: container process not found" containerID="4df28a5a5a8e8e00719fc2794075f2ee6eccac836a858353622d063e1cd2beff" cmd=["/usr/bin/pidof","ovsdb-server"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.636852 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65bbbb4567-25rj9"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.637075 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" podUID="f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" containerName="dnsmasq-dns" containerID="cri-o://dc5c877955a35dec3f0759a463cf453fa6e734d4ecff48bdc9d1ab0f3eab0299" gracePeriod=10 Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.638178 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4df28a5a5a8e8e00719fc2794075f2ee6eccac836a858353622d063e1cd2beff is running failed: container process not found" containerID="4df28a5a5a8e8e00719fc2794075f2ee6eccac836a858353622d063e1cd2beff" cmd=["/usr/bin/pidof","ovsdb-server"] Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.643712 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4df28a5a5a8e8e00719fc2794075f2ee6eccac836a858353622d063e1cd2beff is running failed: container process not found" containerID="4df28a5a5a8e8e00719fc2794075f2ee6eccac836a858353622d063e1cd2beff" cmd=["/usr/bin/pidof","ovsdb-server"] Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.643793 5136 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc 
= container is not created or running: checking if PID of 4df28a5a5a8e8e00719fc2794075f2ee6eccac836a858353622d063e1cd2beff is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-0" podUID="a276ba4e-bbab-4a83-8fd2-d77573782aa6" containerName="ovsdbserver-sb" Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.652441 5136 secret.go:188] Couldn't get secret openstack/nova-scheduler-config-data: secret "nova-scheduler-config-data" not found Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.652514 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-config-data podName:41ed7c59-18ee-44ec-8068-ccc9e82485a6 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:52.152490695 +0000 UTC m=+8004.411801936 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-config-data") pod "nova-scheduler-0" (UID: "41ed7c59-18ee-44ec-8068-ccc9e82485a6") : secret "nova-scheduler-config-data" not found Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.696105 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c38165dadbc17f4d62a65b6c603ad906516c2a8ffc342559c38f59e3a77ec1af is running failed: container process not found" containerID="c38165dadbc17f4d62a65b6c603ad906516c2a8ffc342559c38f59e3a77ec1af" cmd=["/usr/bin/pidof","ovsdb-server"] Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.703459 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c38165dadbc17f4d62a65b6c603ad906516c2a8ffc342559c38f59e3a77ec1af is running failed: container process not found" containerID="c38165dadbc17f4d62a65b6c603ad906516c2a8ffc342559c38f59e3a77ec1af" 
cmd=["/usr/bin/pidof","ovsdb-server"] Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.704746 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c38165dadbc17f4d62a65b6c603ad906516c2a8ffc342559c38f59e3a77ec1af is running failed: container process not found" containerID="c38165dadbc17f4d62a65b6c603ad906516c2a8ffc342559c38f59e3a77ec1af" cmd=["/usr/bin/pidof","ovsdb-server"] Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.704779 5136 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c38165dadbc17f4d62a65b6c603ad906516c2a8ffc342559c38f59e3a77ec1af is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-2" podUID="5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" containerName="ovsdbserver-sb" Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.711217 5136 generic.go:334] "Generic (PLEG): container finished" podID="22659681-bc2b-4056-81d6-96b046e45712" containerID="43d8180654ac711b0a6c655f92be552ad6bb0d4e4426596385b695958afa2b74" exitCode=2 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.711302 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"22659681-bc2b-4056-81d6-96b046e45712","Type":"ContainerDied","Data":"43d8180654ac711b0a6c655f92be552ad6bb0d4e4426596385b695958afa2b74"} Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.715894 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.719472 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-api" containerID="cri-o://7cef857842ff9d2b9ec6fba6fced2a4a47da1ca6826d17831d4379411662258d" gracePeriod=30 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 
09:02:51.719906 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-listener" containerID="cri-o://7e50b645c14d1546ec6ea5f4cc398a09d244fd42ce33f42ece0b4ffe6904f5e2" gracePeriod=30 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.719947 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-notifier" containerID="cri-o://dcd83e13ab91d1b3d212755c407d51e55f888a96ac55ba1a109bfc09166fa35d" gracePeriod=30 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.719977 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-evaluator" containerID="cri-o://2ead91f10403f4d804be86964d57e08ade2602b5155572d57113b707313fe0a4" gracePeriod=30 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.729431 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-lxzxf"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.744697 5136 generic.go:334] "Generic (PLEG): container finished" podID="48418ecc-b768-4848-b663-1a84761f5b32" containerID="6504065e281b8c5e6e76cf9517fba24d633b6c7805c447e42fbc49093a42beeb" exitCode=2 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.744757 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"48418ecc-b768-4848-b663-1a84761f5b32","Type":"ContainerDied","Data":"6504065e281b8c5e6e76cf9517fba24d633b6c7805c447e42fbc49093a42beeb"} Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.788796 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-wjckm"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.793949 5136 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_a276ba4e-bbab-4a83-8fd2-d77573782aa6/ovsdbserver-sb/0.log" Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.794005 5136 generic.go:334] "Generic (PLEG): container finished" podID="a276ba4e-bbab-4a83-8fd2-d77573782aa6" containerID="adb0722e1140982d66b6bcc4b53d108b1a1da62c36d81757cff1dc3b6b31b52c" exitCode=2 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.794027 5136 generic.go:334] "Generic (PLEG): container finished" podID="a276ba4e-bbab-4a83-8fd2-d77573782aa6" containerID="4df28a5a5a8e8e00719fc2794075f2ee6eccac836a858353622d063e1cd2beff" exitCode=143 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.794140 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a276ba4e-bbab-4a83-8fd2-d77573782aa6","Type":"ContainerDied","Data":"adb0722e1140982d66b6bcc4b53d108b1a1da62c36d81757cff1dc3b6b31b52c"} Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.794175 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a276ba4e-bbab-4a83-8fd2-d77573782aa6","Type":"ContainerDied","Data":"4df28a5a5a8e8e00719fc2794075f2ee6eccac836a858353622d063e1cd2beff"} Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.807154 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-lxzxf"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.812472 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dvqsp" event={"ID":"7660b6b5-094d-4da5-9d34-fe85c863d887","Type":"ContainerStarted","Data":"451e44515c0cda1b30913a6cdb1ebc4f0813478346503aa740945d429ab0443d"} Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.813168 5136 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/root-account-create-update-dvqsp" secret="" err="secret \"galera-openstack-cell1-dockercfg-mtswd\" not found" Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.833117 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-674ffbb556-dfk75"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.833349 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-674ffbb556-dfk75" podUID="7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" containerName="placement-log" containerID="cri-o://b1983019e3cc484fe8f15d4854d502ecd0a69d384bd0f1cd05cd048f9cc159a0" gracePeriod=30 Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.833413 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:51 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:51 crc kubenswrapper[5136]: Mar 20 09:02:51 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:51 crc kubenswrapper[5136]: Mar 20 09:02:51 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:51 crc kubenswrapper[5136]: Mar 20 09:02:51 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:51 crc kubenswrapper[5136]: Mar 20 09:02:51 crc kubenswrapper[5136]: if [ -n "" ]; then Mar 20 09:02:51 crc kubenswrapper[5136]: GRANT_DATABASE="" Mar 20 09:02:51 crc kubenswrapper[5136]: else Mar 20 09:02:51 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:51 crc kubenswrapper[5136]: fi Mar 20 09:02:51 crc kubenswrapper[5136]: Mar 20 09:02:51 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:51 crc kubenswrapper[5136]: # 1. 
MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:51 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:51 crc kubenswrapper[5136]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:51 crc kubenswrapper[5136]: # support updates Mar 20 09:02:51 crc kubenswrapper[5136]: Mar 20 09:02:51 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.833482 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-674ffbb556-dfk75" podUID="7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" containerName="placement-api" containerID="cri-o://9849b01109acdf6259a4119de8fce764d067a8b917a7d5b6965b6bd00e1aa60a" gracePeriod=30 Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.834879 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-dvqsp" podUID="7660b6b5-094d-4da5-9d34-fe85c863d887" Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.862670 5136 generic.go:334] "Generic (PLEG): container finished" podID="7e0c945f-6773-4bf8-872d-7eb5110de79f" containerID="ff7aad450b5bf148e0d8e2a6a1a41eb2960ad7d591108755ada3cf41b5ab3619" exitCode=2 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.862733 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"7e0c945f-6773-4bf8-872d-7eb5110de79f","Type":"ContainerDied","Data":"ff7aad450b5bf148e0d8e2a6a1a41eb2960ad7d591108755ada3cf41b5ab3619"} Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.869025 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f4fd5c29-d308-41d0-9781-9b6d9625c19c/ovsdbserver-nb/0.log" Mar 20 09:02:51 crc 
kubenswrapper[5136]: I0320 09:02:51.869113 5136 generic.go:334] "Generic (PLEG): container finished" podID="f4fd5c29-d308-41d0-9781-9b6d9625c19c" containerID="2ffece60b271290211f6f3963d1642000676cfce31547f3f28dd8ecf96867815" exitCode=2 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.869240 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f4fd5c29-d308-41d0-9781-9b6d9625c19c","Type":"ContainerDied","Data":"2ffece60b271290211f6f3963d1642000676cfce31547f3f28dd8ecf96867815"} Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.878144 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9/ovsdbserver-sb/0.log" Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.878183 5136 generic.go:334] "Generic (PLEG): container finished" podID="5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" containerID="8fc531019af740166b849284cf77209f3d16d2b70c219b2f80048bdb08d14be3" exitCode=2 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.878198 5136 generic.go:334] "Generic (PLEG): container finished" podID="5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" containerID="c38165dadbc17f4d62a65b6c603ad906516c2a8ffc342559c38f59e3a77ec1af" exitCode=143 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.878252 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9","Type":"ContainerDied","Data":"8fc531019af740166b849284cf77209f3d16d2b70c219b2f80048bdb08d14be3"} Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.878276 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9","Type":"ContainerDied","Data":"c38165dadbc17f4d62a65b6c603ad906516c2a8ffc342559c38f59e3a77ec1af"} Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.878836 5136 configmap.go:193] Couldn't get configMap 
openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.878895 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts podName:7660b6b5-094d-4da5-9d34-fe85c863d887 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:52.378877383 +0000 UTC m=+8004.638188524 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts") pod "root-account-create-update-dvqsp" (UID: "7660b6b5-094d-4da5-9d34-fe85c863d887") : configmap "openstack-cell1-scripts" not found Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.885957 5136 generic.go:334] "Generic (PLEG): container finished" podID="c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" containerID="aa7d63dd9f5d69196ca03f337b7c8a99ee8f9a0db5fd272afae4861760ebba16" exitCode=2 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.886302 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4","Type":"ContainerDied","Data":"aa7d63dd9f5d69196ca03f337b7c8a99ee8f9a0db5fd272afae4861760ebba16"} Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.918646 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-wjckm"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.973179 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-24c6-account-create-update-48knr"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.028745 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.029063 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" 
podUID="7be786a7-1dee-4cfb-bada-4883a9326c71" containerName="cinder-scheduler" containerID="cri-o://49d2fddd75a8e57b3d4192156f68050a117173a4bf5a979458cf0d3988bdad77" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.029396 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="7be786a7-1dee-4cfb-bada-4883a9326c71" containerName="probe" containerID="cri-o://b04544c3c8633753614a0867721f5e1c8cbd9623a90831589d5ac39548c77e5a" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.100460 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.100664 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="25254bce-daf4-4521-ae48-e6c53e458cb4" containerName="glance-log" containerID="cri-o://9e5b58fa90ab6a9a965276a68d4ee135aa252e61fbc159c5d0aa6f6134637333" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.100883 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="25254bce-daf4-4521-ae48-e6c53e458cb4" containerName="glance-httpd" containerID="cri-o://7ecfa88277a19c2fc4a9782c7beb9c21c6c1a5a38d56723b3e67fc0044f8bbb4" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.133877 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.134147 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6345b1ce-d7d2-420d-8631-e42fd662d790" containerName="glance-log" containerID="cri-o://62b81b8fc1d95273635ec6d0f69c524950ac024e8fd9b6ac1d7381fe6f428b6f" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.134563 
5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6345b1ce-d7d2-420d-8631-e42fd662d790" containerName="glance-httpd" containerID="cri-o://8e6a89b054dab23d4263c5fb97b6aba8bc51276e7bd2c8d9be34c61f68879a63" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.152311 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.152553 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="9fe5d992-c030-4957-8388-763c8fa32d22" containerName="cinder-api-log" containerID="cri-o://99aa025dc61faebaa87d0e9d2a4856c44ddf012f862d7c369fe941dabbd9836f" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.153760 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:52 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: if [ -n "glance" ]; then Mar 20 09:02:52 crc kubenswrapper[5136]: GRANT_DATABASE="glance" Mar 20 09:02:52 crc kubenswrapper[5136]: else Mar 20 09:02:52 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:52 crc kubenswrapper[5136]: fi Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: # 
going for maximum compatibility here: Mar 20 09:02:52 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:52 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:52 crc kubenswrapper[5136]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:52 crc kubenswrapper[5136]: # support updates Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.156106 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="9fe5d992-c030-4957-8388-763c8fa32d22" containerName="cinder-api" containerID="cri-o://23592f2e3f685cf11f8e09b90281731a317e3331b51d973536b5b6cf9ce01a69" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.166512 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-6249-account-create-update-mtgp6" podUID="bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1" Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.169413 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:52 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc 
kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: if [ -n "neutron" ]; then Mar 20 09:02:52 crc kubenswrapper[5136]: GRANT_DATABASE="neutron" Mar 20 09:02:52 crc kubenswrapper[5136]: else Mar 20 09:02:52 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:52 crc kubenswrapper[5136]: fi Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:52 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:52 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:52 crc kubenswrapper[5136]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:52 crc kubenswrapper[5136]: # support updates Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.173027 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:52 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: if [ -n "cinder" ]; then Mar 20 09:02:52 crc kubenswrapper[5136]: GRANT_DATABASE="cinder" Mar 20 09:02:52 crc 
kubenswrapper[5136]: else Mar 20 09:02:52 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:52 crc kubenswrapper[5136]: fi Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:52 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:52 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:52 crc kubenswrapper[5136]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:52 crc kubenswrapper[5136]: # support updates Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.173098 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-24c6-account-create-update-48knr" podUID="fe703c94-1aec-47a6-81a7-8510ed330866" Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.174114 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-b8c9-account-create-update-hh6pb" podUID="536a487a-ae23-4eed-9bc8-221a9b85bed4" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.197589 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.197885 5136 secret.go:188] Couldn't get secret openstack/nova-scheduler-config-data: secret "nova-scheduler-config-data" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.197955 5136 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-config-data podName:41ed7c59-18ee-44ec-8068-ccc9e82485a6 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:53.197941302 +0000 UTC m=+8005.457252453 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-config-data") pod "nova-scheduler-0" (UID: "41ed7c59-18ee-44ec-8068-ccc9e82485a6") : secret "nova-scheduler-config-data" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.197996 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.198015 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data podName:e2c9ab46-3143-4472-a606-cd75def78f41 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:54.198009764 +0000 UTC m=+8006.457320915 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data") pod "rabbitmq-cell1-server-0" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41") : configmap "rabbitmq-cell1-config-data" not found Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.206182 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-6249-account-create-update-mtgp6"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.230583 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.256181 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:52 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: if [ -n "nova_cell0" ]; then Mar 20 09:02:52 crc kubenswrapper[5136]: GRANT_DATABASE="nova_cell0" Mar 20 09:02:52 crc kubenswrapper[5136]: else Mar 20 09:02:52 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:52 crc kubenswrapper[5136]: fi Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:52 crc kubenswrapper[5136]: # 1. 
MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:52 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:52 crc kubenswrapper[5136]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:52 crc kubenswrapper[5136]: # support updates Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.261549 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-adbe-account-create-update-b276s" podUID="c532fd14-6718-4c7d-9e38-c68bf7b2da6b" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.297793 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b8c9-account-create-update-hh6pb"] Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.301147 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.301421 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data podName:804d1bff-7c63-45a1-bf1a-68f3eedb6ac7 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:52.801408065 +0000 UTC m=+8005.060719216 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data") pod "rabbitmq-server-0" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7") : configmap "rabbitmq-config-data" not found Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.313448 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-e664-account-create-update-fb9gm"] Mar 20 09:02:52 crc kubenswrapper[5136]: W0320 09:02:52.322159 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod254505fd_2596_4c4a_bf0a_2565e8b3ae5c.slice/crio-11ff32a7e8d869cd5ce4e704be044bbc65e503615b99ae96e1f014e5fa2103a3 WatchSource:0}: Error finding container 11ff32a7e8d869cd5ce4e704be044bbc65e503615b99ae96e1f014e5fa2103a3: Status 404 returned error can't find the container with id 11ff32a7e8d869cd5ce4e704be044bbc65e503615b99ae96e1f014e5fa2103a3 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.325125 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.325369 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="41ed7c59-18ee-44ec-8068-ccc9e82485a6" containerName="nova-scheduler-scheduler" containerID="cri-o://fd2177fb853527e451d4514e1f42c3c8bc13b2a57df6321d454b27c1bc0cfe79" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.329125 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-1" podUID="7e0c945f-6773-4bf8-872d-7eb5110de79f" containerName="ovsdbserver-sb" containerID="cri-o://49b205017f1afa7852c73b13644c48ef8382cbecd7e8c8de906f466d5717a06f" gracePeriod=299 Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.341643 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:52 crc 
kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: if [ -n "nova_api" ]; then Mar 20 09:02:52 crc kubenswrapper[5136]: GRANT_DATABASE="nova_api" Mar 20 09:02:52 crc kubenswrapper[5136]: else Mar 20 09:02:52 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:52 crc kubenswrapper[5136]: fi Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:52 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:52 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:52 crc kubenswrapper[5136]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:52 crc kubenswrapper[5136]: # support updates Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.342679 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.342926 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fba581c3-e77a-4db7-ac50-bdb17291b2c7" containerName="nova-metadata-log" containerID="cri-o://cd5663a9b617be114b64e32a8582baab8d6015f76d7bc3afb172624a4c98b3c7" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.343339 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fba581c3-e77a-4db7-ac50-bdb17291b2c7" containerName="nova-metadata-metadata" containerID="cri-o://1e2a347a5b7fd1a421ed6a7c665114567c818ec00d533ffab87b5e587c7ecf89" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.343418 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-c7dc-account-create-update-bslnf" podUID="254505fd-2596-4c4a-bf0a-2565e8b3ae5c" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.363375 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.363578 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f402e588-3dec-48be-8b5b-5aeaa571b372" containerName="nova-api-log" containerID="cri-o://69c8be45f764ed420f7bbef558c7c52b3207d932e0c8d1c5e50585f4ba78387d" gracePeriod=30 Mar 20 09:02:52 crc 
kubenswrapper[5136]: I0320 09:02:52.364001 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f402e588-3dec-48be-8b5b-5aeaa571b372" containerName="nova-api-api" containerID="cri-o://bd88353eb3bfead6453753b043892b43c76148c22dbdd8749c35d5213cf8d63b" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.384863 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-adbe-account-create-update-b276s"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.395993 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-1" podUID="48418ecc-b768-4848-b663-1a84761f5b32" containerName="ovsdbserver-nb" containerID="cri-o://50054ee6901ece61fe4a75813bd8a9abcbe38aad68d2fc8adceaeed3cbddce45" gracePeriod=299 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.396125 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c7dc-account-create-update-bslnf"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.397515 5136 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/prometheus-metric-storage-0" secret="" err="secret \"metric-storage-prometheus-dockercfg-jt99d\" not found" Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.420068 5136 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.420126 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts podName:7660b6b5-094d-4da5-9d34-fe85c863d887 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:53.420112691 +0000 UTC m=+8005.679423842 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts") pod "root-account-create-update-dvqsp" (UID: "7660b6b5-094d-4da5-9d34-fe85c863d887") : configmap "openstack-cell1-scripts" not found Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.443604 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1513f332-b5c6-40ca-9c3a-4ef7b1f78672" path="/var/lib/kubelet/pods/1513f332-b5c6-40ca-9c3a-4ef7b1f78672/volumes" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.444350 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9b2a0f1-96d1-4edc-a219-60194a2bf4b9" path="/var/lib/kubelet/pods/a9b2a0f1-96d1-4edc-a219-60194a2bf4b9/volumes" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.444870 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db04162b-4913-4acc-b387-d7324202a05b" path="/var/lib/kubelet/pods/db04162b-4913-4acc-b387-d7324202a05b/volumes" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.446502 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9eddce1-1338-489a-b0e9-f008c33fea0f" path="/var/lib/kubelet/pods/f9eddce1-1338-489a-b0e9-f008c33fea0f/volumes" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.462467 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-35ea-account-create-update-7d4sf"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.480027 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-w7sqw"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.489448 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-w7sqw"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.502185 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.508756 5136 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="11508a60-8214-4811-898f-9542eee208d5" containerName="nova-cell0-conductor-conductor" containerID="cri-o://2a26b1b064e0a10c158983613bb278d761e4a07e8187dca49260154436ad446c" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.518279 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-dvqsp"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.520977 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="d4bc380a-4852-40d3-b03d-67f762c778d3" containerName="galera" containerID="cri-o://a2cd799ad38f20f3a20df188a90ca9d10f639dafb3f002a582a1fe8b8331c153" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.523069 5136 secret.go:188] Couldn't get secret openstack/prometheus-metric-storage-thanos-prometheus-http-client-file: secret "prometheus-metric-storage-thanos-prometheus-http-client-file" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.523181 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-thanos-prometheus-http-client-file podName:c5271b0d-ac1b-480c-b4b8-3b634246ae62 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:53.023165101 +0000 UTC m=+8005.282476252 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-thanos-prometheus-http-client-file") pod "prometheus-metric-storage-0" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62") : secret "prometheus-metric-storage-thanos-prometheus-http-client-file" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.526131 5136 secret.go:188] Couldn't get secret openstack/prometheus-metric-storage-web-config: secret "prometheus-metric-storage-web-config" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.526208 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config podName:c5271b0d-ac1b-480c-b4b8-3b634246ae62 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:53.026193195 +0000 UTC m=+8005.285504346 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config") pod "prometheus-metric-storage-0" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62") : secret "prometheus-metric-storage-web-config" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.526758 5136 projected.go:263] Couldn't get secret openstack/prometheus-metric-storage-tls-assets-0: secret "prometheus-metric-storage-tls-assets-0" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.526771 5136 projected.go:194] Error preparing data for projected volume tls-assets for pod openstack/prometheus-metric-storage-0: secret "prometheus-metric-storage-tls-assets-0" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.526793 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-tls-assets podName:c5271b0d-ac1b-480c-b4b8-3b634246ae62 nodeName:}" failed. 
No retries permitted until 2026-03-20 09:02:53.026784653 +0000 UTC m=+8005.286095794 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-tls-assets") pod "prometheus-metric-storage-0" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62") : secret "prometheus-metric-storage-tls-assets-0" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.528250 5136 secret.go:188] Couldn't get secret openstack/prometheus-metric-storage: secret "prometheus-metric-storage" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.528338 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config podName:c5271b0d-ac1b-480c-b4b8-3b634246ae62 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:53.02832125 +0000 UTC m=+8005.287632401 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config") pod "prometheus-metric-storage-0" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62") : secret "prometheus-metric-storage" not found Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.531774 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.532429 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="ea7881c5-b719-41b0-8046-249f7fdb6f61" containerName="nova-cell1-conductor-conductor" containerID="cri-o://5df9d903ae57ec8baad2fe6c51be0e13f0c8a558bfc5471ea6ef07feb8e164f7" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.574982 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.591620 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:52 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: if [ -n "nova_cell1" ]; then Mar 20 09:02:52 crc kubenswrapper[5136]: GRANT_DATABASE="nova_cell1" Mar 20 09:02:52 crc kubenswrapper[5136]: else Mar 20 09:02:52 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:52 crc kubenswrapper[5136]: fi Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:52 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:52 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:52 crc kubenswrapper[5136]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:52 crc kubenswrapper[5136]: # support updates Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.593059 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell1-db-secret\\\" not found\"" pod="openstack/nova-cell1-e664-account-create-update-fb9gm" podUID="6d5ec1f6-0809-4582-902e-00638e6e4580" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.715915 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.716446 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="65b4b8da-0eda-4a77-aeed-0a6f9350a942" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://bf5e034bd14558262e03fbe126fcd810c8c7101bbdef0bc2012ee0b90cfbc961" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.758421 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.759442 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4j4n\" (UniqueName: \"kubernetes.io/projected/9cefd58c-a889-4893-aa87-b106eae1c7ad-kube-api-access-w4j4n\") pod \"9cefd58c-a889-4893-aa87-b106eae1c7ad\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.759623 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cefd58c-a889-4893-aa87-b106eae1c7ad-combined-ca-bundle\") pod 
\"9cefd58c-a889-4893-aa87-b106eae1c7ad\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.759728 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9cefd58c-a889-4893-aa87-b106eae1c7ad-openstack-config\") pod \"9cefd58c-a889-4893-aa87-b106eae1c7ad\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.759789 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9cefd58c-a889-4893-aa87-b106eae1c7ad-openstack-config-secret\") pod \"9cefd58c-a889-4893-aa87-b106eae1c7ad\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.760309 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/alertmanager-metric-storage-0" podUID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerName="alertmanager" containerID="cri-o://20f8a9a945087915c09ba9f6c5bb3fad1e06a23db6077bf675c7ee359a2b9ea4" gracePeriod=120 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.765138 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/alertmanager-metric-storage-0" podUID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerName="config-reloader" containerID="cri-o://c48b0b35287dd607ba880a1efbf0d79170312ce43d17777e16e04be5b17bbe8a" gracePeriod=120 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.773927 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.777401 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.790294 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a276ba4e-bbab-4a83-8fd2-d77573782aa6/ovsdbserver-sb/0.log" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.790366 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.791350 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-24c6-account-create-update-48knr"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.805620 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-6249-account-create-update-mtgp6"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.806843 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cefd58c-a889-4893-aa87-b106eae1c7ad-kube-api-access-w4j4n" (OuterVolumeSpecName: "kube-api-access-w4j4n") pod "9cefd58c-a889-4893-aa87-b106eae1c7ad" (UID: "9cefd58c-a889-4893-aa87-b106eae1c7ad"). InnerVolumeSpecName "kube-api-access-w4j4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.857065 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cefd58c-a889-4893-aa87-b106eae1c7ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9cefd58c-a889-4893-aa87-b106eae1c7ad" (UID: "9cefd58c-a889-4893-aa87-b106eae1c7ad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.858182 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cefd58c-a889-4893-aa87-b106eae1c7ad-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "9cefd58c-a889-4893-aa87-b106eae1c7ad" (UID: "9cefd58c-a889-4893-aa87-b106eae1c7ad"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.869060 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a276ba4e-bbab-4a83-8fd2-d77573782aa6-scripts\") pod \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.869098 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-ovsdbserver-nb\") pod \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.869179 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-ovsdbserver-sb-tls-certs\") pod \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.869259 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcrdf\" (UniqueName: \"kubernetes.io/projected/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-kube-api-access-kcrdf\") pod \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.869293 
5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-dns-svc\") pod \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.869337 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a276ba4e-bbab-4a83-8fd2-d77573782aa6-config\") pod \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.869370 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-config\") pod \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.869386 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-metrics-certs-tls-certs\") pod \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.869427 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c7dl\" (UniqueName: \"kubernetes.io/projected/a276ba4e-bbab-4a83-8fd2-d77573782aa6-kube-api-access-9c7dl\") pod \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.869451 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-combined-ca-bundle\") pod \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\" 
(UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.869468 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-ovsdbserver-sb\") pod \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.869492 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a276ba4e-bbab-4a83-8fd2-d77573782aa6-ovsdb-rundir\") pod \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.870421 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980\") pod \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.871339 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a276ba4e-bbab-4a83-8fd2-d77573782aa6-config" (OuterVolumeSpecName: "config") pod "a276ba4e-bbab-4a83-8fd2-d77573782aa6" (UID: "a276ba4e-bbab-4a83-8fd2-d77573782aa6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.877310 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cefd58c-a889-4893-aa87-b106eae1c7ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.877337 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a276ba4e-bbab-4a83-8fd2-d77573782aa6-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.877347 5136 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9cefd58c-a889-4893-aa87-b106eae1c7ad-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.877358 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4j4n\" (UniqueName: \"kubernetes.io/projected/9cefd58c-a889-4893-aa87-b106eae1c7ad-kube-api-access-w4j4n\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.877419 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.877466 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data podName:804d1bff-7c63-45a1-bf1a-68f3eedb6ac7 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:53.877451809 +0000 UTC m=+8006.136762950 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data") pod "rabbitmq-server-0" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7") : configmap "rabbitmq-config-data" not found Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.878461 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a276ba4e-bbab-4a83-8fd2-d77573782aa6-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "a276ba4e-bbab-4a83-8fd2-d77573782aa6" (UID: "a276ba4e-bbab-4a83-8fd2-d77573782aa6"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.878688 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a276ba4e-bbab-4a83-8fd2-d77573782aa6-scripts" (OuterVolumeSpecName: "scripts") pod "a276ba4e-bbab-4a83-8fd2-d77573782aa6" (UID: "a276ba4e-bbab-4a83-8fd2-d77573782aa6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.878752 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.882441 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b8c9-account-create-update-hh6pb"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.906206 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-kube-api-access-kcrdf" (OuterVolumeSpecName: "kube-api-access-kcrdf") pod "f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" (UID: "f93080a1-9819-48ad-a84d-ddc2d6ffe5e6"). InnerVolumeSpecName "kube-api-access-kcrdf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.906354 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-adbe-account-create-update-b276s"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.933058 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a276ba4e-bbab-4a83-8fd2-d77573782aa6-kube-api-access-9c7dl" (OuterVolumeSpecName: "kube-api-access-9c7dl") pod "a276ba4e-bbab-4a83-8fd2-d77573782aa6" (UID: "a276ba4e-bbab-4a83-8fd2-d77573782aa6"). InnerVolumeSpecName "kube-api-access-9c7dl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.948237 5136 generic.go:334] "Generic (PLEG): container finished" podID="305f3f22-2f38-44c5-8e63-1f028edce331" containerID="b54ae1c896c24440630a7756d526255f3def96dbed5cb096fc4d77997e706367" exitCode=0 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.948309 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b494fbb57-cd7nw" event={"ID":"305f3f22-2f38-44c5-8e63-1f028edce331","Type":"ContainerDied","Data":"b54ae1c896c24440630a7756d526255f3def96dbed5cb096fc4d77997e706367"} Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.951746 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "a276ba4e-bbab-4a83-8fd2-d77573782aa6" (UID: "a276ba4e-bbab-4a83-8fd2-d77573782aa6"). InnerVolumeSpecName "pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.979739 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c7dc-account-create-update-bslnf" event={"ID":"254505fd-2596-4c4a-bf0a-2565e8b3ae5c","Type":"ContainerStarted","Data":"11ff32a7e8d869cd5ce4e704be044bbc65e503615b99ae96e1f014e5fa2103a3"} Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.981858 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9c7dl\" (UniqueName: \"kubernetes.io/projected/a276ba4e-bbab-4a83-8fd2-d77573782aa6-kube-api-access-9c7dl\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.981881 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a276ba4e-bbab-4a83-8fd2-d77573782aa6-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.981912 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980\") on node \"crc\" " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.981925 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a276ba4e-bbab-4a83-8fd2-d77573782aa6-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.981938 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcrdf\" (UniqueName: \"kubernetes.io/projected/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-kube-api-access-kcrdf\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.000335 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c7dc-account-create-update-bslnf"] Mar 20 09:02:53 crc kubenswrapper[5136]: 
E0320 09:02:53.000948 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:53 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: if [ -n "nova_api" ]; then Mar 20 09:02:53 crc kubenswrapper[5136]: GRANT_DATABASE="nova_api" Mar 20 09:02:53 crc kubenswrapper[5136]: else Mar 20 09:02:53 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:53 crc kubenswrapper[5136]: fi Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:53 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:53 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:53 crc kubenswrapper[5136]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:53 crc kubenswrapper[5136]: # support updates Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.002010 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-c7dc-account-create-update-bslnf" podUID="254505fd-2596-4c4a-bf0a-2565e8b3ae5c" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.007016 5136 generic.go:334] "Generic (PLEG): container finished" podID="f402e588-3dec-48be-8b5b-5aeaa571b372" containerID="69c8be45f764ed420f7bbef558c7c52b3207d932e0c8d1c5e50585f4ba78387d" exitCode=143 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.007079 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f402e588-3dec-48be-8b5b-5aeaa571b372","Type":"ContainerDied","Data":"69c8be45f764ed420f7bbef558c7c52b3207d932e0c8d1c5e50585f4ba78387d"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.008805 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-24c6-account-create-update-48knr" event={"ID":"fe703c94-1aec-47a6-81a7-8510ed330866","Type":"ContainerStarted","Data":"a8837f3aa103623ab06077854fdb2ccb4185d7609e123f11e58958b51d99dfcb"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.015833 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-e664-account-create-update-fb9gm"] Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.024057 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="e2c9ab46-3143-4472-a606-cd75def78f41" containerName="rabbitmq" 
containerID="cri-o://a198c7f8e8377652b482647dfc5c30af750d97e67707d6fc3e807132b202cf80" gracePeriod=604800 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.026519 5136 generic.go:334] "Generic (PLEG): container finished" podID="7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" containerID="b1983019e3cc484fe8f15d4854d502ecd0a69d384bd0f1cd05cd048f9cc159a0" exitCode=143 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.026576 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-674ffbb556-dfk75" event={"ID":"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb","Type":"ContainerDied","Data":"b1983019e3cc484fe8f15d4854d502ecd0a69d384bd0f1cd05cd048f9cc159a0"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.032305 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cefd58c-a889-4893-aa87-b106eae1c7ad-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "9cefd58c-a889-4893-aa87-b106eae1c7ad" (UID: "9cefd58c-a889-4893-aa87-b106eae1c7ad"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.036222 5136 generic.go:334] "Generic (PLEG): container finished" podID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerID="20f8a9a945087915c09ba9f6c5bb3fad1e06a23db6077bf675c7ee359a2b9ea4" exitCode=0 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.036312 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"53db9385-e63d-49a6-8dab-854c4bcd01f1","Type":"ContainerDied","Data":"20f8a9a945087915c09ba9f6c5bb3fad1e06a23db6077bf675c7ee359a2b9ea4"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.042676 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6249-account-create-update-mtgp6" event={"ID":"bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1","Type":"ContainerStarted","Data":"b9a5100ed7164058172ea0371e785fef7e937e4b5c850e24c27f2580525e965a"} Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.044628 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:53 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: if [ -n "glance" ]; then Mar 20 09:02:53 crc kubenswrapper[5136]: GRANT_DATABASE="glance" Mar 20 09:02:53 crc kubenswrapper[5136]: else Mar 20 09:02:53 crc 
kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:53 crc kubenswrapper[5136]: fi Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:53 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:53 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:53 crc kubenswrapper[5136]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:53 crc kubenswrapper[5136]: # support updates Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.045864 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-6249-account-create-update-mtgp6" podUID="bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.048376 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.050543 5136 generic.go:334] "Generic (PLEG): container finished" podID="6345b1ce-d7d2-420d-8631-e42fd662d790" containerID="62b81b8fc1d95273635ec6d0f69c524950ac024e8fd9b6ac1d7381fe6f428b6f" exitCode=143 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.050669 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6345b1ce-d7d2-420d-8631-e42fd662d790","Type":"ContainerDied","Data":"62b81b8fc1d95273635ec6d0f69c524950ac024e8fd9b6ac1d7381fe6f428b6f"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.083986 5136 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-1_48418ecc-b768-4848-b663-1a84761f5b32/ovsdbserver-nb/0.log" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.084036 5136 generic.go:334] "Generic (PLEG): container finished" podID="48418ecc-b768-4848-b663-1a84761f5b32" containerID="50054ee6901ece61fe4a75813bd8a9abcbe38aad68d2fc8adceaeed3cbddce45" exitCode=143 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.084604 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"48418ecc-b768-4848-b663-1a84761f5b32","Type":"ContainerDied","Data":"50054ee6901ece61fe4a75813bd8a9abcbe38aad68d2fc8adceaeed3cbddce45"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.087216 5136 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9cefd58c-a889-4893-aa87-b106eae1c7ad-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.087279 5136 secret.go:188] Couldn't get secret openstack/prometheus-metric-storage: secret "prometheus-metric-storage" not found Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.087328 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config podName:c5271b0d-ac1b-480c-b4b8-3b634246ae62 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:54.087313527 +0000 UTC m=+8006.346624678 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config") pod "prometheus-metric-storage-0" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62") : secret "prometheus-metric-storage" not found Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.087351 5136 secret.go:188] Couldn't get secret openstack/prometheus-metric-storage-web-config: secret "prometheus-metric-storage-web-config" not found Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.087411 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config podName:c5271b0d-ac1b-480c-b4b8-3b634246ae62 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:54.087392049 +0000 UTC m=+8006.346703240 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config") pod "prometheus-metric-storage-0" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62") : secret "prometheus-metric-storage-web-config" not found Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.087545 5136 projected.go:263] Couldn't get secret openstack/prometheus-metric-storage-tls-assets-0: secret "prometheus-metric-storage-tls-assets-0" not found Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.087566 5136 projected.go:194] Error preparing data for projected volume tls-assets for pod openstack/prometheus-metric-storage-0: secret "prometheus-metric-storage-tls-assets-0" not found Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.087613 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-tls-assets podName:c5271b0d-ac1b-480c-b4b8-3b634246ae62 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:54.087597065 +0000 UTC m=+8006.346908216 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-tls-assets") pod "prometheus-metric-storage-0" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62") : secret "prometheus-metric-storage-tls-assets-0" not found Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.087629 5136 secret.go:188] Couldn't get secret openstack/prometheus-metric-storage-thanos-prometheus-http-client-file: secret "prometheus-metric-storage-thanos-prometheus-http-client-file" not found Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.087658 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-thanos-prometheus-http-client-file podName:c5271b0d-ac1b-480c-b4b8-3b634246ae62 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:54.087650137 +0000 UTC m=+8006.346961278 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-thanos-prometheus-http-client-file") pod "prometheus-metric-storage-0" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62") : secret "prometheus-metric-storage-thanos-prometheus-http-client-file" not found Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.102038 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a276ba4e-bbab-4a83-8fd2-d77573782aa6" (UID: "a276ba4e-bbab-4a83-8fd2-d77573782aa6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.103678 5136 generic.go:334] "Generic (PLEG): container finished" podID="9cefd58c-a889-4893-aa87-b106eae1c7ad" containerID="80cd4447da5c15081099a2322e91b181cd5f6d15be6bc9e16f91a567a1b6f423" exitCode=137 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.104132 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.107241 5136 scope.go:117] "RemoveContainer" containerID="80cd4447da5c15081099a2322e91b181cd5f6d15be6bc9e16f91a567a1b6f423" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.118147 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-config" (OuterVolumeSpecName: "config") pod "f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" (UID: "f93080a1-9819-48ad-a84d-ddc2d6ffe5e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.123424 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_7e0c945f-6773-4bf8-872d-7eb5110de79f/ovsdbserver-sb/0.log" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.123483 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.124734 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f4fd5c29-d308-41d0-9781-9b6d9625c19c/ovsdbserver-nb/0.log" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.124770 5136 generic.go:334] "Generic (PLEG): container finished" podID="f4fd5c29-d308-41d0-9781-9b6d9625c19c" containerID="6359501b6448986da36467b6a23a3fd5909f20740da745780317887a779e734a" exitCode=143 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.124828 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f4fd5c29-d308-41d0-9781-9b6d9625c19c","Type":"ContainerDied","Data":"6359501b6448986da36467b6a23a3fd5909f20740da745780317887a779e734a"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.137506 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b8c9-account-create-update-hh6pb" event={"ID":"536a487a-ae23-4eed-9bc8-221a9b85bed4","Type":"ContainerStarted","Data":"c5b0b0c1f851b7ef94e6535ec54695cb38543bd96eb082f0402a43b8aed2912c"} Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.139084 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:53 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 
20 09:02:53 crc kubenswrapper[5136]: if [ -n "cinder" ]; then Mar 20 09:02:53 crc kubenswrapper[5136]: GRANT_DATABASE="cinder" Mar 20 09:02:53 crc kubenswrapper[5136]: else Mar 20 09:02:53 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:53 crc kubenswrapper[5136]: fi Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:53 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:53 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:53 crc kubenswrapper[5136]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:53 crc kubenswrapper[5136]: # support updates Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.139565 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" (UID: "f93080a1-9819-48ad-a84d-ddc2d6ffe5e6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.139775 5136 generic.go:334] "Generic (PLEG): container finished" podID="fba581c3-e77a-4db7-ac50-bdb17291b2c7" containerID="cd5663a9b617be114b64e32a8582baab8d6015f76d7bc3afb172624a4c98b3c7" exitCode=143 Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.141988 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-b8c9-account-create-update-hh6pb" podUID="536a487a-ae23-4eed-9bc8-221a9b85bed4" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.143152 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fba581c3-e77a-4db7-ac50-bdb17291b2c7","Type":"ContainerDied","Data":"cd5663a9b617be114b64e32a8582baab8d6015f76d7bc3afb172624a4c98b3c7"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.161709 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" (UID: "f93080a1-9819-48ad-a84d-ddc2d6ffe5e6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.168290 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" (UID: "f93080a1-9819-48ad-a84d-ddc2d6ffe5e6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.173017 5136 generic.go:334] "Generic (PLEG): container finished" podID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerID="2ead91f10403f4d804be86964d57e08ade2602b5155572d57113b707313fe0a4" exitCode=0 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.173131 5136 generic.go:334] "Generic (PLEG): container finished" podID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerID="7cef857842ff9d2b9ec6fba6fced2a4a47da1ca6826d17831d4379411662258d" exitCode=0 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.173226 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"040731fb-85ee-40ac-9ea2-3627a5f48766","Type":"ContainerDied","Data":"2ead91f10403f4d804be86964d57e08ade2602b5155572d57113b707313fe0a4"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.173322 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"040731fb-85ee-40ac-9ea2-3627a5f48766","Type":"ContainerDied","Data":"7cef857842ff9d2b9ec6fba6fced2a4a47da1ca6826d17831d4379411662258d"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.183562 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.183711 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980") on node "crc" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.188588 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7e0c945f-6773-4bf8-872d-7eb5110de79f-ovsdb-rundir\") pod \"7e0c945f-6773-4bf8-872d-7eb5110de79f\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.190472 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e0c945f-6773-4bf8-872d-7eb5110de79f-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "7e0c945f-6773-4bf8-872d-7eb5110de79f" (UID: "7e0c945f-6773-4bf8-872d-7eb5110de79f"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.190966 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80\") pod \"7e0c945f-6773-4bf8-872d-7eb5110de79f\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.191156 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7e0c945f-6773-4bf8-872d-7eb5110de79f-scripts\") pod \"7e0c945f-6773-4bf8-872d-7eb5110de79f\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.191260 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0c945f-6773-4bf8-872d-7eb5110de79f-config\") pod \"7e0c945f-6773-4bf8-872d-7eb5110de79f\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.191370 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-metrics-certs-tls-certs\") pod \"7e0c945f-6773-4bf8-872d-7eb5110de79f\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.191461 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xctk\" (UniqueName: \"kubernetes.io/projected/7e0c945f-6773-4bf8-872d-7eb5110de79f-kube-api-access-2xctk\") pod \"7e0c945f-6773-4bf8-872d-7eb5110de79f\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.191545 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-combined-ca-bundle\") pod \"7e0c945f-6773-4bf8-872d-7eb5110de79f\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.191669 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-ovsdbserver-sb-tls-certs\") pod \"7e0c945f-6773-4bf8-872d-7eb5110de79f\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.193964 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.194501 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.194590 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.194699 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.194768 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.195317 5136 reconciler_common.go:293] 
"Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7e0c945f-6773-4bf8-872d-7eb5110de79f-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.195390 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.197965 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e0c945f-6773-4bf8-872d-7eb5110de79f-config" (OuterVolumeSpecName: "config") pod "7e0c945f-6773-4bf8-872d-7eb5110de79f" (UID: "7e0c945f-6773-4bf8-872d-7eb5110de79f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.203970 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e0c945f-6773-4bf8-872d-7eb5110de79f-scripts" (OuterVolumeSpecName: "scripts") pod "7e0c945f-6773-4bf8-872d-7eb5110de79f" (UID: "7e0c945f-6773-4bf8-872d-7eb5110de79f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.205899 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-adbe-account-create-update-b276s" event={"ID":"c532fd14-6718-4c7d-9e38-c68bf7b2da6b","Type":"ContainerStarted","Data":"e4994862d77455e776d1f21a360bf4536969a6b61a32dc2dc2986ee9c7770f98"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.205969 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9/ovsdbserver-sb/0.log" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.206048 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.216420 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a276ba4e-bbab-4a83-8fd2-d77573782aa6/ovsdbserver-sb/0.log" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.216731 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.216563 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a276ba4e-bbab-4a83-8fd2-d77573782aa6","Type":"ContainerDied","Data":"56e76ebe8499616e7e38322f1f1fa612b42b6c40fb5b893b8175391424853f23"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.219993 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e0c945f-6773-4bf8-872d-7eb5110de79f-kube-api-access-2xctk" (OuterVolumeSpecName: "kube-api-access-2xctk") pod "7e0c945f-6773-4bf8-872d-7eb5110de79f" (UID: "7e0c945f-6773-4bf8-872d-7eb5110de79f"). InnerVolumeSpecName "kube-api-access-2xctk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.225069 5136 generic.go:334] "Generic (PLEG): container finished" podID="25254bce-daf4-4521-ae48-e6c53e458cb4" containerID="9e5b58fa90ab6a9a965276a68d4ee135aa252e61fbc159c5d0aa6f6134637333" exitCode=143 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.225183 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"25254bce-daf4-4521-ae48-e6c53e458cb4","Type":"ContainerDied","Data":"9e5b58fa90ab6a9a965276a68d4ee135aa252e61fbc159c5d0aa6f6134637333"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.235031 5136 scope.go:117] "RemoveContainer" containerID="80cd4447da5c15081099a2322e91b181cd5f6d15be6bc9e16f91a567a1b6f423" Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.235857 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:53 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: if [ -n "nova_cell0" ]; then Mar 20 09:02:53 crc kubenswrapper[5136]: GRANT_DATABASE="nova_cell0" Mar 20 09:02:53 crc kubenswrapper[5136]: else Mar 20 09:02:53 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:53 crc kubenswrapper[5136]: fi Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc 
kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:53 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:53 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:53 crc kubenswrapper[5136]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:53 crc kubenswrapper[5136]: # support updates Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.236579 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80cd4447da5c15081099a2322e91b181cd5f6d15be6bc9e16f91a567a1b6f423\": container with ID starting with 80cd4447da5c15081099a2322e91b181cd5f6d15be6bc9e16f91a567a1b6f423 not found: ID does not exist" containerID="80cd4447da5c15081099a2322e91b181cd5f6d15be6bc9e16f91a567a1b6f423" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.236610 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80cd4447da5c15081099a2322e91b181cd5f6d15be6bc9e16f91a567a1b6f423"} err="failed to get container status \"80cd4447da5c15081099a2322e91b181cd5f6d15be6bc9e16f91a567a1b6f423\": rpc error: code = NotFound desc = could not find container \"80cd4447da5c15081099a2322e91b181cd5f6d15be6bc9e16f91a567a1b6f423\": container with ID starting with 80cd4447da5c15081099a2322e91b181cd5f6d15be6bc9e16f91a567a1b6f423 not found: ID does not exist" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.236634 5136 scope.go:117] "RemoveContainer" containerID="adb0722e1140982d66b6bcc4b53d108b1a1da62c36d81757cff1dc3b6b31b52c" Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.237077 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with 
CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-adbe-account-create-update-b276s" podUID="c532fd14-6718-4c7d-9e38-c68bf7b2da6b" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.240165 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e664-account-create-update-fb9gm" event={"ID":"6d5ec1f6-0809-4582-902e-00638e6e4580","Type":"ContainerStarted","Data":"21dbb634e022f62bb8152e34fda1da571499258dd53011f3f25fcafd54a5990f"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.257050 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_7e0c945f-6773-4bf8-872d-7eb5110de79f/ovsdbserver-sb/0.log" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.257122 5136 generic.go:334] "Generic (PLEG): container finished" podID="7e0c945f-6773-4bf8-872d-7eb5110de79f" containerID="49b205017f1afa7852c73b13644c48ef8382cbecd7e8c8de906f466d5717a06f" exitCode=143 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.257231 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"7e0c945f-6773-4bf8-872d-7eb5110de79f","Type":"ContainerDied","Data":"49b205017f1afa7852c73b13644c48ef8382cbecd7e8c8de906f466d5717a06f"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.257242 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.292771 5136 generic.go:334] "Generic (PLEG): container finished" podID="f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" containerID="dc5c877955a35dec3f0759a463cf453fa6e734d4ecff48bdc9d1ab0f3eab0299" exitCode=0 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.292883 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" event={"ID":"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6","Type":"ContainerDied","Data":"dc5c877955a35dec3f0759a463cf453fa6e734d4ecff48bdc9d1ab0f3eab0299"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.292947 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" event={"ID":"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6","Type":"ContainerDied","Data":"8c1f27408a6ade394d6e2ab0bd5959f877552185e88f9a7307e4a1f9978accc6"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.293045 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.299864 5136 generic.go:334] "Generic (PLEG): container finished" podID="9fe5d992-c030-4957-8388-763c8fa32d22" containerID="99aa025dc61faebaa87d0e9d2a4856c44ddf012f862d7c369fe941dabbd9836f" exitCode=143 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.299941 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9fe5d992-c030-4957-8388-763c8fa32d22","Type":"ContainerDied","Data":"99aa025dc61faebaa87d0e9d2a4856c44ddf012f862d7c369fe941dabbd9836f"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.298048 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-373c88d9-f88e-464e-b41f-9f601361fa14\") pod \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.304844 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-scripts\") pod \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.304904 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-ovsdbserver-sb-tls-certs\") pod \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.305011 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-combined-ca-bundle\") pod 
\"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.305040 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdn2l\" (UniqueName: \"kubernetes.io/projected/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-kube-api-access-zdn2l\") pod \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.305092 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-ovsdb-rundir\") pod \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.305540 5136 generic.go:334] "Generic (PLEG): container finished" podID="7be786a7-1dee-4cfb-bada-4883a9326c71" containerID="b04544c3c8633753614a0867721f5e1c8cbd9623a90831589d5ac39548c77e5a" exitCode=0 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.306208 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-scripts" (OuterVolumeSpecName: "scripts") pod "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" (UID: "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.306624 5136 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/root-account-create-update-dvqsp" secret="" err="secret \"galera-openstack-cell1-dockercfg-mtswd\" not found" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.306467 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="prometheus" containerID="cri-o://10b1e7850200894bb4a977e72087deccfe8ba698c99b51dc2f4eca430f875e7b" gracePeriod=600 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.307909 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" (UID: "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.308066 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-metrics-certs-tls-certs\") pod \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.308256 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-config\") pod \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.309054 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-config" (OuterVolumeSpecName: "config") pod "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" (UID: "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.307755 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="thanos-sidecar" containerID="cri-o://58ad47341d36382588ada0b35a93996f0bd176fcc8c2283488644a65072e6d6a" gracePeriod=600 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.309733 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="config-reloader" containerID="cri-o://22f88ba3ba68ee1437a03f43cb79cd30dcc82808fe8518d87eaa412f688babdc" gracePeriod=600 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.310494 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.310529 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.310541 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7e0c945f-6773-4bf8-872d-7eb5110de79f-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.310551 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0c945f-6773-4bf8-872d-7eb5110de79f-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.310562 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.310574 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xctk\" (UniqueName: \"kubernetes.io/projected/7e0c945f-6773-4bf8-872d-7eb5110de79f-kube-api-access-2xctk\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.310661 5136 secret.go:188] Couldn't get secret openstack/nova-scheduler-config-data: secret "nova-scheduler-config-data" not found Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.310710 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-config-data podName:41ed7c59-18ee-44ec-8068-ccc9e82485a6 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:55.310691322 +0000 UTC m=+8007.570002513 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-config-data") pod "nova-scheduler-0" (UID: "41ed7c59-18ee-44ec-8068-ccc9e82485a6") : secret "nova-scheduler-config-data" not found Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.317677 5136 scope.go:117] "RemoveContainer" containerID="4df28a5a5a8e8e00719fc2794075f2ee6eccac836a858353622d063e1cd2beff" Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.319698 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:53 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: export 
DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: if [ -n "" ]; then Mar 20 09:02:53 crc kubenswrapper[5136]: GRANT_DATABASE="" Mar 20 09:02:53 crc kubenswrapper[5136]: else Mar 20 09:02:53 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:53 crc kubenswrapper[5136]: fi Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:53 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:53 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:53 crc kubenswrapper[5136]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:53 crc kubenswrapper[5136]: # support updates Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.321180 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-dvqsp" podUID="7660b6b5-094d-4da5-9d34-fe85c863d887" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.326437 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7be786a7-1dee-4cfb-bada-4883a9326c71","Type":"ContainerDied","Data":"b04544c3c8633753614a0867721f5e1c8cbd9623a90831589d5ac39548c77e5a"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.338204 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e0c945f-6773-4bf8-872d-7eb5110de79f" (UID: "7e0c945f-6773-4bf8-872d-7eb5110de79f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.368228 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-kube-api-access-zdn2l" (OuterVolumeSpecName: "kube-api-access-zdn2l") pod "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" (UID: "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9"). InnerVolumeSpecName "kube-api-access-zdn2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.412403 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdn2l\" (UniqueName: \"kubernetes.io/projected/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-kube-api-access-zdn2l\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.412452 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.414629 5136 scope.go:117] "RemoveContainer" containerID="ff7aad450b5bf148e0d8e2a6a1a41eb2960ad7d591108755ada3cf41b5ab3619" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.467231 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "7e0c945f-6773-4bf8-872d-7eb5110de79f" (UID: "7e0c945f-6773-4bf8-872d-7eb5110de79f"). InnerVolumeSpecName "pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.476495 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-35ea-account-create-update-7d4sf"] Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.514593 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80\") on node \"crc\" " Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.517899 5136 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.518022 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts podName:7660b6b5-094d-4da5-9d34-fe85c863d887 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:55.5179948 +0000 UTC m=+8007.777306011 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts") pod "root-account-create-update-dvqsp" (UID: "7660b6b5-094d-4da5-9d34-fe85c863d887") : configmap "openstack-cell1-scripts" not found Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.563445 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.563687 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80") on node "crc" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.578063 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65bbbb4567-25rj9"] Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.592309 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_48418ecc-b768-4848-b663-1a84761f5b32/ovsdbserver-nb/0.log" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.592401 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.605911 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65bbbb4567-25rj9"] Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.617752 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48418ecc-b768-4848-b663-1a84761f5b32-config\") pod \"48418ecc-b768-4848-b663-1a84761f5b32\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.619280 5136 scope.go:117] "RemoveContainer" containerID="49b205017f1afa7852c73b13644c48ef8382cbecd7e8c8de906f466d5717a06f" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.619577 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\") pod \"48418ecc-b768-4848-b663-1a84761f5b32\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.619621 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-metrics-certs-tls-certs\") pod \"48418ecc-b768-4848-b663-1a84761f5b32\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.619675 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzrr5\" (UniqueName: \"kubernetes.io/projected/48418ecc-b768-4848-b663-1a84761f5b32-kube-api-access-kzrr5\") pod \"48418ecc-b768-4848-b663-1a84761f5b32\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.619699 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "a276ba4e-bbab-4a83-8fd2-d77573782aa6" (UID: "a276ba4e-bbab-4a83-8fd2-d77573782aa6"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.619716 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-combined-ca-bundle\") pod \"48418ecc-b768-4848-b663-1a84761f5b32\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.619856 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-metrics-certs-tls-certs\") pod \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.619902 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/48418ecc-b768-4848-b663-1a84761f5b32-ovsdb-rundir\") pod \"48418ecc-b768-4848-b663-1a84761f5b32\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.619964 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-ovsdbserver-nb-tls-certs\") pod \"48418ecc-b768-4848-b663-1a84761f5b32\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.620015 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48418ecc-b768-4848-b663-1a84761f5b32-scripts\") pod \"48418ecc-b768-4848-b663-1a84761f5b32\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.621631 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/48418ecc-b768-4848-b663-1a84761f5b32-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "48418ecc-b768-4848-b663-1a84761f5b32" (UID: "48418ecc-b768-4848-b663-1a84761f5b32"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: W0320 09:02:53.621704 5136 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/a276ba4e-bbab-4a83-8fd2-d77573782aa6/volumes/kubernetes.io~secret/metrics-certs-tls-certs Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.621712 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "a276ba4e-bbab-4a83-8fd2-d77573782aa6" (UID: "a276ba4e-bbab-4a83-8fd2-d77573782aa6"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.621990 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.622010 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/48418ecc-b768-4848-b663-1a84761f5b32-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.622024 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.622472 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/configmap/48418ecc-b768-4848-b663-1a84761f5b32-scripts" (OuterVolumeSpecName: "scripts") pod "48418ecc-b768-4848-b663-1a84761f5b32" (UID: "48418ecc-b768-4848-b663-1a84761f5b32"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.626010 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48418ecc-b768-4848-b663-1a84761f5b32-config" (OuterVolumeSpecName: "config") pod "48418ecc-b768-4848-b663-1a84761f5b32" (UID: "48418ecc-b768-4848-b663-1a84761f5b32"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.629931 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" containerName="rabbitmq" containerID="cri-o://55989c472a0077640a315a8d0db45eb57abfdba8bbd4415c0bb59c9d232cd911" gracePeriod=604800 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.661916 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48418ecc-b768-4848-b663-1a84761f5b32-kube-api-access-kzrr5" (OuterVolumeSpecName: "kube-api-access-kzrr5") pod "48418ecc-b768-4848-b663-1a84761f5b32" (UID: "48418ecc-b768-4848-b663-1a84761f5b32"). InnerVolumeSpecName "kube-api-access-kzrr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.666480 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "a276ba4e-bbab-4a83-8fd2-d77573782aa6" (UID: "a276ba4e-bbab-4a83-8fd2-d77573782aa6"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.681057 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-373c88d9-f88e-464e-b41f-9f601361fa14" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" (UID: "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9"). InnerVolumeSpecName "pvc-373c88d9-f88e-464e-b41f-9f601361fa14". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.699266 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:53 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: if [ -n "aodh" ]; then Mar 20 09:02:53 crc kubenswrapper[5136]: GRANT_DATABASE="aodh" Mar 20 09:02:53 crc kubenswrapper[5136]: else Mar 20 09:02:53 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:53 crc kubenswrapper[5136]: fi Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:53 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:53 crc kubenswrapper[5136]: # 2. 
MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:53 crc kubenswrapper[5136]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:53 crc kubenswrapper[5136]: # support updates Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.702018 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"aodh-db-secret\\\" not found\"" pod="openstack/aodh-35ea-account-create-update-7d4sf" podUID="bdff16b6-0410-4448-a15c-3f22f5890d91" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.707982 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" (UID: "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.725153 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48418ecc-b768-4848-b663-1a84761f5b32-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.725202 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-373c88d9-f88e-464e-b41f-9f601361fa14\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-373c88d9-f88e-464e-b41f-9f601361fa14\") on node \"crc\" " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.725213 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48418ecc-b768-4848-b663-1a84761f5b32-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.725224 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.725236 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.725244 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzrr5\" (UniqueName: \"kubernetes.io/projected/48418ecc-b768-4848-b663-1a84761f5b32-kube-api-access-kzrr5\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.763949 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247c0e17-cb61-42ba-9ec9-5459166a5919 podName:48418ecc-b768-4848-b663-1a84761f5b32 nodeName:}" 
failed. No retries permitted until 2026-03-20 09:02:54.263890993 +0000 UTC m=+8006.523202144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "ovndbcluster-nb-etc-ovn" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247c0e17-cb61-42ba-9ec9-5459166a5919") pod "48418ecc-b768-4848-b663-1a84761f5b32" (UID: "48418ecc-b768-4848-b663-1a84761f5b32") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.768558 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48418ecc-b768-4848-b663-1a84761f5b32" (UID: "48418ecc-b768-4848-b663-1a84761f5b32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.792957 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "7e0c945f-6773-4bf8-872d-7eb5110de79f" (UID: "7e0c945f-6773-4bf8-872d-7eb5110de79f"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.831191 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.831634 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-373c88d9-f88e-464e-b41f-9f601361fa14" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-373c88d9-f88e-464e-b41f-9f601361fa14") on node "crc" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.837012 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.837050 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.837065 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-373c88d9-f88e-464e-b41f-9f601361fa14\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-373c88d9-f88e-464e-b41f-9f601361fa14\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.858744 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "7e0c945f-6773-4bf8-872d-7eb5110de79f" (UID: "7e0c945f-6773-4bf8-872d-7eb5110de79f"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.949592 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.949671 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.949716 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data podName:804d1bff-7c63-45a1-bf1a-68f3eedb6ac7 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:55.949701735 +0000 UTC m=+8008.209012886 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data") pod "rabbitmq-server-0" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7") : configmap "rabbitmq-config-data" not found Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.951968 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "48418ecc-b768-4848-b663-1a84761f5b32" (UID: "48418ecc-b768-4848-b663-1a84761f5b32"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.975002 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" (UID: "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9"). 
InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.975174 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" (UID: "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.011016 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "48418ecc-b768-4848-b663-1a84761f5b32" (UID: "48418ecc-b768-4848-b663-1a84761f5b32"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.052647 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.052684 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.052693 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.052701 5136 reconciler_common.go:293] "Volume detached for 
volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.131740 5136 scope.go:117] "RemoveContainer" containerID="dc5c877955a35dec3f0759a463cf453fa6e734d4ecff48bdc9d1ab0f3eab0299" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.132652 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-24c6-account-create-update-48knr" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.152235 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e664-account-create-update-fb9gm" Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.154319 5136 secret.go:188] Couldn't get secret openstack/prometheus-metric-storage: secret "prometheus-metric-storage" not found Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.154365 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config podName:c5271b0d-ac1b-480c-b4b8-3b634246ae62 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:56.154351921 +0000 UTC m=+8008.413663072 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config") pod "prometheus-metric-storage-0" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62") : secret "prometheus-metric-storage" not found Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.154583 5136 projected.go:263] Couldn't get secret openstack/prometheus-metric-storage-tls-assets-0: secret "prometheus-metric-storage-tls-assets-0" not found Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.154600 5136 projected.go:194] Error preparing data for projected volume tls-assets for pod openstack/prometheus-metric-storage-0: secret "prometheus-metric-storage-tls-assets-0" not found Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.154620 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-tls-assets podName:c5271b0d-ac1b-480c-b4b8-3b634246ae62 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:56.154614509 +0000 UTC m=+8008.413925660 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-tls-assets") pod "prometheus-metric-storage-0" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62") : secret "prometheus-metric-storage-tls-assets-0" not found Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.154654 5136 secret.go:188] Couldn't get secret openstack/prometheus-metric-storage-web-config: secret "prometheus-metric-storage-web-config" not found Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.154672 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config podName:c5271b0d-ac1b-480c-b4b8-3b634246ae62 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:56.154666791 +0000 UTC m=+8008.413977942 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config") pod "prometheus-metric-storage-0" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62") : secret "prometheus-metric-storage-web-config" not found Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.154713 5136 secret.go:188] Couldn't get secret openstack/prometheus-metric-storage-thanos-prometheus-http-client-file: secret "prometheus-metric-storage-thanos-prometheus-http-client-file" not found Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.154731 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-thanos-prometheus-http-client-file podName:c5271b0d-ac1b-480c-b4b8-3b634246ae62 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:56.154725283 +0000 UTC m=+8008.414036434 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-thanos-prometheus-http-client-file") pod "prometheus-metric-storage-0" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62") : secret "prometheus-metric-storage-thanos-prometheus-http-client-file" not found Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.155592 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.180385 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.183128 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.196670 5136 scope.go:117] "RemoveContainer" containerID="43fd5cf8b218da2033f5722a025f01d99bffa05bc2ffc6e027121deb4f58ff01" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.239703 5136 scope.go:117] "RemoveContainer" containerID="dc5c877955a35dec3f0759a463cf453fa6e734d4ecff48bdc9d1ab0f3eab0299" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.240031 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f4fd5c29-d308-41d0-9781-9b6d9625c19c/ovsdbserver-nb/0.log" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.240089 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.241563 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc5c877955a35dec3f0759a463cf453fa6e734d4ecff48bdc9d1ab0f3eab0299\": container with ID starting with dc5c877955a35dec3f0759a463cf453fa6e734d4ecff48bdc9d1ab0f3eab0299 not found: ID does not exist" containerID="dc5c877955a35dec3f0759a463cf453fa6e734d4ecff48bdc9d1ab0f3eab0299" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.241593 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc5c877955a35dec3f0759a463cf453fa6e734d4ecff48bdc9d1ab0f3eab0299"} err="failed to get container status \"dc5c877955a35dec3f0759a463cf453fa6e734d4ecff48bdc9d1ab0f3eab0299\": rpc error: code = NotFound desc = could not find container \"dc5c877955a35dec3f0759a463cf453fa6e734d4ecff48bdc9d1ab0f3eab0299\": container with ID starting with dc5c877955a35dec3f0759a463cf453fa6e734d4ecff48bdc9d1ab0f3eab0299 not 
found: ID does not exist" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.241616 5136 scope.go:117] "RemoveContainer" containerID="43fd5cf8b218da2033f5722a025f01d99bffa05bc2ffc6e027121deb4f58ff01" Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.249303 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43fd5cf8b218da2033f5722a025f01d99bffa05bc2ffc6e027121deb4f58ff01\": container with ID starting with 43fd5cf8b218da2033f5722a025f01d99bffa05bc2ffc6e027121deb4f58ff01 not found: ID does not exist" containerID="43fd5cf8b218da2033f5722a025f01d99bffa05bc2ffc6e027121deb4f58ff01" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.249384 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43fd5cf8b218da2033f5722a025f01d99bffa05bc2ffc6e027121deb4f58ff01"} err="failed to get container status \"43fd5cf8b218da2033f5722a025f01d99bffa05bc2ffc6e027121deb4f58ff01\": rpc error: code = NotFound desc = could not find container \"43fd5cf8b218da2033f5722a025f01d99bffa05bc2ffc6e027121deb4f58ff01\": container with ID starting with 43fd5cf8b218da2033f5722a025f01d99bffa05bc2ffc6e027121deb4f58ff01 not found: ID does not exist" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.267463 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6756\" (UniqueName: \"kubernetes.io/projected/fe703c94-1aec-47a6-81a7-8510ed330866-kube-api-access-b6756\") pod \"fe703c94-1aec-47a6-81a7-8510ed330866\" (UID: \"fe703c94-1aec-47a6-81a7-8510ed330866\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.267504 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d5ec1f6-0809-4582-902e-00638e6e4580-operator-scripts\") pod \"6d5ec1f6-0809-4582-902e-00638e6e4580\" (UID: \"6d5ec1f6-0809-4582-902e-00638e6e4580\") " Mar 20 
09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.267579 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\") pod \"48418ecc-b768-4848-b663-1a84761f5b32\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.267688 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj8b7\" (UniqueName: \"kubernetes.io/projected/6d5ec1f6-0809-4582-902e-00638e6e4580-kube-api-access-jj8b7\") pod \"6d5ec1f6-0809-4582-902e-00638e6e4580\" (UID: \"6d5ec1f6-0809-4582-902e-00638e6e4580\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.267830 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe703c94-1aec-47a6-81a7-8510ed330866-operator-scripts\") pod \"fe703c94-1aec-47a6-81a7-8510ed330866\" (UID: \"fe703c94-1aec-47a6-81a7-8510ed330866\") " Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.268291 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.268329 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data podName:e2c9ab46-3143-4472-a606-cd75def78f41 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:58.26831665 +0000 UTC m=+8010.527627801 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data") pod "rabbitmq-cell1-server-0" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41") : configmap "rabbitmq-cell1-config-data" not found Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.282825 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe703c94-1aec-47a6-81a7-8510ed330866-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe703c94-1aec-47a6-81a7-8510ed330866" (UID: "fe703c94-1aec-47a6-81a7-8510ed330866"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.285450 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d5ec1f6-0809-4582-902e-00638e6e4580-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d5ec1f6-0809-4582-902e-00638e6e4580" (UID: "6d5ec1f6-0809-4582-902e-00638e6e4580"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.294786 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe703c94-1aec-47a6-81a7-8510ed330866-kube-api-access-b6756" (OuterVolumeSpecName: "kube-api-access-b6756") pod "fe703c94-1aec-47a6-81a7-8510ed330866" (UID: "fe703c94-1aec-47a6-81a7-8510ed330866"). InnerVolumeSpecName "kube-api-access-b6756". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.296807 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d5ec1f6-0809-4582-902e-00638e6e4580-kube-api-access-jj8b7" (OuterVolumeSpecName: "kube-api-access-jj8b7") pod "6d5ec1f6-0809-4582-902e-00638e6e4580" (UID: "6d5ec1f6-0809-4582-902e-00638e6e4580"). 
InnerVolumeSpecName "kube-api-access-jj8b7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.303164 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-965f7d5f6-cshp2"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.303362 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-965f7d5f6-cshp2" podUID="0007e89c-1f52-4ac8-beed-59d6db6e60fd" containerName="proxy-httpd" containerID="cri-o://5e5a77d7952567153e8f93b101532ad12cc95f1597c77efe34c080e974b22447" gracePeriod=30 Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.303618 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-965f7d5f6-cshp2" podUID="0007e89c-1f52-4ac8-beed-59d6db6e60fd" containerName="proxy-server" containerID="cri-o://9a6fe348ea134460d09531b2378ad3abce82d81a5457e369dbee025701fbe318" gracePeriod=30 Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.314790 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-1"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.370177 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-ovsdbserver-nb-tls-certs\") pod \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.370241 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4fd5c29-d308-41d0-9781-9b6d9625c19c-ovsdb-rundir\") pod \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.370274 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-vencrypt-tls-certs\") pod \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.370439 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-nova-novncproxy-tls-certs\") pod \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.370479 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-config-data\") pod \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.370506 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p255f\" (UniqueName: \"kubernetes.io/projected/f4fd5c29-d308-41d0-9781-9b6d9625c19c-kube-api-access-p255f\") pod \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.370529 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6j8r\" (UniqueName: \"kubernetes.io/projected/65b4b8da-0eda-4a77-aeed-0a6f9350a942-kube-api-access-z6j8r\") pod \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.370565 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-combined-ca-bundle\") pod \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\" (UID: 
\"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.370691 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4fd5c29-d308-41d0-9781-9b6d9625c19c-config\") pod \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.370733 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4fd5c29-d308-41d0-9781-9b6d9625c19c-scripts\") pod \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.378004 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81\") pod \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.378105 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-combined-ca-bundle\") pod \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.378610 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-metrics-certs-tls-certs\") pod \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.379545 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/fe703c94-1aec-47a6-81a7-8510ed330866-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.379564 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6756\" (UniqueName: \"kubernetes.io/projected/fe703c94-1aec-47a6-81a7-8510ed330866-kube-api-access-b6756\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.379576 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d5ec1f6-0809-4582-902e-00638e6e4580-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.379586 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj8b7\" (UniqueName: \"kubernetes.io/projected/6d5ec1f6-0809-4582-902e-00638e6e4580-kube-api-access-jj8b7\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.381445 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4fd5c29-d308-41d0-9781-9b6d9625c19c-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "f4fd5c29-d308-41d0-9781-9b6d9625c19c" (UID: "f4fd5c29-d308-41d0-9781-9b6d9625c19c"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.381637 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4fd5c29-d308-41d0-9781-9b6d9625c19c-scripts" (OuterVolumeSpecName: "scripts") pod "f4fd5c29-d308-41d0-9781-9b6d9625c19c" (UID: "f4fd5c29-d308-41d0-9781-9b6d9625c19c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.381950 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4fd5c29-d308-41d0-9781-9b6d9625c19c-config" (OuterVolumeSpecName: "config") pod "f4fd5c29-d308-41d0-9781-9b6d9625c19c" (UID: "f4fd5c29-d308-41d0-9781-9b6d9625c19c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.385176 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-1"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.385477 5136 generic.go:334] "Generic (PLEG): container finished" podID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerID="58ad47341d36382588ada0b35a93996f0bd176fcc8c2283488644a65072e6d6a" exitCode=0 Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.385640 5136 generic.go:334] "Generic (PLEG): container finished" podID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerID="22f88ba3ba68ee1437a03f43cb79cd30dcc82808fe8518d87eaa412f688babdc" exitCode=0 Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.385652 5136 generic.go:334] "Generic (PLEG): container finished" podID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerID="10b1e7850200894bb4a977e72087deccfe8ba698c99b51dc2f4eca430f875e7b" exitCode=0 Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.385615 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5271b0d-ac1b-480c-b4b8-3b634246ae62","Type":"ContainerDied","Data":"58ad47341d36382588ada0b35a93996f0bd176fcc8c2283488644a65072e6d6a"} Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.385766 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"c5271b0d-ac1b-480c-b4b8-3b634246ae62","Type":"ContainerDied","Data":"22f88ba3ba68ee1437a03f43cb79cd30dcc82808fe8518d87eaa412f688babdc"} Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.385777 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5271b0d-ac1b-480c-b4b8-3b634246ae62","Type":"ContainerDied","Data":"10b1e7850200894bb4a977e72087deccfe8ba698c99b51dc2f4eca430f875e7b"} Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.392552 5136 generic.go:334] "Generic (PLEG): container finished" podID="65b4b8da-0eda-4a77-aeed-0a6f9350a942" containerID="bf5e034bd14558262e03fbe126fcd810c8c7101bbdef0bc2012ee0b90cfbc961" exitCode=0 Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.392622 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"65b4b8da-0eda-4a77-aeed-0a6f9350a942","Type":"ContainerDied","Data":"bf5e034bd14558262e03fbe126fcd810c8c7101bbdef0bc2012ee0b90cfbc961"} Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.392651 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"65b4b8da-0eda-4a77-aeed-0a6f9350a942","Type":"ContainerDied","Data":"80db3f58b1ebfbb9a5e2a7946e85f8a9a484a8c2e28fb3c3b16dbcc6876113ea"} Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.392667 5136 scope.go:117] "RemoveContainer" containerID="bf5e034bd14558262e03fbe126fcd810c8c7101bbdef0bc2012ee0b90cfbc961" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.392750 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.405989 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4fd5c29-d308-41d0-9781-9b6d9625c19c-kube-api-access-p255f" (OuterVolumeSpecName: "kube-api-access-p255f") pod "f4fd5c29-d308-41d0-9781-9b6d9625c19c" (UID: "f4fd5c29-d308-41d0-9781-9b6d9625c19c"). InnerVolumeSpecName "kube-api-access-p255f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.429920 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65b4b8da-0eda-4a77-aeed-0a6f9350a942-kube-api-access-z6j8r" (OuterVolumeSpecName: "kube-api-access-z6j8r") pod "65b4b8da-0eda-4a77-aeed-0a6f9350a942" (UID: "65b4b8da-0eda-4a77-aeed-0a6f9350a942"). InnerVolumeSpecName "kube-api-access-z6j8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.447043 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70745a35-fe6f-4248-ac87-970763afe00e" path="/var/lib/kubelet/pods/70745a35-fe6f-4248-ac87-970763afe00e/volumes" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.447885 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e0c945f-6773-4bf8-872d-7eb5110de79f" path="/var/lib/kubelet/pods/7e0c945f-6773-4bf8-872d-7eb5110de79f/volumes" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.451070 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cefd58c-a889-4893-aa87-b106eae1c7ad" path="/var/lib/kubelet/pods/9cefd58c-a889-4893-aa87-b106eae1c7ad/volumes" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.462395 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a276ba4e-bbab-4a83-8fd2-d77573782aa6" path="/var/lib/kubelet/pods/a276ba4e-bbab-4a83-8fd2-d77573782aa6/volumes" Mar 20 
09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.463033 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" path="/var/lib/kubelet/pods/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6/volumes" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.464982 5136 generic.go:334] "Generic (PLEG): container finished" podID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerID="c48b0b35287dd607ba880a1efbf0d79170312ce43d17777e16e04be5b17bbe8a" exitCode=0 Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.465192 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.485832 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"53db9385-e63d-49a6-8dab-854c4bcd01f1","Type":"ContainerDied","Data":"c48b0b35287dd607ba880a1efbf0d79170312ce43d17777e16e04be5b17bbe8a"} Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.485882 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qtl55"] Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486275 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48418ecc-b768-4848-b663-1a84761f5b32" containerName="openstack-network-exporter" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486290 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="48418ecc-b768-4848-b663-1a84761f5b32" containerName="openstack-network-exporter" Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486308 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b4b8da-0eda-4a77-aeed-0a6f9350a942" containerName="nova-cell1-novncproxy-novncproxy" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486316 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b4b8da-0eda-4a77-aeed-0a6f9350a942" 
containerName="nova-cell1-novncproxy-novncproxy" Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486328 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a276ba4e-bbab-4a83-8fd2-d77573782aa6" containerName="openstack-network-exporter" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486335 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a276ba4e-bbab-4a83-8fd2-d77573782aa6" containerName="openstack-network-exporter" Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486348 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" containerName="openstack-network-exporter" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486355 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" containerName="openstack-network-exporter" Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486373 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e0c945f-6773-4bf8-872d-7eb5110de79f" containerName="openstack-network-exporter" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486378 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e0c945f-6773-4bf8-872d-7eb5110de79f" containerName="openstack-network-exporter" Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486390 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" containerName="dnsmasq-dns" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486396 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" containerName="dnsmasq-dns" Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486404 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" containerName="ovsdbserver-sb" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486409 5136 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" containerName="ovsdbserver-sb" Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486423 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" containerName="init" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486430 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" containerName="init" Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486439 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4fd5c29-d308-41d0-9781-9b6d9625c19c" containerName="openstack-network-exporter" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486445 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4fd5c29-d308-41d0-9781-9b6d9625c19c" containerName="openstack-network-exporter" Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486453 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48418ecc-b768-4848-b663-1a84761f5b32" containerName="ovsdbserver-nb" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486460 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="48418ecc-b768-4848-b663-1a84761f5b32" containerName="ovsdbserver-nb" Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486471 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4fd5c29-d308-41d0-9781-9b6d9625c19c" containerName="ovsdbserver-nb" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486476 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4fd5c29-d308-41d0-9781-9b6d9625c19c" containerName="ovsdbserver-nb" Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486483 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41ed7c59-18ee-44ec-8068-ccc9e82485a6" containerName="nova-scheduler-scheduler" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486489 5136 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="41ed7c59-18ee-44ec-8068-ccc9e82485a6" containerName="nova-scheduler-scheduler" Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486498 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a276ba4e-bbab-4a83-8fd2-d77573782aa6" containerName="ovsdbserver-sb" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486504 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a276ba4e-bbab-4a83-8fd2-d77573782aa6" containerName="ovsdbserver-sb" Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486511 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e0c945f-6773-4bf8-872d-7eb5110de79f" containerName="ovsdbserver-sb" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486516 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e0c945f-6773-4bf8-872d-7eb5110de79f" containerName="ovsdbserver-sb" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486683 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a276ba4e-bbab-4a83-8fd2-d77573782aa6" containerName="openstack-network-exporter" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486695 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" containerName="openstack-network-exporter" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486701 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="41ed7c59-18ee-44ec-8068-ccc9e82485a6" containerName="nova-scheduler-scheduler" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486714 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a276ba4e-bbab-4a83-8fd2-d77573782aa6" containerName="ovsdbserver-sb" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486725 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4fd5c29-d308-41d0-9781-9b6d9625c19c" containerName="ovsdbserver-nb" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486735 
5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4fd5c29-d308-41d0-9781-9b6d9625c19c" containerName="openstack-network-exporter" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486743 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" containerName="dnsmasq-dns" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486753 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" containerName="ovsdbserver-sb" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486764 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e0c945f-6773-4bf8-872d-7eb5110de79f" containerName="ovsdbserver-sb" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486773 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b4b8da-0eda-4a77-aeed-0a6f9350a942" containerName="nova-cell1-novncproxy-novncproxy" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486781 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="48418ecc-b768-4848-b663-1a84761f5b32" containerName="openstack-network-exporter" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486790 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e0c945f-6773-4bf8-872d-7eb5110de79f" containerName="openstack-network-exporter" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486801 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="48418ecc-b768-4848-b663-1a84761f5b32" containerName="ovsdbserver-nb" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.487361 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qtl55"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.487376 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-3e97-account-create-update-6qvr2"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 
09:02:54.487387 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-3e97-account-create-update-6qvr2"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.487470 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qtl55" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.490075 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-35ea-account-create-update-7d4sf" event={"ID":"bdff16b6-0410-4448-a15c-3f22f5890d91","Type":"ContainerStarted","Data":"420f494bfdb1a8eee2db9127145e2ede22bd33f1f2aa87cae0d3548617967daa"} Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.492436 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.505878 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-3e97-account-create-update-tkzc8"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.507332 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-3e97-account-create-update-tkzc8" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.510349 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.513659 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4fd5c29-d308-41d0-9781-9b6d9625c19c-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.513702 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p255f\" (UniqueName: \"kubernetes.io/projected/f4fd5c29-d308-41d0-9781-9b6d9625c19c-kube-api-access-p255f\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.513715 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6j8r\" (UniqueName: \"kubernetes.io/projected/65b4b8da-0eda-4a77-aeed-0a6f9350a942-kube-api-access-z6j8r\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.513725 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4fd5c29-d308-41d0-9781-9b6d9625c19c-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.513737 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4fd5c29-d308-41d0-9781-9b6d9625c19c-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.528175 5136 generic.go:334] "Generic (PLEG): container finished" podID="d4bc380a-4852-40d3-b03d-67f762c778d3" containerID="a2cd799ad38f20f3a20df188a90ca9d10f639dafb3f002a582a1fe8b8331c153" exitCode=0 Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.528268 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"d4bc380a-4852-40d3-b03d-67f762c778d3","Type":"ContainerDied","Data":"a2cd799ad38f20f3a20df188a90ca9d10f639dafb3f002a582a1fe8b8331c153"} Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.542679 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-24c6-account-create-update-48knr" event={"ID":"fe703c94-1aec-47a6-81a7-8510ed330866","Type":"ContainerDied","Data":"a8837f3aa103623ab06077854fdb2ccb4185d7609e123f11e58958b51d99dfcb"} Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.542773 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-24c6-account-create-update-48knr" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.564046 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-gnx9m"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.591148 5136 scope.go:117] "RemoveContainer" containerID="bf5e034bd14558262e03fbe126fcd810c8c7101bbdef0bc2012ee0b90cfbc961" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.591254 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e664-account-create-update-fb9gm" event={"ID":"6d5ec1f6-0809-4582-902e-00638e6e4580","Type":"ContainerDied","Data":"21dbb634e022f62bb8152e34fda1da571499258dd53011f3f25fcafd54a5990f"} Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.591333 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e664-account-create-update-fb9gm" Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.594593 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf5e034bd14558262e03fbe126fcd810c8c7101bbdef0bc2012ee0b90cfbc961\": container with ID starting with bf5e034bd14558262e03fbe126fcd810c8c7101bbdef0bc2012ee0b90cfbc961 not found: ID does not exist" containerID="bf5e034bd14558262e03fbe126fcd810c8c7101bbdef0bc2012ee0b90cfbc961" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.594650 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf5e034bd14558262e03fbe126fcd810c8c7101bbdef0bc2012ee0b90cfbc961"} err="failed to get container status \"bf5e034bd14558262e03fbe126fcd810c8c7101bbdef0bc2012ee0b90cfbc961\": rpc error: code = NotFound desc = could not find container \"bf5e034bd14558262e03fbe126fcd810c8c7101bbdef0bc2012ee0b90cfbc961\": container with ID starting with bf5e034bd14558262e03fbe126fcd810c8c7101bbdef0bc2012ee0b90cfbc961 not found: ID does not exist" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.601875 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-gnx9m"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.614897 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pchhm\" (UniqueName: \"kubernetes.io/projected/41ed7c59-18ee-44ec-8068-ccc9e82485a6-kube-api-access-pchhm\") pod \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\" (UID: \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.615334 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-config-data\") pod \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\" (UID: 
\"41ed7c59-18ee-44ec-8068-ccc9e82485a6\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.615370 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-combined-ca-bundle\") pod \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\" (UID: \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.615751 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwswp\" (UniqueName: \"kubernetes.io/projected/0e905e98-1ffd-4a08-bf51-e89f2d589595-kube-api-access-vwswp\") pod \"root-account-create-update-qtl55\" (UID: \"0e905e98-1ffd-4a08-bf51-e89f2d589595\") " pod="openstack/root-account-create-update-qtl55" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.615796 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e905e98-1ffd-4a08-bf51-e89f2d589595-operator-scripts\") pod \"root-account-create-update-qtl55\" (UID: \"0e905e98-1ffd-4a08-bf51-e89f2d589595\") " pod="openstack/root-account-create-update-qtl55" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.616526 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28644e17-7977-4824-aa44-364f4558d0ad-operator-scripts\") pod \"heat-3e97-account-create-update-tkzc8\" (UID: \"28644e17-7977-4824-aa44-364f4558d0ad\") " pod="openstack/heat-3e97-account-create-update-tkzc8" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.616600 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfccs\" (UniqueName: \"kubernetes.io/projected/28644e17-7977-4824-aa44-364f4558d0ad-kube-api-access-pfccs\") pod 
\"heat-3e97-account-create-update-tkzc8\" (UID: \"28644e17-7977-4824-aa44-364f4558d0ad\") " pod="openstack/heat-3e97-account-create-update-tkzc8" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.623363 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.624929 5136 generic.go:334] "Generic (PLEG): container finished" podID="ea7881c5-b719-41b0-8046-249f7fdb6f61" containerID="5df9d903ae57ec8baad2fe6c51be0e13f0c8a558bfc5471ea6ef07feb8e164f7" exitCode=0 Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.624998 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ea7881c5-b719-41b0-8046-249f7fdb6f61","Type":"ContainerDied","Data":"5df9d903ae57ec8baad2fe6c51be0e13f0c8a558bfc5471ea6ef07feb8e164f7"} Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.627882 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-3e97-account-create-update-tkzc8"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.636501 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f4fd5c29-d308-41d0-9781-9b6d9625c19c/ovsdbserver-nb/0.log" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.636590 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f4fd5c29-d308-41d0-9781-9b6d9625c19c","Type":"ContainerDied","Data":"5870d6b24a1657a079a89b9e9211d461a22b66269225de506dabd34bacc879f1"} Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.636624 5136 scope.go:117] "RemoveContainer" containerID="2ffece60b271290211f6f3963d1642000676cfce31547f3f28dd8ecf96867815" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.636726 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.638804 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41ed7c59-18ee-44ec-8068-ccc9e82485a6-kube-api-access-pchhm" (OuterVolumeSpecName: "kube-api-access-pchhm") pod "41ed7c59-18ee-44ec-8068-ccc9e82485a6" (UID: "41ed7c59-18ee-44ec-8068-ccc9e82485a6"). InnerVolumeSpecName "kube-api-access-pchhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.647283 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7659754fcd-klwkv"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.647530 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-7659754fcd-klwkv" podUID="53ac16e5-846e-40c1-a361-0815d231345a" containerName="heat-engine" containerID="cri-o://72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" gracePeriod=60 Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.648264 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9/ovsdbserver-sb/0.log" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.648381 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9","Type":"ContainerDied","Data":"e68e48705b4cdb3e57af6e933adb8006e7437ee5218f249bb6e11769fe0ee800"} Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.648477 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.655697 5136 generic.go:334] "Generic (PLEG): container finished" podID="41ed7c59-18ee-44ec-8068-ccc9e82485a6" containerID="fd2177fb853527e451d4514e1f42c3c8bc13b2a57df6321d454b27c1bc0cfe79" exitCode=0 Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.655785 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"41ed7c59-18ee-44ec-8068-ccc9e82485a6","Type":"ContainerDied","Data":"fd2177fb853527e451d4514e1f42c3c8bc13b2a57df6321d454b27c1bc0cfe79"} Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.655875 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.693940 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_48418ecc-b768-4848-b663-1a84761f5b32/ovsdbserver-nb/0.log" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.694455 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.701337 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"48418ecc-b768-4848-b663-1a84761f5b32","Type":"ContainerDied","Data":"e05bc317ee3c118e98b670a0ea0d818712ef16b644c13d4a02ef27c03d16c608"} Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.718096 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config-out\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.736206 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-secret-combined-ca-bundle\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.736280 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.736376 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-2\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.736428 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62847\" (UniqueName: \"kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-kube-api-access-62847\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.736465 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-thanos-prometheus-http-client-file\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.736579 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.736604 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-0\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.736727 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.736752 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-tls-assets\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.736781 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.736868 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.736899 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-1\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.737164 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28644e17-7977-4824-aa44-364f4558d0ad-operator-scripts\") pod \"heat-3e97-account-create-update-tkzc8\" (UID: \"28644e17-7977-4824-aa44-364f4558d0ad\") " pod="openstack/heat-3e97-account-create-update-tkzc8" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.737215 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfccs\" (UniqueName: 
\"kubernetes.io/projected/28644e17-7977-4824-aa44-364f4558d0ad-kube-api-access-pfccs\") pod \"heat-3e97-account-create-update-tkzc8\" (UID: \"28644e17-7977-4824-aa44-364f4558d0ad\") " pod="openstack/heat-3e97-account-create-update-tkzc8" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.737424 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwswp\" (UniqueName: \"kubernetes.io/projected/0e905e98-1ffd-4a08-bf51-e89f2d589595-kube-api-access-vwswp\") pod \"root-account-create-update-qtl55\" (UID: \"0e905e98-1ffd-4a08-bf51-e89f2d589595\") " pod="openstack/root-account-create-update-qtl55" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.737491 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e905e98-1ffd-4a08-bf51-e89f2d589595-operator-scripts\") pod \"root-account-create-update-qtl55\" (UID: \"0e905e98-1ffd-4a08-bf51-e89f2d589595\") " pod="openstack/root-account-create-update-qtl55" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.737658 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pchhm\" (UniqueName: \"kubernetes.io/projected/41ed7c59-18ee-44ec-8068-ccc9e82485a6-kube-api-access-pchhm\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.739921 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247c0e17-cb61-42ba-9ec9-5459166a5919" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "48418ecc-b768-4848-b663-1a84761f5b32" (UID: "48418ecc-b768-4848-b663-1a84761f5b32"). InnerVolumeSpecName "pvc-247c0e17-cb61-42ba-9ec9-5459166a5919". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.740190 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e905e98-1ffd-4a08-bf51-e89f2d589595-operator-scripts\") pod \"root-account-create-update-qtl55\" (UID: \"0e905e98-1ffd-4a08-bf51-e89f2d589595\") " pod="openstack/root-account-create-update-qtl55" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.745527 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28644e17-7977-4824-aa44-364f4558d0ad-operator-scripts\") pod \"heat-3e97-account-create-update-tkzc8\" (UID: \"28644e17-7977-4824-aa44-364f4558d0ad\") " pod="openstack/heat-3e97-account-create-update-tkzc8" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.755953 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.758226 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.763305 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.764677 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.771133 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "f4fd5c29-d308-41d0-9781-9b6d9625c19c" (UID: "f4fd5c29-d308-41d0-9781-9b6d9625c19c"). InnerVolumeSpecName "pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.776005 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62"). 
InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.785936 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-3e97-account-create-update-tkzc8"] Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.786877 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-pfccs], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/heat-3e97-account-create-update-tkzc8" podUID="28644e17-7977-4824-aa44-364f4558d0ad" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.795548 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-2bbqx"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.803785 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-2bbqx"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.805973 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62"). InnerVolumeSpecName "secret-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.818933 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-55f46cdf9d-2mcgl"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.819129 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" podUID="25dc915a-6dbf-4622-bd14-1b372cfe9acc" containerName="heat-cfnapi" containerID="cri-o://3c6f66a5f458fac9c5e4e3c5c401fe49e93f98ab9783d1eb99b59cc0b35fe3d5" gracePeriod=60 Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.822020 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-kube-api-access-62847" (OuterVolumeSpecName: "kube-api-access-62847") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62"). InnerVolumeSpecName "kube-api-access-62847". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.822743 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfccs\" (UniqueName: \"kubernetes.io/projected/28644e17-7977-4824-aa44-364f4558d0ad-kube-api-access-pfccs\") pod \"heat-3e97-account-create-update-tkzc8\" (UID: \"28644e17-7977-4824-aa44-364f4558d0ad\") " pod="openstack/heat-3e97-account-create-update-tkzc8" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.822854 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config" (OuterVolumeSpecName: "config") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.824161 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config-out" (OuterVolumeSpecName: "config-out") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.824258 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.826900 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwswp\" (UniqueName: \"kubernetes.io/projected/0e905e98-1ffd-4a08-bf51-e89f2d589595-kube-api-access-vwswp\") pod \"root-account-create-update-qtl55\" (UID: \"0e905e98-1ffd-4a08-bf51-e89f2d589595\") " pod="openstack/root-account-create-update-qtl55" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.835551 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.840483 5136 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.840508 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62847\" (UniqueName: \"kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-kube-api-access-62847\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.840519 5136 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.840540 5136 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.840563 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81\") on node \"crc\" " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.840577 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.840586 5136 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-tls-assets\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.840596 5136 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.840607 5136 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.840615 5136 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config-out\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.840630 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\") on node \"crc\" " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.840639 5136 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.840649 5136 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" 
DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.841936 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7dbf74ffb7-gw5nj"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.842142 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-7dbf74ffb7-gw5nj" podUID="d397a968-433e-4de9-8ed7-d0247aa5e775" containerName="heat-api" containerID="cri-o://c94da038d11912247034fdd2fdddaddf98271b2ae30e1d7514d6c19c3c8f08be" gracePeriod=60 Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.856579 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qtl55" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.933765 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-24c6-account-create-update-48knr"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.961916 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-24c6-account-create-update-48knr"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.998008 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41ed7c59-18ee-44ec-8068-ccc9e82485a6" (UID: "41ed7c59-18ee-44ec-8068-ccc9e82485a6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.023244 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-e664-account-create-update-fb9gm"] Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.032901 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-e664-account-create-update-fb9gm"] Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.046432 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.060010 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4fd5c29-d308-41d0-9781-9b6d9625c19c" (UID: "f4fd5c29-d308-41d0-9781-9b6d9625c19c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.065767 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-config-data" (OuterVolumeSpecName: "config-data") pod "65b4b8da-0eda-4a77-aeed-0a6f9350a942" (UID: "65b4b8da-0eda-4a77-aeed-0a6f9350a942"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:55 crc kubenswrapper[5136]: E0320 09:02:55.071107 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34 podName:c5271b0d-ac1b-480c-b4b8-3b634246ae62 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:55.571082544 +0000 UTC m=+8007.830393695 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "prometheus-metric-storage-db" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.105551 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "65b4b8da-0eda-4a77-aeed-0a6f9350a942" (UID: "65b4b8da-0eda-4a77-aeed-0a6f9350a942"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.155001 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.155318 5136 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.155334 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.187648 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-config-data" (OuterVolumeSpecName: "config-data") pod 
"41ed7c59-18ee-44ec-8068-ccc9e82485a6" (UID: "41ed7c59-18ee-44ec-8068-ccc9e82485a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.229355 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.48:5671: connect: connection refused" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.261123 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.262361 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-247c0e17-cb61-42ba-9ec9-5459166a5919" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247c0e17-cb61-42ba-9ec9-5459166a5919") on node "crc" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.261507 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.332557 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="6345b1ce-d7d2-420d-8631-e42fd662d790" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.1.154:9292/healthcheck\": read tcp 10.217.0.2:57768->10.217.1.154:9292: read: connection reset by peer" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.332702 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="6345b1ce-d7d2-420d-8631-e42fd662d790" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.1.154:9292/healthcheck\": read tcp 10.217.0.2:57766->10.217.1.154:9292: 
read: connection reset by peer" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.337100 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.337270 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81") on node "crc" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.339833 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65b4b8da-0eda-4a77-aeed-0a6f9350a942" (UID: "65b4b8da-0eda-4a77-aeed-0a6f9350a942"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.365181 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.365207 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.365218 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.371946 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "65b4b8da-0eda-4a77-aeed-0a6f9350a942" (UID: "65b4b8da-0eda-4a77-aeed-0a6f9350a942"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.383176 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="25254bce-daf4-4521-ae48-e6c53e458cb4" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.1.153:9292/healthcheck\": read tcp 10.217.0.2:37904->10.217.1.153:9292: read: connection reset by peer" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.383469 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="25254bce-daf4-4521-ae48-e6c53e458cb4" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.1.153:9292/healthcheck\": read tcp 10.217.0.2:37898->10.217.1.153:9292: read: connection reset by peer" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.469287 5136 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.473933 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "f4fd5c29-d308-41d0-9781-9b6d9625c19c" (UID: "f4fd5c29-d308-41d0-9781-9b6d9625c19c"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.517023 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "f4fd5c29-d308-41d0-9781-9b6d9625c19c" (UID: "f4fd5c29-d308-41d0-9781-9b6d9625c19c"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.528918 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config" (OuterVolumeSpecName: "web-config") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.571160 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.571182 5136 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.571192 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:55 crc kubenswrapper[5136]: E0320 09:02:55.571249 5136 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Mar 20 09:02:55 crc kubenswrapper[5136]: 
E0320 09:02:55.571295 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts podName:7660b6b5-094d-4da5-9d34-fe85c863d887 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:59.57128137 +0000 UTC m=+8011.830592521 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts") pod "root-account-create-update-dvqsp" (UID: "7660b6b5-094d-4da5-9d34-fe85c863d887") : configmap "openstack-cell1-scripts" not found Mar 20 09:02:55 crc kubenswrapper[5136]: E0320 09:02:55.594193 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:02:55 crc kubenswrapper[5136]: E0320 09:02:55.595588 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:02:55 crc kubenswrapper[5136]: E0320 09:02:55.596703 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:02:55 crc kubenswrapper[5136]: E0320 09:02:55.596738 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, 
stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7659754fcd-klwkv" podUID="53ac16e5-846e-40c1-a361-0815d231345a" containerName="heat-engine" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.635985 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="9fe5d992-c030-4957-8388-763c8fa32d22" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.1.104:8776/healthcheck\": read tcp 10.217.0.2:48976->10.217.1.104:8776: read: connection reset by peer" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.673137 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.720261 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62"). InnerVolumeSpecName "pvc-d394ecb7-8fc1-4066-a753-5896ab167a34". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.754762 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5271b0d-ac1b-480c-b4b8-3b634246ae62","Type":"ContainerDied","Data":"441fbbe54f0f16d0c91d190250a9aea863086641a75258b3146f19278093050a"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.754917 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.768507 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-adbe-account-create-update-b276s" event={"ID":"c532fd14-6718-4c7d-9e38-c68bf7b2da6b","Type":"ContainerDied","Data":"e4994862d77455e776d1f21a360bf4536969a6b61a32dc2dc2986ee9c7770f98"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.768551 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4994862d77455e776d1f21a360bf4536969a6b61a32dc2dc2986ee9c7770f98" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.774373 5136 generic.go:334] "Generic (PLEG): container finished" podID="f402e588-3dec-48be-8b5b-5aeaa571b372" containerID="bd88353eb3bfead6453753b043892b43c76148c22dbdd8749c35d5213cf8d63b" exitCode=0 Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.774592 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f402e588-3dec-48be-8b5b-5aeaa571b372","Type":"ContainerDied","Data":"bd88353eb3bfead6453753b043892b43c76148c22dbdd8749c35d5213cf8d63b"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.776211 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") on node \"crc\" " Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.793423 5136 generic.go:334] "Generic (PLEG): container finished" podID="7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" containerID="9849b01109acdf6259a4119de8fce764d067a8b917a7d5b6965b6bd00e1aa60a" exitCode=0 Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.793557 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-674ffbb556-dfk75" 
event={"ID":"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb","Type":"ContainerDied","Data":"9849b01109acdf6259a4119de8fce764d067a8b917a7d5b6965b6bd00e1aa60a"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.795435 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b8c9-account-create-update-hh6pb" event={"ID":"536a487a-ae23-4eed-9bc8-221a9b85bed4","Type":"ContainerDied","Data":"c5b0b0c1f851b7ef94e6535ec54695cb38543bd96eb082f0402a43b8aed2912c"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.795462 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5b0b0c1f851b7ef94e6535ec54695cb38543bd96eb082f0402a43b8aed2912c" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.803626 5136 generic.go:334] "Generic (PLEG): container finished" podID="0007e89c-1f52-4ac8-beed-59d6db6e60fd" containerID="9a6fe348ea134460d09531b2378ad3abce82d81a5457e369dbee025701fbe318" exitCode=0 Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.804020 5136 generic.go:334] "Generic (PLEG): container finished" podID="0007e89c-1f52-4ac8-beed-59d6db6e60fd" containerID="5e5a77d7952567153e8f93b101532ad12cc95f1597c77efe34c080e974b22447" exitCode=0 Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.804151 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-965f7d5f6-cshp2" event={"ID":"0007e89c-1f52-4ac8-beed-59d6db6e60fd","Type":"ContainerDied","Data":"9a6fe348ea134460d09531b2378ad3abce82d81a5457e369dbee025701fbe318"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.804274 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-965f7d5f6-cshp2" event={"ID":"0007e89c-1f52-4ac8-beed-59d6db6e60fd","Type":"ContainerDied","Data":"5e5a77d7952567153e8f93b101532ad12cc95f1597c77efe34c080e974b22447"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.817930 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" 
event={"ID":"53db9385-e63d-49a6-8dab-854c4bcd01f1","Type":"ContainerDied","Data":"b199eb0de00dd4b5665ed78ec596b43fb8d24fc2002bd3f4dd356a32c51b4138"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.817970 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b199eb0de00dd4b5665ed78ec596b43fb8d24fc2002bd3f4dd356a32c51b4138" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.820394 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.820561 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d394ecb7-8fc1-4066-a753-5896ab167a34" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34") on node "crc" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.820653 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c7dc-account-create-update-bslnf" event={"ID":"254505fd-2596-4c4a-bf0a-2565e8b3ae5c","Type":"ContainerDied","Data":"11ff32a7e8d869cd5ce4e704be044bbc65e503615b99ae96e1f014e5fa2103a3"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.820698 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11ff32a7e8d869cd5ce4e704be044bbc65e503615b99ae96e1f014e5fa2103a3" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.827316 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ea7881c5-b719-41b0-8046-249f7fdb6f61","Type":"ContainerDied","Data":"0d3c98cd3c1e5c913df6b6870b3ffb39782f9b6156d6f724a7d48a2b4387a567"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.827356 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d3c98cd3c1e5c913df6b6870b3ffb39782f9b6156d6f724a7d48a2b4387a567" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.830739 5136 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dvqsp" event={"ID":"7660b6b5-094d-4da5-9d34-fe85c863d887","Type":"ContainerDied","Data":"451e44515c0cda1b30913a6cdb1ebc4f0813478346503aa740945d429ab0443d"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.830776 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="451e44515c0cda1b30913a6cdb1ebc4f0813478346503aa740945d429ab0443d" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.832865 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6249-account-create-update-mtgp6" event={"ID":"bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1","Type":"ContainerDied","Data":"b9a5100ed7164058172ea0371e785fef7e937e4b5c850e24c27f2580525e965a"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.832907 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9a5100ed7164058172ea0371e785fef7e937e4b5c850e24c27f2580525e965a" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.835117 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d4bc380a-4852-40d3-b03d-67f762c778d3","Type":"ContainerDied","Data":"8744afbb6fc5b78de44cc1ad3a2d2c06bbc7e574d3d24b9b63a1c4c9c4199a2b"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.835143 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8744afbb6fc5b78de44cc1ad3a2d2c06bbc7e574d3d24b9b63a1c4c9c4199a2b" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.837777 5136 generic.go:334] "Generic (PLEG): container finished" podID="9fe5d992-c030-4957-8388-763c8fa32d22" containerID="23592f2e3f685cf11f8e09b90281731a317e3331b51d973536b5b6cf9ce01a69" exitCode=0 Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.837839 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"9fe5d992-c030-4957-8388-763c8fa32d22","Type":"ContainerDied","Data":"23592f2e3f685cf11f8e09b90281731a317e3331b51d973536b5b6cf9ce01a69"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.840504 5136 generic.go:334] "Generic (PLEG): container finished" podID="6345b1ce-d7d2-420d-8631-e42fd662d790" containerID="8e6a89b054dab23d4263c5fb97b6aba8bc51276e7bd2c8d9be34c61f68879a63" exitCode=0 Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.840556 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6345b1ce-d7d2-420d-8631-e42fd662d790","Type":"ContainerDied","Data":"8e6a89b054dab23d4263c5fb97b6aba8bc51276e7bd2c8d9be34c61f68879a63"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.843680 5136 generic.go:334] "Generic (PLEG): container finished" podID="25254bce-daf4-4521-ae48-e6c53e458cb4" containerID="7ecfa88277a19c2fc4a9782c7beb9c21c6c1a5a38d56723b3e67fc0044f8bbb4" exitCode=0 Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.843765 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"25254bce-daf4-4521-ae48-e6c53e458cb4","Type":"ContainerDied","Data":"7ecfa88277a19c2fc4a9782c7beb9c21c6c1a5a38d56723b3e67fc0044f8bbb4"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.845453 5136 generic.go:334] "Generic (PLEG): container finished" podID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerID="5d097a32fad24f683773582349a5bd70dc46ca5a6f4845399b1376ac61baa60d" exitCode=0 Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.845535 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55ffc4694-d4d2v" event={"ID":"1da401a4-384d-4911-bf25-0aa4c544fd0d","Type":"ContainerDied","Data":"5d097a32fad24f683773582349a5bd70dc46ca5a6f4845399b1376ac61baa60d"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.846464 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/aodh-35ea-account-create-update-7d4sf" event={"ID":"bdff16b6-0410-4448-a15c-3f22f5890d91","Type":"ContainerDied","Data":"420f494bfdb1a8eee2db9127145e2ede22bd33f1f2aa87cae0d3548617967daa"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.846489 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="420f494bfdb1a8eee2db9127145e2ede22bd33f1f2aa87cae0d3548617967daa" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.848534 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"41ed7c59-18ee-44ec-8068-ccc9e82485a6","Type":"ContainerDied","Data":"78428f788f49ec304b60513d248e5c1585ac2ca613eb54e675058865189a70f5"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.848586 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3e97-account-create-update-tkzc8" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.878010 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:55 crc kubenswrapper[5136]: E0320 09:02:55.994879 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Mar 20 09:02:55 crc kubenswrapper[5136]: E0320 09:02:55.994955 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data podName:804d1bff-7c63-45a1-bf1a-68f3eedb6ac7 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:59.994940797 +0000 UTC m=+8012.254251948 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data") pod "rabbitmq-server-0" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7") : configmap "rabbitmq-config-data" not found Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.047252 5136 scope.go:117] "RemoveContainer" containerID="6359501b6448986da36467b6a23a3fd5909f20740da745780317887a779e734a" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.049349 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.060763 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.070607 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.090467 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c7dc-account-create-update-bslnf" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.095391 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-web-config\") pod \"53db9385-e63d-49a6-8dab-854c4bcd01f1\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.095430 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-operator-scripts\") pod \"d4bc380a-4852-40d3-b03d-67f762c778d3\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.095452 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-config-volume\") pod \"53db9385-e63d-49a6-8dab-854c4bcd01f1\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.095485 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-config-data-default\") pod \"d4bc380a-4852-40d3-b03d-67f762c778d3\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.095527 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-kolla-config\") pod \"d4bc380a-4852-40d3-b03d-67f762c778d3\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.095561 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-cluster-tls-config\") pod \"53db9385-e63d-49a6-8dab-854c4bcd01f1\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.095593 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrfj8\" (UniqueName: \"kubernetes.io/projected/53db9385-e63d-49a6-8dab-854c4bcd01f1-kube-api-access-rrfj8\") pod \"53db9385-e63d-49a6-8dab-854c4bcd01f1\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.095647 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4bc380a-4852-40d3-b03d-67f762c778d3-combined-ca-bundle\") pod \"d4bc380a-4852-40d3-b03d-67f762c778d3\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.095752 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-6249-account-create-update-mtgp6" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.096226 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52\") pod \"d4bc380a-4852-40d3-b03d-67f762c778d3\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.096287 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/53db9385-e63d-49a6-8dab-854c4bcd01f1-tls-assets\") pod \"53db9385-e63d-49a6-8dab-854c4bcd01f1\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.096335 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92bl4\" (UniqueName: \"kubernetes.io/projected/ea7881c5-b719-41b0-8046-249f7fdb6f61-kube-api-access-92bl4\") pod \"ea7881c5-b719-41b0-8046-249f7fdb6f61\" (UID: \"ea7881c5-b719-41b0-8046-249f7fdb6f61\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.096373 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6mxw\" (UniqueName: \"kubernetes.io/projected/d4bc380a-4852-40d3-b03d-67f762c778d3-kube-api-access-c6mxw\") pod \"d4bc380a-4852-40d3-b03d-67f762c778d3\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.096398 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d4bc380a-4852-40d3-b03d-67f762c778d3-config-data-generated\") pod \"d4bc380a-4852-40d3-b03d-67f762c778d3\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.096419 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea7881c5-b719-41b0-8046-249f7fdb6f61-combined-ca-bundle\") pod \"ea7881c5-b719-41b0-8046-249f7fdb6f61\" (UID: \"ea7881c5-b719-41b0-8046-249f7fdb6f61\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.096448 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea7881c5-b719-41b0-8046-249f7fdb6f61-config-data\") pod \"ea7881c5-b719-41b0-8046-249f7fdb6f61\" (UID: \"ea7881c5-b719-41b0-8046-249f7fdb6f61\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.096493 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/53db9385-e63d-49a6-8dab-854c4bcd01f1-config-out\") pod \"53db9385-e63d-49a6-8dab-854c4bcd01f1\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.096513 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4bc380a-4852-40d3-b03d-67f762c778d3-galera-tls-certs\") pod \"d4bc380a-4852-40d3-b03d-67f762c778d3\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.096541 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/53db9385-e63d-49a6-8dab-854c4bcd01f1-alertmanager-metric-storage-db\") pod \"53db9385-e63d-49a6-8dab-854c4bcd01f1\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.098367 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"d4bc380a-4852-40d3-b03d-67f762c778d3" (UID: "d4bc380a-4852-40d3-b03d-67f762c778d3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.098889 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4bc380a-4852-40d3-b03d-67f762c778d3-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "d4bc380a-4852-40d3-b03d-67f762c778d3" (UID: "d4bc380a-4852-40d3-b03d-67f762c778d3"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.099014 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53db9385-e63d-49a6-8dab-854c4bcd01f1-alertmanager-metric-storage-db" (OuterVolumeSpecName: "alertmanager-metric-storage-db") pod "53db9385-e63d-49a6-8dab-854c4bcd01f1" (UID: "53db9385-e63d-49a6-8dab-854c4bcd01f1"). InnerVolumeSpecName "alertmanager-metric-storage-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.099220 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "d4bc380a-4852-40d3-b03d-67f762c778d3" (UID: "d4bc380a-4852-40d3-b03d-67f762c778d3"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.099756 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "d4bc380a-4852-40d3-b03d-67f762c778d3" (UID: "d4bc380a-4852-40d3-b03d-67f762c778d3"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.115176 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-35ea-account-create-update-7d4sf" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.116857 5136 scope.go:117] "RemoveContainer" containerID="8fc531019af740166b849284cf77209f3d16d2b70c219b2f80048bdb08d14be3" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.123228 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53db9385-e63d-49a6-8dab-854c4bcd01f1-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "53db9385-e63d-49a6-8dab-854c4bcd01f1" (UID: "53db9385-e63d-49a6-8dab-854c4bcd01f1"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.144384 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4bc380a-4852-40d3-b03d-67f762c778d3-kube-api-access-c6mxw" (OuterVolumeSpecName: "kube-api-access-c6mxw") pod "d4bc380a-4852-40d3-b03d-67f762c778d3" (UID: "d4bc380a-4852-40d3-b03d-67f762c778d3"). InnerVolumeSpecName "kube-api-access-c6mxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.146449 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-config-volume" (OuterVolumeSpecName: "config-volume") pod "53db9385-e63d-49a6-8dab-854c4bcd01f1" (UID: "53db9385-e63d-49a6-8dab-854c4bcd01f1"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.148239 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53db9385-e63d-49a6-8dab-854c4bcd01f1-config-out" (OuterVolumeSpecName: "config-out") pod "53db9385-e63d-49a6-8dab-854c4bcd01f1" (UID: "53db9385-e63d-49a6-8dab-854c4bcd01f1"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.149194 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-2"] Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.150574 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53db9385-e63d-49a6-8dab-854c4bcd01f1-kube-api-access-rrfj8" (OuterVolumeSpecName: "kube-api-access-rrfj8") pod "53db9385-e63d-49a6-8dab-854c4bcd01f1" (UID: "53db9385-e63d-49a6-8dab-854c4bcd01f1"). InnerVolumeSpecName "kube-api-access-rrfj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.151027 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea7881c5-b719-41b0-8046-249f7fdb6f61-kube-api-access-92bl4" (OuterVolumeSpecName: "kube-api-access-92bl4") pod "ea7881c5-b719-41b0-8046-249f7fdb6f61" (UID: "ea7881c5-b719-41b0-8046-249f7fdb6f61"). InnerVolumeSpecName "kube-api-access-92bl4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.155057 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="e2c9ab46-3143-4472-a606-cd75def78f41" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.49:5671: connect: connection refused" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.157409 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-2"] Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.168806 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-1"] Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.173313 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b8c9-account-create-update-hh6pb" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.176340 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-1"] Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.183746 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dvqsp" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.193790 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.198131 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfq5r\" (UniqueName: \"kubernetes.io/projected/bdff16b6-0410-4448-a15c-3f22f5890d91-kube-api-access-cfq5r\") pod \"bdff16b6-0410-4448-a15c-3f22f5890d91\" (UID: \"bdff16b6-0410-4448-a15c-3f22f5890d91\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.198161 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdff16b6-0410-4448-a15c-3f22f5890d91-operator-scripts\") pod \"bdff16b6-0410-4448-a15c-3f22f5890d91\" (UID: \"bdff16b6-0410-4448-a15c-3f22f5890d91\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.198246 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/536a487a-ae23-4eed-9bc8-221a9b85bed4-operator-scripts\") pod \"536a487a-ae23-4eed-9bc8-221a9b85bed4\" (UID: \"536a487a-ae23-4eed-9bc8-221a9b85bed4\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.198317 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1-operator-scripts\") pod \"bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1\" (UID: \"bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.198344 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/254505fd-2596-4c4a-bf0a-2565e8b3ae5c-operator-scripts\") pod \"254505fd-2596-4c4a-bf0a-2565e8b3ae5c\" (UID: 
\"254505fd-2596-4c4a-bf0a-2565e8b3ae5c\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.198419 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-475lv\" (UniqueName: \"kubernetes.io/projected/536a487a-ae23-4eed-9bc8-221a9b85bed4-kube-api-access-475lv\") pod \"536a487a-ae23-4eed-9bc8-221a9b85bed4\" (UID: \"536a487a-ae23-4eed-9bc8-221a9b85bed4\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.198504 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcj6l\" (UniqueName: \"kubernetes.io/projected/bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1-kube-api-access-hcj6l\") pod \"bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1\" (UID: \"bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.198526 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfzdq\" (UniqueName: \"kubernetes.io/projected/254505fd-2596-4c4a-bf0a-2565e8b3ae5c-kube-api-access-rfzdq\") pod \"254505fd-2596-4c4a-bf0a-2565e8b3ae5c\" (UID: \"254505fd-2596-4c4a-bf0a-2565e8b3ae5c\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.198978 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.198989 5136 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.198998 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-config-data-default\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc 
kubenswrapper[5136]: I0320 09:02:56.199006 5136 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.199015 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrfj8\" (UniqueName: \"kubernetes.io/projected/53db9385-e63d-49a6-8dab-854c4bcd01f1-kube-api-access-rrfj8\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.199024 5136 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/53db9385-e63d-49a6-8dab-854c4bcd01f1-tls-assets\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.199032 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92bl4\" (UniqueName: \"kubernetes.io/projected/ea7881c5-b719-41b0-8046-249f7fdb6f61-kube-api-access-92bl4\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.199040 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6mxw\" (UniqueName: \"kubernetes.io/projected/d4bc380a-4852-40d3-b03d-67f762c778d3-kube-api-access-c6mxw\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.199050 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d4bc380a-4852-40d3-b03d-67f762c778d3-config-data-generated\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.199057 5136 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/53db9385-e63d-49a6-8dab-854c4bcd01f1-config-out\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.199065 5136 reconciler_common.go:293] "Volume 
detached for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/53db9385-e63d-49a6-8dab-854c4bcd01f1-alertmanager-metric-storage-db\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.206125 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/254505fd-2596-4c4a-bf0a-2565e8b3ae5c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "254505fd-2596-4c4a-bf0a-2565e8b3ae5c" (UID: "254505fd-2596-4c4a-bf0a-2565e8b3ae5c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.208851 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1" (UID: "bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.208917 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.208976 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/536a487a-ae23-4eed-9bc8-221a9b85bed4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "536a487a-ae23-4eed-9bc8-221a9b85bed4" (UID: "536a487a-ae23-4eed-9bc8-221a9b85bed4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.209141 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdff16b6-0410-4448-a15c-3f22f5890d91-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bdff16b6-0410-4448-a15c-3f22f5890d91" (UID: "bdff16b6-0410-4448-a15c-3f22f5890d91"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.209335 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1-kube-api-access-hcj6l" (OuterVolumeSpecName: "kube-api-access-hcj6l") pod "bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1" (UID: "bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1"). InnerVolumeSpecName "kube-api-access-hcj6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.209459 5136 scope.go:117] "RemoveContainer" containerID="c38165dadbc17f4d62a65b6c603ad906516c2a8ffc342559c38f59e3a77ec1af" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.213230 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-adbe-account-create-update-b276s" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.215227 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/536a487a-ae23-4eed-9bc8-221a9b85bed4-kube-api-access-475lv" (OuterVolumeSpecName: "kube-api-access-475lv") pod "536a487a-ae23-4eed-9bc8-221a9b85bed4" (UID: "536a487a-ae23-4eed-9bc8-221a9b85bed4"). InnerVolumeSpecName "kube-api-access-475lv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.217243 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/254505fd-2596-4c4a-bf0a-2565e8b3ae5c-kube-api-access-rfzdq" (OuterVolumeSpecName: "kube-api-access-rfzdq") pod "254505fd-2596-4c4a-bf0a-2565e8b3ae5c" (UID: "254505fd-2596-4c4a-bf0a-2565e8b3ae5c"). InnerVolumeSpecName "kube-api-access-rfzdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.222725 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdff16b6-0410-4448-a15c-3f22f5890d91-kube-api-access-cfq5r" (OuterVolumeSpecName: "kube-api-access-cfq5r") pod "bdff16b6-0410-4448-a15c-3f22f5890d91" (UID: "bdff16b6-0410-4448-a15c-3f22f5890d91"). InnerVolumeSpecName "kube-api-access-cfq5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.226470 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3e97-account-create-update-tkzc8" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.228595 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.229065 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.236677 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.244778 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.245153 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52" (OuterVolumeSpecName: "mysql-db") pod "d4bc380a-4852-40d3-b03d-67f762c778d3" (UID: "d4bc380a-4852-40d3-b03d-67f762c778d3"). InnerVolumeSpecName "pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.246609 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-674ffbb556-dfk75" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.253088 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.261451 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.269085 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.269429 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.274196 5136 scope.go:117] "RemoveContainer" containerID="fd2177fb853527e451d4514e1f42c3c8bc13b2a57df6321d454b27c1bc0cfe79" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.277511 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.279709 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea7881c5-b719-41b0-8046-249f7fdb6f61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea7881c5-b719-41b0-8046-249f7fdb6f61" (UID: "ea7881c5-b719-41b0-8046-249f7fdb6f61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.302310 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-cluster-tls-config" (OuterVolumeSpecName: "cluster-tls-config") pod "53db9385-e63d-49a6-8dab-854c4bcd01f1" (UID: "53db9385-e63d-49a6-8dab-854c4bcd01f1"). InnerVolumeSpecName "cluster-tls-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.304979 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-config-data\") pod \"f402e588-3dec-48be-8b5b-5aeaa571b372\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305025 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-internal-tls-certs\") pod \"f402e588-3dec-48be-8b5b-5aeaa571b372\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305071 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfccs\" (UniqueName: \"kubernetes.io/projected/28644e17-7977-4824-aa44-364f4558d0ad-kube-api-access-pfccs\") pod \"28644e17-7977-4824-aa44-364f4558d0ad\" (UID: \"28644e17-7977-4824-aa44-364f4558d0ad\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305112 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0007e89c-1f52-4ac8-beed-59d6db6e60fd-log-httpd\") pod \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305130 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6345b1ce-d7d2-420d-8631-e42fd662d790-logs\") pod \"6345b1ce-d7d2-420d-8631-e42fd662d790\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305160 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-config-data\") pod \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305190 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-combined-ca-bundle\") pod \"f402e588-3dec-48be-8b5b-5aeaa571b372\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305215 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7bfx\" (UniqueName: \"kubernetes.io/projected/f402e588-3dec-48be-8b5b-5aeaa571b372-kube-api-access-v7bfx\") pod \"f402e588-3dec-48be-8b5b-5aeaa571b372\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305238 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6345b1ce-d7d2-420d-8631-e42fd662d790-httpd-run\") pod \"6345b1ce-d7d2-420d-8631-e42fd662d790\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305261 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-config-data\") pod \"6345b1ce-d7d2-420d-8631-e42fd662d790\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305286 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-scripts\") pod \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305313 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-combined-ca-bundle\") pod \"6345b1ce-d7d2-420d-8631-e42fd662d790\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305339 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0007e89c-1f52-4ac8-beed-59d6db6e60fd-run-httpd\") pod \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305382 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-public-tls-certs\") pod \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305407 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxpnx\" (UniqueName: \"kubernetes.io/projected/6345b1ce-d7d2-420d-8631-e42fd662d790-kube-api-access-fxpnx\") pod \"6345b1ce-d7d2-420d-8631-e42fd662d790\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305431 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-scripts\") pod \"6345b1ce-d7d2-420d-8631-e42fd662d790\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305463 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-combined-ca-bundle\") pod 
\"0007e89c-1f52-4ac8-beed-59d6db6e60fd\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305497 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts\") pod \"7660b6b5-094d-4da5-9d34-fe85c863d887\" (UID: \"7660b6b5-094d-4da5-9d34-fe85c863d887\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305558 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-public-tls-certs\") pod \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305588 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-internal-tls-certs\") pod \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305617 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c532fd14-6718-4c7d-9e38-c68bf7b2da6b-operator-scripts\") pod \"c532fd14-6718-4c7d-9e38-c68bf7b2da6b\" (UID: \"c532fd14-6718-4c7d-9e38-c68bf7b2da6b\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305646 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-config-data\") pod \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305680 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-c948k\" (UniqueName: \"kubernetes.io/projected/7660b6b5-094d-4da5-9d34-fe85c863d887-kube-api-access-c948k\") pod \"7660b6b5-094d-4da5-9d34-fe85c863d887\" (UID: \"7660b6b5-094d-4da5-9d34-fe85c863d887\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305707 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0007e89c-1f52-4ac8-beed-59d6db6e60fd-etc-swift\") pod \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305759 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-combined-ca-bundle\") pod \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305787 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgnb2\" (UniqueName: \"kubernetes.io/projected/c532fd14-6718-4c7d-9e38-c68bf7b2da6b-kube-api-access-sgnb2\") pod \"c532fd14-6718-4c7d-9e38-c68bf7b2da6b\" (UID: \"c532fd14-6718-4c7d-9e38-c68bf7b2da6b\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305831 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rjh6\" (UniqueName: \"kubernetes.io/projected/0007e89c-1f52-4ac8-beed-59d6db6e60fd-kube-api-access-5rjh6\") pod \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305873 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-public-tls-certs\") pod \"6345b1ce-d7d2-420d-8631-e42fd662d790\" (UID: 
\"6345b1ce-d7d2-420d-8631-e42fd662d790\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305898 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-public-tls-certs\") pod \"f402e588-3dec-48be-8b5b-5aeaa571b372\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305926 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-logs\") pod \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305956 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f402e588-3dec-48be-8b5b-5aeaa571b372-logs\") pod \"f402e588-3dec-48be-8b5b-5aeaa571b372\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305980 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4l7c\" (UniqueName: \"kubernetes.io/projected/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-kube-api-access-c4l7c\") pod \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306000 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28644e17-7977-4824-aa44-364f4558d0ad-operator-scripts\") pod \"28644e17-7977-4824-aa44-364f4558d0ad\" (UID: \"28644e17-7977-4824-aa44-364f4558d0ad\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306024 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-internal-tls-certs\") pod \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306137 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea7881c5-b719-41b0-8046-249f7fdb6f61-config-data" (OuterVolumeSpecName: "config-data") pod "ea7881c5-b719-41b0-8046-249f7fdb6f61" (UID: "ea7881c5-b719-41b0-8046-249f7fdb6f61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306503 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdff16b6-0410-4448-a15c-3f22f5890d91-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306522 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/536a487a-ae23-4eed-9bc8-221a9b85bed4-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306535 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea7881c5-b719-41b0-8046-249f7fdb6f61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306546 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea7881c5-b719-41b0-8046-249f7fdb6f61-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306557 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 
09:02:56.306568 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/254505fd-2596-4c4a-bf0a-2565e8b3ae5c-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306581 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-475lv\" (UniqueName: \"kubernetes.io/projected/536a487a-ae23-4eed-9bc8-221a9b85bed4-kube-api-access-475lv\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306596 5136 reconciler_common.go:293] "Volume detached for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-cluster-tls-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306609 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcj6l\" (UniqueName: \"kubernetes.io/projected/bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1-kube-api-access-hcj6l\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306621 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfzdq\" (UniqueName: \"kubernetes.io/projected/254505fd-2596-4c4a-bf0a-2565e8b3ae5c-kube-api-access-rfzdq\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306648 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52\") on node \"crc\" " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306661 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfq5r\" (UniqueName: \"kubernetes.io/projected/bdff16b6-0410-4448-a15c-3f22f5890d91-kube-api-access-cfq5r\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 
09:02:56.308091 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4bc380a-4852-40d3-b03d-67f762c778d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4bc380a-4852-40d3-b03d-67f762c778d3" (UID: "d4bc380a-4852-40d3-b03d-67f762c778d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.308841 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f402e588-3dec-48be-8b5b-5aeaa571b372-kube-api-access-v7bfx" (OuterVolumeSpecName: "kube-api-access-v7bfx") pod "f402e588-3dec-48be-8b5b-5aeaa571b372" (UID: "f402e588-3dec-48be-8b5b-5aeaa571b372"). InnerVolumeSpecName "kube-api-access-v7bfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.308857 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4bc380a-4852-40d3-b03d-67f762c778d3-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "d4bc380a-4852-40d3-b03d-67f762c778d3" (UID: "d4bc380a-4852-40d3-b03d-67f762c778d3"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.310279 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0007e89c-1f52-4ac8-beed-59d6db6e60fd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0007e89c-1f52-4ac8-beed-59d6db6e60fd" (UID: "0007e89c-1f52-4ac8-beed-59d6db6e60fd"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.314776 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0007e89c-1f52-4ac8-beed-59d6db6e60fd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0007e89c-1f52-4ac8-beed-59d6db6e60fd" (UID: "0007e89c-1f52-4ac8-beed-59d6db6e60fd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.321028 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6345b1ce-d7d2-420d-8631-e42fd662d790-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6345b1ce-d7d2-420d-8631-e42fd662d790" (UID: "6345b1ce-d7d2-420d-8631-e42fd662d790"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.321959 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0007e89c-1f52-4ac8-beed-59d6db6e60fd-kube-api-access-5rjh6" (OuterVolumeSpecName: "kube-api-access-5rjh6") pod "0007e89c-1f52-4ac8-beed-59d6db6e60fd" (UID: "0007e89c-1f52-4ac8-beed-59d6db6e60fd"). InnerVolumeSpecName "kube-api-access-5rjh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.327555 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28644e17-7977-4824-aa44-364f4558d0ad-kube-api-access-pfccs" (OuterVolumeSpecName: "kube-api-access-pfccs") pod "28644e17-7977-4824-aa44-364f4558d0ad" (UID: "28644e17-7977-4824-aa44-364f4558d0ad"). InnerVolumeSpecName "kube-api-access-pfccs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.328009 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-scripts" (OuterVolumeSpecName: "scripts") pod "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" (UID: "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.328829 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6345b1ce-d7d2-420d-8631-e42fd662d790-logs" (OuterVolumeSpecName: "logs") pod "6345b1ce-d7d2-420d-8631-e42fd662d790" (UID: "6345b1ce-d7d2-420d-8631-e42fd662d790"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.329113 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-logs" (OuterVolumeSpecName: "logs") pod "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" (UID: "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.329459 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f402e588-3dec-48be-8b5b-5aeaa571b372-logs" (OuterVolumeSpecName: "logs") pod "f402e588-3dec-48be-8b5b-5aeaa571b372" (UID: "f402e588-3dec-48be-8b5b-5aeaa571b372"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.329801 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c532fd14-6718-4c7d-9e38-c68bf7b2da6b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c532fd14-6718-4c7d-9e38-c68bf7b2da6b" (UID: "c532fd14-6718-4c7d-9e38-c68bf7b2da6b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.330083 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c532fd14-6718-4c7d-9e38-c68bf7b2da6b-kube-api-access-sgnb2" (OuterVolumeSpecName: "kube-api-access-sgnb2") pod "c532fd14-6718-4c7d-9e38-c68bf7b2da6b" (UID: "c532fd14-6718-4c7d-9e38-c68bf7b2da6b"). InnerVolumeSpecName "kube-api-access-sgnb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.330128 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7660b6b5-094d-4da5-9d34-fe85c863d887" (UID: "7660b6b5-094d-4da5-9d34-fe85c863d887"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.334231 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-kube-api-access-c4l7c" (OuterVolumeSpecName: "kube-api-access-c4l7c") pod "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" (UID: "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb"). InnerVolumeSpecName "kube-api-access-c4l7c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.342534 5136 scope.go:117] "RemoveContainer" containerID="6504065e281b8c5e6e76cf9517fba24d633b6c7805c447e42fbc49093a42beeb" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.343454 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6345b1ce-d7d2-420d-8631-e42fd662d790-kube-api-access-fxpnx" (OuterVolumeSpecName: "kube-api-access-fxpnx") pod "6345b1ce-d7d2-420d-8631-e42fd662d790" (UID: "6345b1ce-d7d2-420d-8631-e42fd662d790"). InnerVolumeSpecName "kube-api-access-fxpnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.344025 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28644e17-7977-4824-aa44-364f4558d0ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "28644e17-7977-4824-aa44-364f4558d0ad" (UID: "28644e17-7977-4824-aa44-364f4558d0ad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.343424 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-scripts" (OuterVolumeSpecName: "scripts") pod "6345b1ce-d7d2-420d-8631-e42fd662d790" (UID: "6345b1ce-d7d2-420d-8631-e42fd662d790"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.379987 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-web-config" (OuterVolumeSpecName: "web-config") pod "53db9385-e63d-49a6-8dab-854c4bcd01f1" (UID: "53db9385-e63d-49a6-8dab-854c4bcd01f1"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.405765 5136 scope.go:117] "RemoveContainer" containerID="50054ee6901ece61fe4a75813bd8a9abcbe38aad68d2fc8adceaeed3cbddce45" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.421859 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6345b1ce-d7d2-420d-8631-e42fd662d790-logs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.422089 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0007e89c-1f52-4ac8-beed-59d6db6e60fd-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.422788 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7bfx\" (UniqueName: \"kubernetes.io/projected/f402e588-3dec-48be-8b5b-5aeaa571b372-kube-api-access-v7bfx\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.422885 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6345b1ce-d7d2-420d-8631-e42fd662d790-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.422944 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.423014 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0007e89c-1f52-4ac8-beed-59d6db6e60fd-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.423073 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxpnx\" (UniqueName: 
\"kubernetes.io/projected/6345b1ce-d7d2-420d-8631-e42fd662d790-kube-api-access-fxpnx\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.423255 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.423345 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.423413 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c532fd14-6718-4c7d-9e38-c68bf7b2da6b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.423727 5136 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4bc380a-4852-40d3-b03d-67f762c778d3-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.425896 5136 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-web-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.426040 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgnb2\" (UniqueName: \"kubernetes.io/projected/c532fd14-6718-4c7d-9e38-c68bf7b2da6b-kube-api-access-sgnb2\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.426123 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rjh6\" (UniqueName: \"kubernetes.io/projected/0007e89c-1f52-4ac8-beed-59d6db6e60fd-kube-api-access-5rjh6\") on node \"crc\" DevicePath 
\"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.426188 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-logs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.426253 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28644e17-7977-4824-aa44-364f4558d0ad-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.426329 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f402e588-3dec-48be-8b5b-5aeaa571b372-logs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.426399 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4l7c\" (UniqueName: \"kubernetes.io/projected/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-kube-api-access-c4l7c\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.426460 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4bc380a-4852-40d3-b03d-67f762c778d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.428603 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfccs\" (UniqueName: \"kubernetes.io/projected/28644e17-7977-4824-aa44-364f4558d0ad-kube-api-access-pfccs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.548911 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7660b6b5-094d-4da5-9d34-fe85c863d887-kube-api-access-c948k" (OuterVolumeSpecName: "kube-api-access-c948k") pod "7660b6b5-094d-4da5-9d34-fe85c863d887" (UID: "7660b6b5-094d-4da5-9d34-fe85c863d887"). 
InnerVolumeSpecName "kube-api-access-c948k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.549028 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0007e89c-1f52-4ac8-beed-59d6db6e60fd-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0007e89c-1f52-4ac8-beed-59d6db6e60fd" (UID: "0007e89c-1f52-4ac8-beed-59d6db6e60fd"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.571532 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c948k\" (UniqueName: \"kubernetes.io/projected/7660b6b5-094d-4da5-9d34-fe85c863d887-kube-api-access-c948k\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.571586 5136 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0007e89c-1f52-4ac8-beed-59d6db6e60fd-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.588965 5136 scope.go:117] "RemoveContainer" containerID="58ad47341d36382588ada0b35a93996f0bd176fcc8c2283488644a65072e6d6a" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.597253 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41ed7c59-18ee-44ec-8068-ccc9e82485a6" path="/var/lib/kubelet/pods/41ed7c59-18ee-44ec-8068-ccc9e82485a6/volumes" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.609454 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48418ecc-b768-4848-b663-1a84761f5b32" path="/var/lib/kubelet/pods/48418ecc-b768-4848-b663-1a84761f5b32/volumes" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.610183 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" path="/var/lib/kubelet/pods/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9/volumes" 
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.626674 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65b4b8da-0eda-4a77-aeed-0a6f9350a942" path="/var/lib/kubelet/pods/65b4b8da-0eda-4a77-aeed-0a6f9350a942/volumes" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.627185 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d5ec1f6-0809-4582-902e-00638e6e4580" path="/var/lib/kubelet/pods/6d5ec1f6-0809-4582-902e-00638e6e4580/volumes" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.627574 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f0f0206-8535-4184-ae20-349019be47b2" path="/var/lib/kubelet/pods/7f0f0206-8535-4184-ae20-349019be47b2/volumes" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.642570 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87521532-0534-4e37-9c80-809877f2a744" path="/var/lib/kubelet/pods/87521532-0534-4e37-9c80-809877f2a744/volumes" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.657687 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" path="/var/lib/kubelet/pods/c5271b0d-ac1b-480c-b4b8-3b634246ae62/volumes" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.658599 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5f2ce8c-5295-423c-a81f-511d7abd0495" path="/var/lib/kubelet/pods/d5f2ce8c-5295-423c-a81f-511d7abd0495/volumes" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.663919 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.664294 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52") on node "crc" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.670306 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4fd5c29-d308-41d0-9781-9b6d9625c19c" path="/var/lib/kubelet/pods/f4fd5c29-d308-41d0-9781-9b6d9625c19c/volumes" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.674065 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe703c94-1aec-47a6-81a7-8510ed330866" path="/var/lib/kubelet/pods/fe703c94-1aec-47a6-81a7-8510ed330866/volumes" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.676771 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.781694 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-55ffc4694-d4d2v" podUID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.156:8443: connect: connection refused" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.790986 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-config-data" (OuterVolumeSpecName: "config-data") pod "f402e588-3dec-48be-8b5b-5aeaa571b372" (UID: "f402e588-3dec-48be-8b5b-5aeaa571b372"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.806008 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0007e89c-1f52-4ac8-beed-59d6db6e60fd" (UID: "0007e89c-1f52-4ac8-beed-59d6db6e60fd"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.845010 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6345b1ce-d7d2-420d-8631-e42fd662d790" (UID: "6345b1ce-d7d2-420d-8631-e42fd662d790"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.871685 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.877383 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.882320 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.882365 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.882377 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.896147 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-674ffbb556-dfk75" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.919320 5136 generic.go:334] "Generic (PLEG): container finished" podID="fba581c3-e77a-4db7-ac50-bdb17291b2c7" containerID="1e2a347a5b7fd1a421ed6a7c665114567c818ec00d533ffab87b5e587c7ecf89" exitCode=0 Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.922716 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-config-data" (OuterVolumeSpecName: "config-data") pod "0007e89c-1f52-4ac8-beed-59d6db6e60fd" (UID: "0007e89c-1f52-4ac8-beed-59d6db6e60fd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.925432 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0007e89c-1f52-4ac8-beed-59d6db6e60fd" (UID: "0007e89c-1f52-4ac8-beed-59d6db6e60fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.925893 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f402e588-3dec-48be-8b5b-5aeaa571b372" (UID: "f402e588-3dec-48be-8b5b-5aeaa571b372"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.926901 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f402e588-3dec-48be-8b5b-5aeaa571b372" (UID: "f402e588-3dec-48be-8b5b-5aeaa571b372"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.935396 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.941129 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6345b1ce-d7d2-420d-8631-e42fd662d790" (UID: "6345b1ce-d7d2-420d-8631-e42fd662d790"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.944030 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-config-data" (OuterVolumeSpecName: "config-data") pod "6345b1ce-d7d2-420d-8631-e42fd662d790" (UID: "6345b1ce-d7d2-420d-8631-e42fd662d790"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.955482 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-adbe-account-create-update-b276s" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.959021 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c7dc-account-create-update-bslnf" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.965526 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b8c9-account-create-update-hh6pb" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.966945 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3e97-account-create-update-tkzc8" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.968275 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.969486 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-35ea-account-create-update-7d4sf" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.969922 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dvqsp" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.975892 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-6249-account-create-update-mtgp6" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.976908 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.977468 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.983861 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.983886 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.983896 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.983904 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.983913 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.983922 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.023637 5136 scope.go:117] "RemoveContainer" containerID="22f88ba3ba68ee1437a03f43cb79cd30dcc82808fe8518d87eaa412f688babdc" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.059518 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" (UID: "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.085379 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.086177 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6345b1ce-d7d2-420d-8631-e42fd662d790","Type":"ContainerDied","Data":"118347a18261bbd9231e7dfffb48c3b8dac9d276aacba0e195348777f97cd490"} Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.088784 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.088855 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f402e588-3dec-48be-8b5b-5aeaa571b372","Type":"ContainerDied","Data":"0bd47d70acb181d12064205fc44377458fa88b9280bd44fea4624c5f756f1398"} Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.088871 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-674ffbb556-dfk75" 
event={"ID":"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb","Type":"ContainerDied","Data":"102bdb1b6280dfecffcbef32145242d8452dab99236c089131fd7b267b6cf255"} Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.088885 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.088897 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fba581c3-e77a-4db7-ac50-bdb17291b2c7","Type":"ContainerDied","Data":"1e2a347a5b7fd1a421ed6a7c665114567c818ec00d533ffab87b5e587c7ecf89"} Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.089660 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0007e89c-1f52-4ac8-beed-59d6db6e60fd" (UID: "0007e89c-1f52-4ac8-beed-59d6db6e60fd"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.090367 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.091404 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.091846 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="sg-core" containerID="cri-o://22214e3addd0a5c3b338ef171790692102833676b69426afd997304cb1243d2d" gracePeriod=30 Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.091922 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="proxy-httpd" containerID="cri-o://65f0fcd421f6ec548cf9de08170a35a1209f40b76c2fe57dae5b8d4eb78f76fb" gracePeriod=30 Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.092927 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="ceilometer-notification-agent" containerID="cri-o://d9e0d70b1ab5d2268043ec21cc179228d03e79f0c594fbabbe78f8b02d15cad9" gracePeriod=30 Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.093342 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b" containerName="kube-state-metrics" containerID="cri-o://aa45ed833f1221b9bb131eddde949a0bc22ff821678a36dab8182db02d897f83" gracePeriod=30 Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.093401 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="ceilometer-central-agent" containerID="cri-o://bdb39aa61401fd83441079957dca9d830d4f38dae3c0e327bcfc878794649036" gracePeriod=30 Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.093642 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-965f7d5f6-cshp2" 
event={"ID":"0007e89c-1f52-4ac8-beed-59d6db6e60fd","Type":"ContainerDied","Data":"d98661f6d09e3f930d057fd7583b14591e52595630d9c487ff1f51b1c2eb81d0"} Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.093702 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9fe5d992-c030-4957-8388-763c8fa32d22","Type":"ContainerDied","Data":"e5d5c7c5c5992aa7583b39735a8b9b809168a5e579da62b57e92455fa830342d"} Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.093718 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"25254bce-daf4-4521-ae48-e6c53e458cb4","Type":"ContainerDied","Data":"b4041a772e07ae38dd21f6daf35e3b02ea073600ec0a68c9ba11fe62a374af18"} Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.093742 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-3614-account-create-update-w7hnn"] Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.094270 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4bc380a-4852-40d3-b03d-67f762c778d3" containerName="mysql-bootstrap" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.094287 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4bc380a-4852-40d3-b03d-67f762c778d3" containerName="mysql-bootstrap" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.094301 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25254bce-daf4-4521-ae48-e6c53e458cb4" containerName="glance-log" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.094307 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="25254bce-daf4-4521-ae48-e6c53e458cb4" containerName="glance-log" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.094320 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f402e588-3dec-48be-8b5b-5aeaa571b372" containerName="nova-api-api" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.094326 5136 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f402e588-3dec-48be-8b5b-5aeaa571b372" containerName="nova-api-api" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.094338 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="init-config-reloader" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.094344 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="init-config-reloader" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.094356 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4bc380a-4852-40d3-b03d-67f762c778d3" containerName="galera" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.094361 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4bc380a-4852-40d3-b03d-67f762c778d3" containerName="galera" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.094370 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6345b1ce-d7d2-420d-8631-e42fd662d790" containerName="glance-log" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.094376 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6345b1ce-d7d2-420d-8631-e42fd662d790" containerName="glance-log" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.094385 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0007e89c-1f52-4ac8-beed-59d6db6e60fd" containerName="proxy-httpd" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.094391 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0007e89c-1f52-4ac8-beed-59d6db6e60fd" containerName="proxy-httpd" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.094404 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerName="config-reloader" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.094412 5136 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerName="config-reloader" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.094472 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6345b1ce-d7d2-420d-8631-e42fd662d790" containerName="glance-httpd" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.094479 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6345b1ce-d7d2-420d-8631-e42fd662d790" containerName="glance-httpd" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.094490 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" containerName="placement-log" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.094495 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" containerName="placement-log" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.094503 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="config-reloader" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.098739 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="config-reloader" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.098771 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="prometheus" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.098820 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="prometheus" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.098834 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fe5d992-c030-4957-8388-763c8fa32d22" containerName="cinder-api-log" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.098840 5136 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9fe5d992-c030-4957-8388-763c8fa32d22" containerName="cinder-api-log" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.098850 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" containerName="placement-api" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.098858 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" containerName="placement-api" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.098893 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="thanos-sidecar" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.098900 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="thanos-sidecar" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.098909 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7881c5-b719-41b0-8046-249f7fdb6f61" containerName="nova-cell1-conductor-conductor" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.098914 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7881c5-b719-41b0-8046-249f7fdb6f61" containerName="nova-cell1-conductor-conductor" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.098924 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerName="init-config-reloader" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.098931 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerName="init-config-reloader" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.098945 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerName="alertmanager" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.098996 5136 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerName="alertmanager" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.099017 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25254bce-daf4-4521-ae48-e6c53e458cb4" containerName="glance-httpd" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.099024 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="25254bce-daf4-4521-ae48-e6c53e458cb4" containerName="glance-httpd" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.099085 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0007e89c-1f52-4ac8-beed-59d6db6e60fd" containerName="proxy-server" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.099091 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0007e89c-1f52-4ac8-beed-59d6db6e60fd" containerName="proxy-server" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.099099 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fe5d992-c030-4957-8388-763c8fa32d22" containerName="cinder-api" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.099149 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fe5d992-c030-4957-8388-763c8fa32d22" containerName="cinder-api" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.099159 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f402e588-3dec-48be-8b5b-5aeaa571b372" containerName="nova-api-log" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.099170 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f402e588-3dec-48be-8b5b-5aeaa571b372" containerName="nova-api-log" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100163 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f402e588-3dec-48be-8b5b-5aeaa571b372" containerName="nova-api-api" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100189 5136 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerName="config-reloader" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100224 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0007e89c-1f52-4ac8-beed-59d6db6e60fd" containerName="proxy-httpd" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100235 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" containerName="placement-api" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100245 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="25254bce-daf4-4521-ae48-e6c53e458cb4" containerName="glance-httpd" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100255 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea7881c5-b719-41b0-8046-249f7fdb6f61" containerName="nova-cell1-conductor-conductor" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100266 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f402e588-3dec-48be-8b5b-5aeaa571b372" containerName="nova-api-log" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100278 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6345b1ce-d7d2-420d-8631-e42fd662d790" containerName="glance-log" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100307 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="25254bce-daf4-4521-ae48-e6c53e458cb4" containerName="glance-log" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100317 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fe5d992-c030-4957-8388-763c8fa32d22" containerName="cinder-api-log" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100324 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="config-reloader" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100331 5136 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="thanos-sidecar" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100341 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4bc380a-4852-40d3-b03d-67f762c778d3" containerName="galera" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100351 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6345b1ce-d7d2-420d-8631-e42fd662d790" containerName="glance-httpd" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100378 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerName="alertmanager" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100390 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="prometheus" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100399 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0007e89c-1f52-4ac8-beed-59d6db6e60fd" containerName="proxy-server" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100411 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" containerName="placement-log" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100418 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fe5d992-c030-4957-8388-763c8fa32d22" containerName="cinder-api" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.101097 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.101185 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3614-account-create-update-w7hnn"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.101197 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-69dd969bf5-bw8cr"] Mar 20 
09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.101295 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-cron-29566621-n7g7j"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.101369 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-cron-29566621-n7g7j"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.101386 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.101445 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3614-account-create-update-w7hnn"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.101515 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qtl55"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.104404 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-69dd969bf5-bw8cr" podUID="6492170d-c425-4bc1-8f26-b002ade2a30a" containerName="keystone-api" containerID="cri-o://0f88ab7221b1cadfaad64dc8257c4ff6e40bf8acf20bf20f68729db269cd8c5b" gracePeriod=30 Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.104543 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3614-account-create-update-w7hnn" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.110705 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="6fcd7752-be4a-45af-b12d-f4ee6275b3b3" containerName="memcached" containerID="cri-o://6d1d3496245e4a95aee0cd94c2cf161e20d1b4ed2af711b68df528966e0cfc53" gracePeriod=30 Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.116954 5136 scope.go:117] "RemoveContainer" containerID="10b1e7850200894bb4a977e72087deccfe8ba698c99b51dc2f4eca430f875e7b" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.118526 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-config-data" (OuterVolumeSpecName: "config-data") pod "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" (UID: "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.184012 5136 scope.go:117] "RemoveContainer" containerID="9a1c12a00febf49412d459684283c5a7557c491ef07270676850c0c1b6e79a69" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.186701 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt97q\" (UniqueName: \"kubernetes.io/projected/25254bce-daf4-4521-ae48-e6c53e458cb4-kube-api-access-pt97q\") pod \"25254bce-daf4-4521-ae48-e6c53e458cb4\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.186733 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-scripts\") pod \"9fe5d992-c030-4957-8388-763c8fa32d22\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.186775 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-config-data\") pod \"9fe5d992-c030-4957-8388-763c8fa32d22\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.187033 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25254bce-daf4-4521-ae48-e6c53e458cb4-httpd-run\") pod \"25254bce-daf4-4521-ae48-e6c53e458cb4\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.187052 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg2m5\" (UniqueName: \"kubernetes.io/projected/9fe5d992-c030-4957-8388-763c8fa32d22-kube-api-access-hg2m5\") pod \"9fe5d992-c030-4957-8388-763c8fa32d22\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.187069 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fe5d992-c030-4957-8388-763c8fa32d22-logs\") pod \"9fe5d992-c030-4957-8388-763c8fa32d22\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.187088 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-config-data-custom\") pod \"9fe5d992-c030-4957-8388-763c8fa32d22\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.187120 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-internal-tls-certs\") pod \"9fe5d992-c030-4957-8388-763c8fa32d22\" (UID: 
\"9fe5d992-c030-4957-8388-763c8fa32d22\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.187149 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-config-data\") pod \"25254bce-daf4-4521-ae48-e6c53e458cb4\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.187179 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-public-tls-certs\") pod \"9fe5d992-c030-4957-8388-763c8fa32d22\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.187198 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-combined-ca-bundle\") pod \"25254bce-daf4-4521-ae48-e6c53e458cb4\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.198432 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f402e588-3dec-48be-8b5b-5aeaa571b372" (UID: "f402e588-3dec-48be-8b5b-5aeaa571b372"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.198793 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fe5d992-c030-4957-8388-763c8fa32d22-logs" (OuterVolumeSpecName: "logs") pod "9fe5d992-c030-4957-8388-763c8fa32d22" (UID: "9fe5d992-c030-4957-8388-763c8fa32d22"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.200459 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25254bce-daf4-4521-ae48-e6c53e458cb4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "25254bce-daf4-4521-ae48-e6c53e458cb4" (UID: "25254bce-daf4-4521-ae48-e6c53e458cb4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.208611 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-scripts" (OuterVolumeSpecName: "scripts") pod "9fe5d992-c030-4957-8388-763c8fa32d22" (UID: "9fe5d992-c030-4957-8388-763c8fa32d22"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.208641 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9fe5d992-c030-4957-8388-763c8fa32d22" (UID: "9fe5d992-c030-4957-8388-763c8fa32d22"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.209027 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" (UID: "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.210496 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25254bce-daf4-4521-ae48-e6c53e458cb4-kube-api-access-pt97q" (OuterVolumeSpecName: "kube-api-access-pt97q") pod "25254bce-daf4-4521-ae48-e6c53e458cb4" (UID: "25254bce-daf4-4521-ae48-e6c53e458cb4"). InnerVolumeSpecName "kube-api-access-pt97q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.210827 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2a26b1b064e0a10c158983613bb278d761e4a07e8187dca49260154436ad446c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.215099 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2a26b1b064e0a10c158983613bb278d761e4a07e8187dca49260154436ad446c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.217302 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2a26b1b064e0a10c158983613bb278d761e4a07e8187dca49260154436ad446c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.217370 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" 
podUID="11508a60-8214-4811-898f-9542eee208d5" containerName="nova-cell0-conductor-conductor" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.218182 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.218694 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.243101 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fe5d992-c030-4957-8388-763c8fa32d22-kube-api-access-hg2m5" (OuterVolumeSpecName: "kube-api-access-hg2m5") pod "9fe5d992-c030-4957-8388-763c8fa32d22" (UID: "9fe5d992-c030-4957-8388-763c8fa32d22"). InnerVolumeSpecName "kube-api-access-hg2m5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.245745 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qtl55"] Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.250701 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:57 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:57 crc kubenswrapper[5136]: Mar 20 09:02:57 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:57 crc kubenswrapper[5136]: Mar 20 09:02:57 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:57 crc kubenswrapper[5136]: Mar 20 09:02:57 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:57 crc kubenswrapper[5136]: Mar 20 09:02:57 crc kubenswrapper[5136]: if [ -n "" ]; then Mar 20 09:02:57 crc kubenswrapper[5136]: GRANT_DATABASE="" Mar 20 09:02:57 crc kubenswrapper[5136]: else Mar 20 09:02:57 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:57 crc kubenswrapper[5136]: fi Mar 20 09:02:57 crc kubenswrapper[5136]: Mar 20 09:02:57 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:57 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:57 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:57 crc kubenswrapper[5136]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:57 crc kubenswrapper[5136]: # support updates Mar 20 09:02:57 crc kubenswrapper[5136]: Mar 20 09:02:57 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.252171 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-qtl55" podUID="0e905e98-1ffd-4a08-bf51-e89f2d589595" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.303279 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25254bce-daf4-4521-ae48-e6c53e458cb4" (UID: "25254bce-daf4-4521-ae48-e6c53e458cb4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.309994 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" (UID: "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.319973 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-scripts\") pod \"25254bce-daf4-4521-ae48-e6c53e458cb4\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.320132 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fe5d992-c030-4957-8388-763c8fa32d22-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9fe5d992-c030-4957-8388-763c8fa32d22" (UID: "9fe5d992-c030-4957-8388-763c8fa32d22"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.320319 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9fe5d992-c030-4957-8388-763c8fa32d22-etc-machine-id\") pod \"9fe5d992-c030-4957-8388-763c8fa32d22\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.320368 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-combined-ca-bundle\") pod \"9fe5d992-c030-4957-8388-763c8fa32d22\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.320422 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-internal-tls-certs\") pod \"25254bce-daf4-4521-ae48-e6c53e458cb4\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.320458 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25254bce-daf4-4521-ae48-e6c53e458cb4-logs\") pod \"25254bce-daf4-4521-ae48-e6c53e458cb4\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.320859 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.320874 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.320884 5136 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9fe5d992-c030-4957-8388-763c8fa32d22-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.320892 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.320901 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt97q\" (UniqueName: \"kubernetes.io/projected/25254bce-daf4-4521-ae48-e6c53e458cb4-kube-api-access-pt97q\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.320912 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.320963 5136 reconciler_common.go:293] "Volume detached for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.321089 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.321100 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25254bce-daf4-4521-ae48-e6c53e458cb4-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.321108 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg2m5\" (UniqueName: \"kubernetes.io/projected/9fe5d992-c030-4957-8388-763c8fa32d22-kube-api-access-hg2m5\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.321117 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fe5d992-c030-4957-8388-763c8fa32d22-logs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.321144 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25254bce-daf4-4521-ae48-e6c53e458cb4-logs" (OuterVolumeSpecName: "logs") pod "25254bce-daf4-4521-ae48-e6c53e458cb4" (UID: "25254bce-daf4-4521-ae48-e6c53e458cb4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.323629 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-scripts" (OuterVolumeSpecName: "scripts") pod "25254bce-daf4-4521-ae48-e6c53e458cb4" (UID: "25254bce-daf4-4521-ae48-e6c53e458cb4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.330728 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9fe5d992-c030-4957-8388-763c8fa32d22" (UID: "9fe5d992-c030-4957-8388-763c8fa32d22"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.333894 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9fe5d992-c030-4957-8388-763c8fa32d22" (UID: "9fe5d992-c030-4957-8388-763c8fa32d22"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.341011 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-config-data" (OuterVolumeSpecName: "config-data") pod "25254bce-daf4-4521-ae48-e6c53e458cb4" (UID: "25254bce-daf4-4521-ae48-e6c53e458cb4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.346526 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.347641 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-config-data" (OuterVolumeSpecName: "config-data") pod "9fe5d992-c030-4957-8388-763c8fa32d22" (UID: "9fe5d992-c030-4957-8388-763c8fa32d22"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.355508 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3614-account-create-update-w7hnn" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.373985 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9fe5d992-c030-4957-8388-763c8fa32d22" (UID: "9fe5d992-c030-4957-8388-763c8fa32d22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.408024 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="9cf0c76a-c284-44b5-9aee-293de926cb90" containerName="galera" containerID="cri-o://c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe" gracePeriod=30 Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.420837 5136 scope.go:117] "RemoveContainer" containerID="fd2177fb853527e451d4514e1f42c3c8bc13b2a57df6321d454b27c1bc0cfe79" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.421337 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd2177fb853527e451d4514e1f42c3c8bc13b2a57df6321d454b27c1bc0cfe79\": container with ID starting with fd2177fb853527e451d4514e1f42c3c8bc13b2a57df6321d454b27c1bc0cfe79 not found: ID does not exist" containerID="fd2177fb853527e451d4514e1f42c3c8bc13b2a57df6321d454b27c1bc0cfe79" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.421382 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd2177fb853527e451d4514e1f42c3c8bc13b2a57df6321d454b27c1bc0cfe79"} err="failed to get container status 
\"fd2177fb853527e451d4514e1f42c3c8bc13b2a57df6321d454b27c1bc0cfe79\": rpc error: code = NotFound desc = could not find container \"fd2177fb853527e451d4514e1f42c3c8bc13b2a57df6321d454b27c1bc0cfe79\": container with ID starting with fd2177fb853527e451d4514e1f42c3c8bc13b2a57df6321d454b27c1bc0cfe79 not found: ID does not exist" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.421408 5136 scope.go:117] "RemoveContainer" containerID="8e6a89b054dab23d4263c5fb97b6aba8bc51276e7bd2c8d9be34c61f68879a63" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.422390 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fba581c3-e77a-4db7-ac50-bdb17291b2c7-logs\") pod \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.422443 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvh5j\" (UniqueName: \"kubernetes.io/projected/fba581c3-e77a-4db7-ac50-bdb17291b2c7-kube-api-access-jvh5j\") pod \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.422603 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-combined-ca-bundle\") pod \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.422662 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-config-data\") pod \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.422710 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-nova-metadata-tls-certs\") pod \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.423614 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.423646 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.423660 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.423672 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.423689 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.423702 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25254bce-daf4-4521-ae48-e6c53e458cb4-logs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.423713 5136 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.424383 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fba581c3-e77a-4db7-ac50-bdb17291b2c7-logs" (OuterVolumeSpecName: "logs") pod "fba581c3-e77a-4db7-ac50-bdb17291b2c7" (UID: "fba581c3-e77a-4db7-ac50-bdb17291b2c7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.472138 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fba581c3-e77a-4db7-ac50-bdb17291b2c7-kube-api-access-jvh5j" (OuterVolumeSpecName: "kube-api-access-jvh5j") pod "fba581c3-e77a-4db7-ac50-bdb17291b2c7" (UID: "fba581c3-e77a-4db7-ac50-bdb17291b2c7"). InnerVolumeSpecName "kube-api-access-jvh5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.479096 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "25254bce-daf4-4521-ae48-e6c53e458cb4" (UID: "25254bce-daf4-4521-ae48-e6c53e458cb4"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.527340 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fba581c3-e77a-4db7-ac50-bdb17291b2c7-logs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.527370 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvh5j\" (UniqueName: \"kubernetes.io/projected/fba581c3-e77a-4db7-ac50-bdb17291b2c7-kube-api-access-jvh5j\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.527502 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.530572 5136 scope.go:117] "RemoveContainer" containerID="62b81b8fc1d95273635ec6d0f69c524950ac024e8fd9b6ac1d7381fe6f428b6f" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.551000 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-config-data" (OuterVolumeSpecName: "config-data") pod "fba581c3-e77a-4db7-ac50-bdb17291b2c7" (UID: "fba581c3-e77a-4db7-ac50-bdb17291b2c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.551117 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fba581c3-e77a-4db7-ac50-bdb17291b2c7" (UID: "fba581c3-e77a-4db7-ac50-bdb17291b2c7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.569424 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c7dc-account-create-update-bslnf"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.595411 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "fba581c3-e77a-4db7-ac50-bdb17291b2c7" (UID: "fba581c3-e77a-4db7-ac50-bdb17291b2c7"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.619470 5136 scope.go:117] "RemoveContainer" containerID="bd88353eb3bfead6453753b043892b43c76148c22dbdd8749c35d5213cf8d63b" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.619535 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-c7dc-account-create-update-bslnf"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.628647 5136 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.628671 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.628680 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.661992 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/swift-proxy-965f7d5f6-cshp2"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.666376 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-965f7d5f6-cshp2"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.710059 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-6249-account-create-update-mtgp6"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.725089 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-6249-account-create-update-mtgp6"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.759664 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-35ea-account-create-update-7d4sf"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.779794 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-35ea-account-create-update-7d4sf"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.811378 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-adbe-account-create-update-b276s"] Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.811997 5136 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7db26f77_c83b_4eb6_b513_6b0b2be6ebeb.slice/crio-102bdb1b6280dfecffcbef32145242d8452dab99236c089131fd7b267b6cf255\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf402e588_3dec_48be_8b5b_5aeaa571b372.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf402e588_3dec_48be_8b5b_5aeaa571b372.slice/crio-0bd47d70acb181d12064205fc44377458fa88b9280bd44fea4624c5f756f1398\": RecentStats: unable to find data in memory cache]" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.815258 5136 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/alertmanager-metric-storage-0" podUID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerName="alertmanager" probeResult="failure" output="Get \"http://10.217.1.179:9093/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.823442 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-adbe-account-create-update-b276s"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.835623 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.847964 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.849189 5136 scope.go:117] "RemoveContainer" containerID="69c8be45f764ed420f7bbef558c7c52b3207d932e0c8d1c5e50585f4ba78387d" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.875878 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b8c9-account-create-update-hh6pb"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.877441 5136 scope.go:117] "RemoveContainer" containerID="9849b01109acdf6259a4119de8fce764d067a8b917a7d5b6965b6bd00e1aa60a" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.879440 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b8c9-account-create-update-hh6pb"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.900049 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-dvqsp"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.911088 5136 scope.go:117] "RemoveContainer" containerID="b1983019e3cc484fe8f15d4854d502ecd0a69d384bd0f1cd05cd048f9cc159a0" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.920331 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/root-account-create-update-dvqsp"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.928794 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.937974 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.953022 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-3e97-account-create-update-tkzc8"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.958042 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-3e97-account-create-update-tkzc8"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.963646 5136 scope.go:117] "RemoveContainer" containerID="9a6fe348ea134460d09531b2378ad3abce82d81a5457e369dbee025701fbe318" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.971456 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.980518 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.980908 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.985089 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qtl55" event={"ID":"0e905e98-1ffd-4a08-bf51-e89f2d589595","Type":"ContainerStarted","Data":"089c4f1542f895a8c64c404f0f6eac60fa62acf5d0cfe1a1c79aebd6a5cc806f"} Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 
09:02:57.988249 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.996142 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Mar 20 09:02:58 crc kubenswrapper[5136]: E0320 09:02:58.000944 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 20 09:02:58 crc kubenswrapper[5136]: E0320 09:02:58.001150 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9cf0c76a-c284-44b5-9aee-293de926cb90" containerName="galera" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.001560 5136 generic.go:334] "Generic (PLEG): container finished" podID="27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b" containerID="aa45ed833f1221b9bb131eddde949a0bc22ff821678a36dab8182db02d897f83" exitCode=2 Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.001700 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b","Type":"ContainerDied","Data":"aa45ed833f1221b9bb131eddde949a0bc22ff821678a36dab8182db02d897f83"} Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.007488 5136 generic.go:334] "Generic (PLEG): container finished" 
podID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerID="dcd83e13ab91d1b3d212755c407d51e55f888a96ac55ba1a109bfc09166fa35d" exitCode=0 Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.007680 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"040731fb-85ee-40ac-9ea2-3627a5f48766","Type":"ContainerDied","Data":"dcd83e13ab91d1b3d212755c407d51e55f888a96ac55ba1a109bfc09166fa35d"} Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.008382 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.018593 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.023160 5136 generic.go:334] "Generic (PLEG): container finished" podID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerID="65f0fcd421f6ec548cf9de08170a35a1209f40b76c2fe57dae5b8d4eb78f76fb" exitCode=0 Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.023342 5136 generic.go:334] "Generic (PLEG): container finished" podID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerID="22214e3addd0a5c3b338ef171790692102833676b69426afd997304cb1243d2d" exitCode=2 Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.023400 5136 generic.go:334] "Generic (PLEG): container finished" podID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerID="bdb39aa61401fd83441079957dca9d830d4f38dae3c0e327bcfc878794649036" exitCode=0 Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.023484 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dbff142-083b-40b7-a0d7-3f17fa9810e3","Type":"ContainerDied","Data":"65f0fcd421f6ec548cf9de08170a35a1209f40b76c2fe57dae5b8d4eb78f76fb"} Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.023591 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7dbff142-083b-40b7-a0d7-3f17fa9810e3","Type":"ContainerDied","Data":"22214e3addd0a5c3b338ef171790692102833676b69426afd997304cb1243d2d"} Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.023703 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dbff142-083b-40b7-a0d7-3f17fa9810e3","Type":"ContainerDied","Data":"bdb39aa61401fd83441079957dca9d830d4f38dae3c0e327bcfc878794649036"} Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.026930 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.027880 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fba581c3-e77a-4db7-ac50-bdb17291b2c7","Type":"ContainerDied","Data":"a030a1b8ce6d7dd841b09834af231537b00b2434ef87a4f05635fba547adb80f"} Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.028050 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.036078 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-674ffbb556-dfk75"] Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.046885 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-674ffbb556-dfk75"] Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.058174 5136 generic.go:334] "Generic (PLEG): container finished" podID="11508a60-8214-4811-898f-9542eee208d5" containerID="2a26b1b064e0a10c158983613bb278d761e4a07e8187dca49260154436ad446c" exitCode=0 Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.058294 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"11508a60-8214-4811-898f-9542eee208d5","Type":"ContainerDied","Data":"2a26b1b064e0a10c158983613bb278d761e4a07e8187dca49260154436ad446c"} Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.068016 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" podUID="25dc915a-6dbf-4622-bd14-1b372cfe9acc" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.1.167:8000/healthcheck\": read tcp 10.217.0.2:52814->10.217.1.167:8000: read: connection reset by peer" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.068709 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3614-account-create-update-w7hnn" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.069608 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.069639 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.109606 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-7dbf74ffb7-gw5nj" podUID="d397a968-433e-4de9-8ed7-d0247aa5e775" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.1.166:8004/healthcheck\": read tcp 10.217.0.2:51806->10.217.1.166:8004: read: connection reset by peer" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.146684 5136 scope.go:117] "RemoveContainer" containerID="5e5a77d7952567153e8f93b101532ad12cc95f1597c77efe34c080e974b22447" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.260040 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.264717 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.275957 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.289979 5136 scope.go:117] "RemoveContainer" containerID="23592f2e3f685cf11f8e09b90281731a317e3331b51d973536b5b6cf9ce01a69" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.320377 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.345573 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgdmf\" (UniqueName: \"kubernetes.io/projected/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-api-access-hgdmf\") pod \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.345624 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11508a60-8214-4811-898f-9542eee208d5-combined-ca-bundle\") pod \"11508a60-8214-4811-898f-9542eee208d5\" (UID: \"11508a60-8214-4811-898f-9542eee208d5\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.345708 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-state-metrics-tls-config\") pod \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.345797 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11508a60-8214-4811-898f-9542eee208d5-config-data\") pod \"11508a60-8214-4811-898f-9542eee208d5\" (UID: \"11508a60-8214-4811-898f-9542eee208d5\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.345948 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-combined-ca-bundle\") pod \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.345987 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjh89\" (UniqueName: \"kubernetes.io/projected/11508a60-8214-4811-898f-9542eee208d5-kube-api-access-xjh89\") pod \"11508a60-8214-4811-898f-9542eee208d5\" (UID: \"11508a60-8214-4811-898f-9542eee208d5\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.346043 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-state-metrics-tls-certs\") pod \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " Mar 20 09:02:58 crc kubenswrapper[5136]: E0320 09:02:58.346475 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 20 09:02:58 crc kubenswrapper[5136]: E0320 09:02:58.347133 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data podName:e2c9ab46-3143-4472-a606-cd75def78f41 nodeName:}" failed. No retries permitted until 2026-03-20 09:03:06.34711039 +0000 UTC m=+8018.606421561 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data") pod "rabbitmq-cell1-server-0" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41") : configmap "rabbitmq-cell1-config-data" not found Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.351458 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.389875 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11508a60-8214-4811-898f-9542eee208d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11508a60-8214-4811-898f-9542eee208d5" (UID: "11508a60-8214-4811-898f-9542eee208d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.390682 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11508a60-8214-4811-898f-9542eee208d5-kube-api-access-xjh89" (OuterVolumeSpecName: "kube-api-access-xjh89") pod "11508a60-8214-4811-898f-9542eee208d5" (UID: "11508a60-8214-4811-898f-9542eee208d5"). 
InnerVolumeSpecName "kube-api-access-xjh89". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.390855 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-api-access-hgdmf" (OuterVolumeSpecName: "kube-api-access-hgdmf") pod "27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b" (UID: "27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b"). InnerVolumeSpecName "kube-api-access-hgdmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.403995 5136 scope.go:117] "RemoveContainer" containerID="99aa025dc61faebaa87d0e9d2a4856c44ddf012f862d7c369fe941dabbd9836f" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.424232 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b" (UID: "27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.440934 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0007e89c-1f52-4ac8-beed-59d6db6e60fd" path="/var/lib/kubelet/pods/0007e89c-1f52-4ac8-beed-59d6db6e60fd/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.441950 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11508a60-8214-4811-898f-9542eee208d5-config-data" (OuterVolumeSpecName: "config-data") pod "11508a60-8214-4811-898f-9542eee208d5" (UID: "11508a60-8214-4811-898f-9542eee208d5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.442059 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b" (UID: "27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.443501 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25254bce-daf4-4521-ae48-e6c53e458cb4" path="/var/lib/kubelet/pods/25254bce-daf4-4521-ae48-e6c53e458cb4/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.447452 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="254505fd-2596-4c4a-bf0a-2565e8b3ae5c" path="/var/lib/kubelet/pods/254505fd-2596-4c4a-bf0a-2565e8b3ae5c/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.452794 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28644e17-7977-4824-aa44-364f4558d0ad" path="/var/lib/kubelet/pods/28644e17-7977-4824-aa44-364f4558d0ad/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.453640 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="536a487a-ae23-4eed-9bc8-221a9b85bed4" path="/var/lib/kubelet/pods/536a487a-ae23-4eed-9bc8-221a9b85bed4/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.454161 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53db9385-e63d-49a6-8dab-854c4bcd01f1" path="/var/lib/kubelet/pods/53db9385-e63d-49a6-8dab-854c4bcd01f1/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.454942 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6345b1ce-d7d2-420d-8631-e42fd662d790" 
path="/var/lib/kubelet/pods/6345b1ce-d7d2-420d-8631-e42fd662d790/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.456450 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7660b6b5-094d-4da5-9d34-fe85c863d887" path="/var/lib/kubelet/pods/7660b6b5-094d-4da5-9d34-fe85c863d887/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.457126 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" path="/var/lib/kubelet/pods/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.457547 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11508a60-8214-4811-898f-9542eee208d5-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.458258 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.458275 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjh89\" (UniqueName: \"kubernetes.io/projected/11508a60-8214-4811-898f-9542eee208d5-kube-api-access-xjh89\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.458285 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgdmf\" (UniqueName: \"kubernetes.io/projected/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-api-access-hgdmf\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.458321 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11508a60-8214-4811-898f-9542eee208d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 
09:02:58.458392 5136 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.457972 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8494da27-4688-4c23-b4bd-77a8cac9ae31" path="/var/lib/kubelet/pods/8494da27-4688-4c23-b4bd-77a8cac9ae31/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.460210 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1" path="/var/lib/kubelet/pods/bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.460690 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdff16b6-0410-4448-a15c-3f22f5890d91" path="/var/lib/kubelet/pods/bdff16b6-0410-4448-a15c-3f22f5890d91/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.461160 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c532fd14-6718-4c7d-9e38-c68bf7b2da6b" path="/var/lib/kubelet/pods/c532fd14-6718-4c7d-9e38-c68bf7b2da6b/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.461686 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4bc380a-4852-40d3-b03d-67f762c778d3" path="/var/lib/kubelet/pods/d4bc380a-4852-40d3-b03d-67f762c778d3/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.466573 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea7881c5-b719-41b0-8046-249f7fdb6f61" path="/var/lib/kubelet/pods/ea7881c5-b719-41b0-8046-249f7fdb6f61/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.471320 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f402e588-3dec-48be-8b5b-5aeaa571b372" path="/var/lib/kubelet/pods/f402e588-3dec-48be-8b5b-5aeaa571b372/volumes" 
Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.494698 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.500148 5136 scope.go:117] "RemoveContainer" containerID="7ecfa88277a19c2fc4a9782c7beb9c21c6c1a5a38d56723b3e67fc0044f8bbb4" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.503970 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qtl55" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.509729 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3614-account-create-update-w7hnn"] Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.523059 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b" (UID: "27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.544182 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-3614-account-create-update-w7hnn"] Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.551375 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.561109 5136 scope.go:117] "RemoveContainer" containerID="9e5b58fa90ab6a9a965276a68d4ee135aa252e61fbc159c5d0aa6f6134637333" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.561673 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e905e98-1ffd-4a08-bf51-e89f2d589595-operator-scripts\") pod \"0e905e98-1ffd-4a08-bf51-e89f2d589595\" (UID: \"0e905e98-1ffd-4a08-bf51-e89f2d589595\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.561716 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwswp\" (UniqueName: \"kubernetes.io/projected/0e905e98-1ffd-4a08-bf51-e89f2d589595-kube-api-access-vwswp\") pod \"0e905e98-1ffd-4a08-bf51-e89f2d589595\" (UID: \"0e905e98-1ffd-4a08-bf51-e89f2d589595\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.562270 5136 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.563998 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e905e98-1ffd-4a08-bf51-e89f2d589595-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0e905e98-1ffd-4a08-bf51-e89f2d589595" (UID: "0e905e98-1ffd-4a08-bf51-e89f2d589595"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.566086 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e905e98-1ffd-4a08-bf51-e89f2d589595-kube-api-access-vwswp" (OuterVolumeSpecName: "kube-api-access-vwswp") pod "0e905e98-1ffd-4a08-bf51-e89f2d589595" (UID: "0e905e98-1ffd-4a08-bf51-e89f2d589595"). InnerVolumeSpecName "kube-api-access-vwswp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.595389 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.611387 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.630027 5136 scope.go:117] "RemoveContainer" containerID="1e2a347a5b7fd1a421ed6a7c665114567c818ec00d533ffab87b5e587c7ecf89" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.664121 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-public-tls-certs\") pod \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.664287 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-config-data-custom\") pod \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.664330 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-combined-ca-bundle\") pod 
\"25dc915a-6dbf-4622-bd14-1b372cfe9acc\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.664516 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-internal-tls-certs\") pod \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.664569 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-config-data\") pod \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.664590 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grml6\" (UniqueName: \"kubernetes.io/projected/25dc915a-6dbf-4622-bd14-1b372cfe9acc-kube-api-access-grml6\") pod \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.665111 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwswp\" (UniqueName: \"kubernetes.io/projected/0e905e98-1ffd-4a08-bf51-e89f2d589595-kube-api-access-vwswp\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.665139 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e905e98-1ffd-4a08-bf51-e89f2d589595-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.668033 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25dc915a-6dbf-4622-bd14-1b372cfe9acc-kube-api-access-grml6" (OuterVolumeSpecName: "kube-api-access-grml6") pod 
"25dc915a-6dbf-4622-bd14-1b372cfe9acc" (UID: "25dc915a-6dbf-4622-bd14-1b372cfe9acc"). InnerVolumeSpecName "kube-api-access-grml6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.670150 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "25dc915a-6dbf-4622-bd14-1b372cfe9acc" (UID: "25dc915a-6dbf-4622-bd14-1b372cfe9acc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.702149 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-config-data" (OuterVolumeSpecName: "config-data") pod "25dc915a-6dbf-4622-bd14-1b372cfe9acc" (UID: "25dc915a-6dbf-4622-bd14-1b372cfe9acc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.711690 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "25dc915a-6dbf-4622-bd14-1b372cfe9acc" (UID: "25dc915a-6dbf-4622-bd14-1b372cfe9acc"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.721401 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "25dc915a-6dbf-4622-bd14-1b372cfe9acc" (UID: "25dc915a-6dbf-4622-bd14-1b372cfe9acc"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.737416 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25dc915a-6dbf-4622-bd14-1b372cfe9acc" (UID: "25dc915a-6dbf-4622-bd14-1b372cfe9acc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.766658 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.766692 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.766701 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.766709 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.766717 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.766726 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grml6\" 
(UniqueName: \"kubernetes.io/projected/25dc915a-6dbf-4622-bd14-1b372cfe9acc-kube-api-access-grml6\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.799479 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.835640 5136 scope.go:117] "RemoveContainer" containerID="cd5663a9b617be114b64e32a8582baab8d6015f76d7bc3afb172624a4c98b3c7" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.868049 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-combined-ca-bundle\") pod \"d397a968-433e-4de9-8ed7-d0247aa5e775\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.868175 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-internal-tls-certs\") pod \"d397a968-433e-4de9-8ed7-d0247aa5e775\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.868212 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-config-data\") pod \"d397a968-433e-4de9-8ed7-d0247aa5e775\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.868236 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-public-tls-certs\") pod \"d397a968-433e-4de9-8ed7-d0247aa5e775\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.868275 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrrlc\" (UniqueName: \"kubernetes.io/projected/d397a968-433e-4de9-8ed7-d0247aa5e775-kube-api-access-vrrlc\") pod \"d397a968-433e-4de9-8ed7-d0247aa5e775\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.868340 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-config-data-custom\") pod \"d397a968-433e-4de9-8ed7-d0247aa5e775\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.887194 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d397a968-433e-4de9-8ed7-d0247aa5e775" (UID: "d397a968-433e-4de9-8ed7-d0247aa5e775"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.887244 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d397a968-433e-4de9-8ed7-d0247aa5e775-kube-api-access-vrrlc" (OuterVolumeSpecName: "kube-api-access-vrrlc") pod "d397a968-433e-4de9-8ed7-d0247aa5e775" (UID: "d397a968-433e-4de9-8ed7-d0247aa5e775"). InnerVolumeSpecName "kube-api-access-vrrlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.916274 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d397a968-433e-4de9-8ed7-d0247aa5e775" (UID: "d397a968-433e-4de9-8ed7-d0247aa5e775"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.922894 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d397a968-433e-4de9-8ed7-d0247aa5e775" (UID: "d397a968-433e-4de9-8ed7-d0247aa5e775"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.930643 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.944566 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-config-data" (OuterVolumeSpecName: "config-data") pod "d397a968-433e-4de9-8ed7-d0247aa5e775" (UID: "d397a968-433e-4de9-8ed7-d0247aa5e775"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.944587 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d397a968-433e-4de9-8ed7-d0247aa5e775" (UID: "d397a968-433e-4de9-8ed7-d0247aa5e775"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.970069 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-combined-ca-bundle\") pod \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.970409 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-memcached-tls-certs\") pod \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.970495 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xz48\" (UniqueName: \"kubernetes.io/projected/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-kube-api-access-9xz48\") pod \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.970879 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-kolla-config\") pod \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.971120 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-config-data\") pod \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.971537 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "6fcd7752-be4a-45af-b12d-f4ee6275b3b3" (UID: "6fcd7752-be4a-45af-b12d-f4ee6275b3b3"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.971573 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-config-data" (OuterVolumeSpecName: "config-data") pod "6fcd7752-be4a-45af-b12d-f4ee6275b3b3" (UID: "6fcd7752-be4a-45af-b12d-f4ee6275b3b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.972233 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.972318 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrrlc\" (UniqueName: \"kubernetes.io/projected/d397a968-433e-4de9-8ed7-d0247aa5e775-kube-api-access-vrrlc\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.972352 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.972363 5136 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.972371 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.972379 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.972388 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.972398 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.975315 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-kube-api-access-9xz48" (OuterVolumeSpecName: "kube-api-access-9xz48") pod "6fcd7752-be4a-45af-b12d-f4ee6275b3b3" (UID: "6fcd7752-be4a-45af-b12d-f4ee6275b3b3"). InnerVolumeSpecName "kube-api-access-9xz48". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.005918 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fcd7752-be4a-45af-b12d-f4ee6275b3b3" (UID: "6fcd7752-be4a-45af-b12d-f4ee6275b3b3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.027338 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "6fcd7752-be4a-45af-b12d-f4ee6275b3b3" (UID: "6fcd7752-be4a-45af-b12d-f4ee6275b3b3"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.073493 5136 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.073524 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xz48\" (UniqueName: \"kubernetes.io/projected/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-kube-api-access-9xz48\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.073534 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.082992 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_22659681-bc2b-4056-81d6-96b046e45712/ovn-northd/0.log" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.083055 5136 generic.go:334] "Generic (PLEG): container finished" podID="22659681-bc2b-4056-81d6-96b046e45712" containerID="491de9d04cd0e7b590564b9e33c6d2d10b2f1cc31960d95aac6f47504fb3ad0d" exitCode=139 Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.083156 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"22659681-bc2b-4056-81d6-96b046e45712","Type":"ContainerDied","Data":"491de9d04cd0e7b590564b9e33c6d2d10b2f1cc31960d95aac6f47504fb3ad0d"} Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.084259 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qtl55" event={"ID":"0e905e98-1ffd-4a08-bf51-e89f2d589595","Type":"ContainerDied","Data":"089c4f1542f895a8c64c404f0f6eac60fa62acf5d0cfe1a1c79aebd6a5cc806f"} Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.084339 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qtl55" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.087077 5136 generic.go:334] "Generic (PLEG): container finished" podID="25dc915a-6dbf-4622-bd14-1b372cfe9acc" containerID="3c6f66a5f458fac9c5e4e3c5c401fe49e93f98ab9783d1eb99b59cc0b35fe3d5" exitCode=0 Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.087170 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" event={"ID":"25dc915a-6dbf-4622-bd14-1b372cfe9acc","Type":"ContainerDied","Data":"3c6f66a5f458fac9c5e4e3c5c401fe49e93f98ab9783d1eb99b59cc0b35fe3d5"} Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.087188 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" event={"ID":"25dc915a-6dbf-4622-bd14-1b372cfe9acc","Type":"ContainerDied","Data":"3210b5910d0a16311abb43994ee90a4669047a646cbfeed19742a3d4c20fe707"} Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.087204 5136 scope.go:117] "RemoveContainer" containerID="3c6f66a5f458fac9c5e4e3c5c401fe49e93f98ab9783d1eb99b59cc0b35fe3d5" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.087302 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.092382 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b","Type":"ContainerDied","Data":"5287b5546ce2593541455a64057c115d269b5e5f8d4df65c154feacababa85d9"} Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.092411 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.096696 5136 generic.go:334] "Generic (PLEG): container finished" podID="6fcd7752-be4a-45af-b12d-f4ee6275b3b3" containerID="6d1d3496245e4a95aee0cd94c2cf161e20d1b4ed2af711b68df528966e0cfc53" exitCode=0 Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.096749 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6fcd7752-be4a-45af-b12d-f4ee6275b3b3","Type":"ContainerDied","Data":"6d1d3496245e4a95aee0cd94c2cf161e20d1b4ed2af711b68df528966e0cfc53"} Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.096769 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6fcd7752-be4a-45af-b12d-f4ee6275b3b3","Type":"ContainerDied","Data":"9cf9cadd89e2b28a829e6e81692bf2693c40f2c59fbdfc4c88536b7ae65a16d3"} Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.096853 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.104643 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"11508a60-8214-4811-898f-9542eee208d5","Type":"ContainerDied","Data":"a3d117a77c0748e946444e66ceaf44864c92bbd1f42406d2aed51d06c0feb90d"} Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.104734 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.115048 5136 generic.go:334] "Generic (PLEG): container finished" podID="d397a968-433e-4de9-8ed7-d0247aa5e775" containerID="c94da038d11912247034fdd2fdddaddf98271b2ae30e1d7514d6c19c3c8f08be" exitCode=0 Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.115089 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7dbf74ffb7-gw5nj" event={"ID":"d397a968-433e-4de9-8ed7-d0247aa5e775","Type":"ContainerDied","Data":"c94da038d11912247034fdd2fdddaddf98271b2ae30e1d7514d6c19c3c8f08be"} Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.115114 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7dbf74ffb7-gw5nj" event={"ID":"d397a968-433e-4de9-8ed7-d0247aa5e775","Type":"ContainerDied","Data":"ca4ec121b137203fb91f384175b7088e11aa189eac35f5c700b19c4a087e9179"} Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.115178 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.144624 5136 scope.go:117] "RemoveContainer" containerID="3c6f66a5f458fac9c5e4e3c5c401fe49e93f98ab9783d1eb99b59cc0b35fe3d5" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.148280 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-55f46cdf9d-2mcgl"] Mar 20 09:02:59 crc kubenswrapper[5136]: E0320 09:02:59.150595 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c6f66a5f458fac9c5e4e3c5c401fe49e93f98ab9783d1eb99b59cc0b35fe3d5\": container with ID starting with 3c6f66a5f458fac9c5e4e3c5c401fe49e93f98ab9783d1eb99b59cc0b35fe3d5 not found: ID does not exist" containerID="3c6f66a5f458fac9c5e4e3c5c401fe49e93f98ab9783d1eb99b59cc0b35fe3d5" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.150634 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c6f66a5f458fac9c5e4e3c5c401fe49e93f98ab9783d1eb99b59cc0b35fe3d5"} err="failed to get container status \"3c6f66a5f458fac9c5e4e3c5c401fe49e93f98ab9783d1eb99b59cc0b35fe3d5\": rpc error: code = NotFound desc = could not find container \"3c6f66a5f458fac9c5e4e3c5c401fe49e93f98ab9783d1eb99b59cc0b35fe3d5\": container with ID starting with 3c6f66a5f458fac9c5e4e3c5c401fe49e93f98ab9783d1eb99b59cc0b35fe3d5 not found: ID does not exist" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.150660 5136 scope.go:117] "RemoveContainer" containerID="aa45ed833f1221b9bb131eddde949a0bc22ff821678a36dab8182db02d897f83" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.162500 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-55f46cdf9d-2mcgl"] Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.171559 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 20 09:02:59 crc 
kubenswrapper[5136]: I0320 09:02:59.179494 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.188424 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.202433 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 09:02:59 crc kubenswrapper[5136]: E0320 09:02:59.202981 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 491de9d04cd0e7b590564b9e33c6d2d10b2f1cc31960d95aac6f47504fb3ad0d is running failed: container process not found" containerID="491de9d04cd0e7b590564b9e33c6d2d10b2f1cc31960d95aac6f47504fb3ad0d" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Mar 20 09:02:59 crc kubenswrapper[5136]: E0320 09:02:59.204333 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 491de9d04cd0e7b590564b9e33c6d2d10b2f1cc31960d95aac6f47504fb3ad0d is running failed: container process not found" containerID="491de9d04cd0e7b590564b9e33c6d2d10b2f1cc31960d95aac6f47504fb3ad0d" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Mar 20 09:02:59 crc kubenswrapper[5136]: E0320 09:02:59.204624 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 491de9d04cd0e7b590564b9e33c6d2d10b2f1cc31960d95aac6f47504fb3ad0d is running failed: container process not found" containerID="491de9d04cd0e7b590564b9e33c6d2d10b2f1cc31960d95aac6f47504fb3ad0d" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Mar 20 09:02:59 crc kubenswrapper[5136]: E0320 09:02:59.204655 5136 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is 
not created or running: checking if PID of 491de9d04cd0e7b590564b9e33c6d2d10b2f1cc31960d95aac6f47504fb3ad0d is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="22659681-bc2b-4056-81d6-96b046e45712" containerName="ovn-northd" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.205004 5136 scope.go:117] "RemoveContainer" containerID="6d1d3496245e4a95aee0cd94c2cf161e20d1b4ed2af711b68df528966e0cfc53" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.229114 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qtl55"] Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.240417 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qtl55"] Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.245589 5136 scope.go:117] "RemoveContainer" containerID="6d1d3496245e4a95aee0cd94c2cf161e20d1b4ed2af711b68df528966e0cfc53" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.247021 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7dbf74ffb7-gw5nj"] Mar 20 09:02:59 crc kubenswrapper[5136]: E0320 09:02:59.252909 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d1d3496245e4a95aee0cd94c2cf161e20d1b4ed2af711b68df528966e0cfc53\": container with ID starting with 6d1d3496245e4a95aee0cd94c2cf161e20d1b4ed2af711b68df528966e0cfc53 not found: ID does not exist" containerID="6d1d3496245e4a95aee0cd94c2cf161e20d1b4ed2af711b68df528966e0cfc53" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.252949 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d1d3496245e4a95aee0cd94c2cf161e20d1b4ed2af711b68df528966e0cfc53"} err="failed to get container status \"6d1d3496245e4a95aee0cd94c2cf161e20d1b4ed2af711b68df528966e0cfc53\": rpc error: code = NotFound desc = could not find container 
\"6d1d3496245e4a95aee0cd94c2cf161e20d1b4ed2af711b68df528966e0cfc53\": container with ID starting with 6d1d3496245e4a95aee0cd94c2cf161e20d1b4ed2af711b68df528966e0cfc53 not found: ID does not exist" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.252974 5136 scope.go:117] "RemoveContainer" containerID="2a26b1b064e0a10c158983613bb278d761e4a07e8187dca49260154436ad446c" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.252970 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-7dbf74ffb7-gw5nj"] Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.259872 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.264982 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.277190 5136 scope.go:117] "RemoveContainer" containerID="c94da038d11912247034fdd2fdddaddf98271b2ae30e1d7514d6c19c3c8f08be" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.307727 5136 scope.go:117] "RemoveContainer" containerID="c94da038d11912247034fdd2fdddaddf98271b2ae30e1d7514d6c19c3c8f08be" Mar 20 09:02:59 crc kubenswrapper[5136]: E0320 09:02:59.308843 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c94da038d11912247034fdd2fdddaddf98271b2ae30e1d7514d6c19c3c8f08be\": container with ID starting with c94da038d11912247034fdd2fdddaddf98271b2ae30e1d7514d6c19c3c8f08be not found: ID does not exist" containerID="c94da038d11912247034fdd2fdddaddf98271b2ae30e1d7514d6c19c3c8f08be" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.308907 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c94da038d11912247034fdd2fdddaddf98271b2ae30e1d7514d6c19c3c8f08be"} err="failed to get container status \"c94da038d11912247034fdd2fdddaddf98271b2ae30e1d7514d6c19c3c8f08be\": rpc 
error: code = NotFound desc = could not find container \"c94da038d11912247034fdd2fdddaddf98271b2ae30e1d7514d6c19c3c8f08be\": container with ID starting with c94da038d11912247034fdd2fdddaddf98271b2ae30e1d7514d6c19c3c8f08be not found: ID does not exist" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.379141 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_22659681-bc2b-4056-81d6-96b046e45712/ovn-northd/0.log" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.379216 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.478792 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-metrics-certs-tls-certs\") pod \"22659681-bc2b-4056-81d6-96b046e45712\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.479220 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w7g9\" (UniqueName: \"kubernetes.io/projected/22659681-bc2b-4056-81d6-96b046e45712-kube-api-access-2w7g9\") pod \"22659681-bc2b-4056-81d6-96b046e45712\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.479286 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-combined-ca-bundle\") pod \"22659681-bc2b-4056-81d6-96b046e45712\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.479331 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22659681-bc2b-4056-81d6-96b046e45712-config\") pod 
\"22659681-bc2b-4056-81d6-96b046e45712\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.479360 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22659681-bc2b-4056-81d6-96b046e45712-ovn-rundir\") pod \"22659681-bc2b-4056-81d6-96b046e45712\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.479456 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22659681-bc2b-4056-81d6-96b046e45712-scripts\") pod \"22659681-bc2b-4056-81d6-96b046e45712\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.479638 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-ovn-northd-tls-certs\") pod \"22659681-bc2b-4056-81d6-96b046e45712\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.481469 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22659681-bc2b-4056-81d6-96b046e45712-scripts" (OuterVolumeSpecName: "scripts") pod "22659681-bc2b-4056-81d6-96b046e45712" (UID: "22659681-bc2b-4056-81d6-96b046e45712"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.482874 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22659681-bc2b-4056-81d6-96b046e45712-config" (OuterVolumeSpecName: "config") pod "22659681-bc2b-4056-81d6-96b046e45712" (UID: "22659681-bc2b-4056-81d6-96b046e45712"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.483098 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22659681-bc2b-4056-81d6-96b046e45712-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "22659681-bc2b-4056-81d6-96b046e45712" (UID: "22659681-bc2b-4056-81d6-96b046e45712"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.485503 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22659681-bc2b-4056-81d6-96b046e45712-kube-api-access-2w7g9" (OuterVolumeSpecName: "kube-api-access-2w7g9") pod "22659681-bc2b-4056-81d6-96b046e45712" (UID: "22659681-bc2b-4056-81d6-96b046e45712"). InnerVolumeSpecName "kube-api-access-2w7g9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.511966 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22659681-bc2b-4056-81d6-96b046e45712" (UID: "22659681-bc2b-4056-81d6-96b046e45712"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.568015 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "22659681-bc2b-4056-81d6-96b046e45712" (UID: "22659681-bc2b-4056-81d6-96b046e45712"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.577899 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "22659681-bc2b-4056-81d6-96b046e45712" (UID: "22659681-bc2b-4056-81d6-96b046e45712"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.579938 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.581969 5136 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.582012 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.582024 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w7g9\" (UniqueName: \"kubernetes.io/projected/22659681-bc2b-4056-81d6-96b046e45712-kube-api-access-2w7g9\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.582034 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.582043 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22659681-bc2b-4056-81d6-96b046e45712-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.582066 5136 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22659681-bc2b-4056-81d6-96b046e45712-ovn-rundir\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.582075 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22659681-bc2b-4056-81d6-96b046e45712-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.683026 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zckzm\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-kube-api-access-zckzm\") pod \"e2c9ab46-3143-4472-a606-cd75def78f41\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.683083 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-erlang-cookie\") pod \"e2c9ab46-3143-4472-a606-cd75def78f41\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.683116 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-confd\") pod \"e2c9ab46-3143-4472-a606-cd75def78f41\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.683138 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-server-conf\") pod 
\"e2c9ab46-3143-4472-a606-cd75def78f41\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.683162 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-plugins-conf\") pod \"e2c9ab46-3143-4472-a606-cd75def78f41\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.683719 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") pod \"e2c9ab46-3143-4472-a606-cd75def78f41\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.683751 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2c9ab46-3143-4472-a606-cd75def78f41-erlang-cookie-secret\") pod \"e2c9ab46-3143-4472-a606-cd75def78f41\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.683780 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-plugins\") pod \"e2c9ab46-3143-4472-a606-cd75def78f41\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.683880 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data\") pod \"e2c9ab46-3143-4472-a606-cd75def78f41\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.683915 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2c9ab46-3143-4472-a606-cd75def78f41-pod-info\") pod \"e2c9ab46-3143-4472-a606-cd75def78f41\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.683939 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-tls\") pod \"e2c9ab46-3143-4472-a606-cd75def78f41\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.684090 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e2c9ab46-3143-4472-a606-cd75def78f41" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.684533 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.684797 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e2c9ab46-3143-4472-a606-cd75def78f41" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.686352 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e2c9ab46-3143-4472-a606-cd75def78f41" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.686727 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-kube-api-access-zckzm" (OuterVolumeSpecName: "kube-api-access-zckzm") pod "e2c9ab46-3143-4472-a606-cd75def78f41" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41"). InnerVolumeSpecName "kube-api-access-zckzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.688658 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e2c9ab46-3143-4472-a606-cd75def78f41" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.690883 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2c9ab46-3143-4472-a606-cd75def78f41-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e2c9ab46-3143-4472-a606-cd75def78f41" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.690899 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e2c9ab46-3143-4472-a606-cd75def78f41-pod-info" (OuterVolumeSpecName: "pod-info") pod "e2c9ab46-3143-4472-a606-cd75def78f41" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.704430 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e" (OuterVolumeSpecName: "persistence") pod "e2c9ab46-3143-4472-a606-cd75def78f41" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41"). InnerVolumeSpecName "pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.721950 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data" (OuterVolumeSpecName: "config-data") pod "e2c9ab46-3143-4472-a606-cd75def78f41" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.732640 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-server-conf" (OuterVolumeSpecName: "server-conf") pod "e2c9ab46-3143-4472-a606-cd75def78f41" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.770831 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e2c9ab46-3143-4472-a606-cd75def78f41" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.786888 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.786912 5136 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2c9ab46-3143-4472-a606-cd75def78f41-pod-info\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.786921 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.786931 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zckzm\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-kube-api-access-zckzm\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.786940 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.786948 5136 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-server-conf\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.786957 5136 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.786986 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") on node \"crc\" " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.786998 5136 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2c9ab46-3143-4472-a606-cd75def78f41-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.787007 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.806386 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.806575 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e") on node "crc" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.810860 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.887693 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trz2t\" (UniqueName: \"kubernetes.io/projected/9cf0c76a-c284-44b5-9aee-293de926cb90-kube-api-access-trz2t\") pod \"9cf0c76a-c284-44b5-9aee-293de926cb90\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.887791 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-config-data-default\") pod \"9cf0c76a-c284-44b5-9aee-293de926cb90\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.887884 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf0c76a-c284-44b5-9aee-293de926cb90-combined-ca-bundle\") pod \"9cf0c76a-c284-44b5-9aee-293de926cb90\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.887922 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-operator-scripts\") pod \"9cf0c76a-c284-44b5-9aee-293de926cb90\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.887955 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9cf0c76a-c284-44b5-9aee-293de926cb90-config-data-generated\") pod \"9cf0c76a-c284-44b5-9aee-293de926cb90\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.888032 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-kolla-config\") pod \"9cf0c76a-c284-44b5-9aee-293de926cb90\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.888054 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf0c76a-c284-44b5-9aee-293de926cb90-galera-tls-certs\") pod \"9cf0c76a-c284-44b5-9aee-293de926cb90\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.888490 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cf0c76a-c284-44b5-9aee-293de926cb90-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "9cf0c76a-c284-44b5-9aee-293de926cb90" (UID: "9cf0c76a-c284-44b5-9aee-293de926cb90"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.891956 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "9cf0c76a-c284-44b5-9aee-293de926cb90" (UID: "9cf0c76a-c284-44b5-9aee-293de926cb90"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.892856 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e\") pod \"9cf0c76a-c284-44b5-9aee-293de926cb90\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.893328 5136 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.893352 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.893367 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9cf0c76a-c284-44b5-9aee-293de926cb90-config-data-generated\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.893410 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9cf0c76a-c284-44b5-9aee-293de926cb90" (UID: "9cf0c76a-c284-44b5-9aee-293de926cb90"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.893534 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "9cf0c76a-c284-44b5-9aee-293de926cb90" (UID: "9cf0c76a-c284-44b5-9aee-293de926cb90"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.894627 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cf0c76a-c284-44b5-9aee-293de926cb90-kube-api-access-trz2t" (OuterVolumeSpecName: "kube-api-access-trz2t") pod "9cf0c76a-c284-44b5-9aee-293de926cb90" (UID: "9cf0c76a-c284-44b5-9aee-293de926cb90"). InnerVolumeSpecName "kube-api-access-trz2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.918021 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cf0c76a-c284-44b5-9aee-293de926cb90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9cf0c76a-c284-44b5-9aee-293de926cb90" (UID: "9cf0c76a-c284-44b5-9aee-293de926cb90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.981670 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cf0c76a-c284-44b5-9aee-293de926cb90-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "9cf0c76a-c284-44b5-9aee-293de926cb90" (UID: "9cf0c76a-c284-44b5-9aee-293de926cb90"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.994172 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e" (OuterVolumeSpecName: "mysql-db") pod "9cf0c76a-c284-44b5-9aee-293de926cb90" (UID: "9cf0c76a-c284-44b5-9aee-293de926cb90"). InnerVolumeSpecName "pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 09:03:00 crc kubenswrapper[5136]: E0320 09:03:00.009957 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Mar 20 09:03:00 crc kubenswrapper[5136]: E0320 09:03:00.010047 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data podName:804d1bff-7c63-45a1-bf1a-68f3eedb6ac7 nodeName:}" failed. No retries permitted until 2026-03-20 09:03:08.010026825 +0000 UTC m=+8020.269337976 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data") pod "rabbitmq-server-0" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7") : configmap "rabbitmq-config-data" not found Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.010129 5136 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf0c76a-c284-44b5-9aee-293de926cb90-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.010203 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e\") on node \"crc\" " Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.010218 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trz2t\" (UniqueName: \"kubernetes.io/projected/9cf0c76a-c284-44b5-9aee-293de926cb90-kube-api-access-trz2t\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.010228 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-config-data-default\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.010238 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf0c76a-c284-44b5-9aee-293de926cb90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.010247 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: 
I0320 09:03:00.030840 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.030981 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e") on node "crc" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.112485 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.127071 5136 generic.go:334] "Generic (PLEG): container finished" podID="e2c9ab46-3143-4472-a606-cd75def78f41" containerID="a198c7f8e8377652b482647dfc5c30af750d97e67707d6fc3e807132b202cf80" exitCode=0 Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.127131 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e2c9ab46-3143-4472-a606-cd75def78f41","Type":"ContainerDied","Data":"a198c7f8e8377652b482647dfc5c30af750d97e67707d6fc3e807132b202cf80"} Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.127156 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e2c9ab46-3143-4472-a606-cd75def78f41","Type":"ContainerDied","Data":"25cfdad0d21b0236b303f371c98360bf9fc45a61724844374d71ad0ec3fcc738"} Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.127174 5136 scope.go:117] "RemoveContainer" containerID="a198c7f8e8377652b482647dfc5c30af750d97e67707d6fc3e807132b202cf80" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.127267 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.132331 5136 generic.go:334] "Generic (PLEG): container finished" podID="9cf0c76a-c284-44b5-9aee-293de926cb90" containerID="c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe" exitCode=0 Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.132409 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9cf0c76a-c284-44b5-9aee-293de926cb90","Type":"ContainerDied","Data":"c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe"} Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.132408 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.132436 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9cf0c76a-c284-44b5-9aee-293de926cb90","Type":"ContainerDied","Data":"a3ca3c82737ff9666b59eaa26c9fcedbcaf8829fd4670afceb4988d0c1b4a157"} Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.137919 5136 generic.go:334] "Generic (PLEG): container finished" podID="804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" containerID="55989c472a0077640a315a8d0db45eb57abfdba8bbd4415c0bb59c9d232cd911" exitCode=0 Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.137998 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7","Type":"ContainerDied","Data":"55989c472a0077640a315a8d0db45eb57abfdba8bbd4415c0bb59c9d232cd911"} Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.151523 5136 scope.go:117] "RemoveContainer" containerID="ce1a8383fb01688c0f1caeb5e4320574fa6dbdbc0ec482dd28b879fc3c091ab9" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.155048 5136 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-northd-0_22659681-bc2b-4056-81d6-96b046e45712/ovn-northd/0.log" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.155117 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"22659681-bc2b-4056-81d6-96b046e45712","Type":"ContainerDied","Data":"967dc1b56184016d81cec7d8f7dbb1450bc76817bce0736f60753fc52d0a9ea2"} Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.155205 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.191969 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.205102 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.211828 5136 scope.go:117] "RemoveContainer" containerID="a198c7f8e8377652b482647dfc5c30af750d97e67707d6fc3e807132b202cf80" Mar 20 09:03:00 crc kubenswrapper[5136]: E0320 09:03:00.214643 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a198c7f8e8377652b482647dfc5c30af750d97e67707d6fc3e807132b202cf80\": container with ID starting with a198c7f8e8377652b482647dfc5c30af750d97e67707d6fc3e807132b202cf80 not found: ID does not exist" containerID="a198c7f8e8377652b482647dfc5c30af750d97e67707d6fc3e807132b202cf80" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.214671 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a198c7f8e8377652b482647dfc5c30af750d97e67707d6fc3e807132b202cf80"} err="failed to get container status \"a198c7f8e8377652b482647dfc5c30af750d97e67707d6fc3e807132b202cf80\": rpc error: code = NotFound desc = could not find container \"a198c7f8e8377652b482647dfc5c30af750d97e67707d6fc3e807132b202cf80\": 
container with ID starting with a198c7f8e8377652b482647dfc5c30af750d97e67707d6fc3e807132b202cf80 not found: ID does not exist" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.214689 5136 scope.go:117] "RemoveContainer" containerID="ce1a8383fb01688c0f1caeb5e4320574fa6dbdbc0ec482dd28b879fc3c091ab9" Mar 20 09:03:00 crc kubenswrapper[5136]: E0320 09:03:00.214966 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce1a8383fb01688c0f1caeb5e4320574fa6dbdbc0ec482dd28b879fc3c091ab9\": container with ID starting with ce1a8383fb01688c0f1caeb5e4320574fa6dbdbc0ec482dd28b879fc3c091ab9 not found: ID does not exist" containerID="ce1a8383fb01688c0f1caeb5e4320574fa6dbdbc0ec482dd28b879fc3c091ab9" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.214987 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce1a8383fb01688c0f1caeb5e4320574fa6dbdbc0ec482dd28b879fc3c091ab9"} err="failed to get container status \"ce1a8383fb01688c0f1caeb5e4320574fa6dbdbc0ec482dd28b879fc3c091ab9\": rpc error: code = NotFound desc = could not find container \"ce1a8383fb01688c0f1caeb5e4320574fa6dbdbc0ec482dd28b879fc3c091ab9\": container with ID starting with ce1a8383fb01688c0f1caeb5e4320574fa6dbdbc0ec482dd28b879fc3c091ab9 not found: ID does not exist" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.214999 5136 scope.go:117] "RemoveContainer" containerID="c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.229029 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.245886 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.251705 5136 scope.go:117] "RemoveContainer" 
containerID="3c2ce29bf443e1d81d3f6ce701a5e5efbf230b68ea1176fdf10bcba0865d6d77" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.253378 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.264413 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.294635 5136 scope.go:117] "RemoveContainer" containerID="c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe" Mar 20 09:03:00 crc kubenswrapper[5136]: E0320 09:03:00.295232 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe\": container with ID starting with c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe not found: ID does not exist" containerID="c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.295274 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe"} err="failed to get container status \"c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe\": rpc error: code = NotFound desc = could not find container \"c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe\": container with ID starting with c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe not found: ID does not exist" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.295305 5136 scope.go:117] "RemoveContainer" containerID="3c2ce29bf443e1d81d3f6ce701a5e5efbf230b68ea1176fdf10bcba0865d6d77" Mar 20 09:03:00 crc kubenswrapper[5136]: E0320 09:03:00.295786 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3c2ce29bf443e1d81d3f6ce701a5e5efbf230b68ea1176fdf10bcba0865d6d77\": container with ID starting with 3c2ce29bf443e1d81d3f6ce701a5e5efbf230b68ea1176fdf10bcba0865d6d77 not found: ID does not exist" containerID="3c2ce29bf443e1d81d3f6ce701a5e5efbf230b68ea1176fdf10bcba0865d6d77" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.295841 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c2ce29bf443e1d81d3f6ce701a5e5efbf230b68ea1176fdf10bcba0865d6d77"} err="failed to get container status \"3c2ce29bf443e1d81d3f6ce701a5e5efbf230b68ea1176fdf10bcba0865d6d77\": rpc error: code = NotFound desc = could not find container \"3c2ce29bf443e1d81d3f6ce701a5e5efbf230b68ea1176fdf10bcba0865d6d77\": container with ID starting with 3c2ce29bf443e1d81d3f6ce701a5e5efbf230b68ea1176fdf10bcba0865d6d77 not found: ID does not exist" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.295859 5136 scope.go:117] "RemoveContainer" containerID="43d8180654ac711b0a6c655f92be552ad6bb0d4e4426596385b695958afa2b74" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.329726 5136 scope.go:117] "RemoveContainer" containerID="491de9d04cd0e7b590564b9e33c6d2d10b2f1cc31960d95aac6f47504fb3ad0d" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.393254 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.413693 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e905e98-1ffd-4a08-bf51-e89f2d589595" path="/var/lib/kubelet/pods/0e905e98-1ffd-4a08-bf51-e89f2d589595/volumes" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.417576 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11508a60-8214-4811-898f-9542eee208d5" path="/var/lib/kubelet/pods/11508a60-8214-4811-898f-9542eee208d5/volumes" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.418740 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22659681-bc2b-4056-81d6-96b046e45712" path="/var/lib/kubelet/pods/22659681-bc2b-4056-81d6-96b046e45712/volumes" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.419445 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25dc915a-6dbf-4622-bd14-1b372cfe9acc" path="/var/lib/kubelet/pods/25dc915a-6dbf-4622-bd14-1b372cfe9acc/volumes" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.420781 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b" path="/var/lib/kubelet/pods/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b/volumes" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.422269 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fcd7752-be4a-45af-b12d-f4ee6275b3b3" path="/var/lib/kubelet/pods/6fcd7752-be4a-45af-b12d-f4ee6275b3b3/volumes" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.423512 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cf0c76a-c284-44b5-9aee-293de926cb90" path="/var/lib/kubelet/pods/9cf0c76a-c284-44b5-9aee-293de926cb90/volumes" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.424672 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fe5d992-c030-4957-8388-763c8fa32d22" 
path="/var/lib/kubelet/pods/9fe5d992-c030-4957-8388-763c8fa32d22/volumes" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.425887 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d397a968-433e-4de9-8ed7-d0247aa5e775" path="/var/lib/kubelet/pods/d397a968-433e-4de9-8ed7-d0247aa5e775/volumes" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.427058 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2c9ab46-3143-4472-a606-cd75def78f41" path="/var/lib/kubelet/pods/e2c9ab46-3143-4472-a606-cd75def78f41/volumes" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.428464 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fba581c3-e77a-4db7-ac50-bdb17291b2c7" path="/var/lib/kubelet/pods/fba581c3-e77a-4db7-ac50-bdb17291b2c7/volumes" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.519639 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-server-conf\") pod \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.519690 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-pod-info\") pod \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.519721 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-plugins-conf\") pod \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.519741 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-erlang-cookie\") pod \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.519759 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-plugins\") pod \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.519788 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-erlang-cookie-secret\") pod \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.519836 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfqqz\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-kube-api-access-vfqqz\") pod \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.519863 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-tls\") pod \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.519961 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-confd\") pod \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\" (UID: 
\"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.520366 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") pod \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.520389 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data\") pod \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.525591 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.526001 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.526062 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-kube-api-access-vfqqz" (OuterVolumeSpecName: "kube-api-access-vfqqz") pod "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7"). InnerVolumeSpecName "kube-api-access-vfqqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.526519 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.526903 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-pod-info" (OuterVolumeSpecName: "pod-info") pod "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.526929 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.530198 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.548536 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0" (OuterVolumeSpecName: "persistence") pod "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7"). InnerVolumeSpecName "pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.558251 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data" (OuterVolumeSpecName: "config-data") pod "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.582573 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-server-conf" (OuterVolumeSpecName: "server-conf") pod "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.598289 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.622342 5136 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.622393 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfqqz\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-kube-api-access-vfqqz\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.622406 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.622431 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") on node \"crc\" " Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.622443 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.622452 5136 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-server-conf\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.622461 5136 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-pod-info\") on node \"crc\" DevicePath \"\"" Mar 20 
09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.622469 5136 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.622478 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.622487 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.643214 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.643448 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.643562 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0") on node "crc" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.723619 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-combined-ca-bundle\") pod \"6492170d-c425-4bc1-8f26-b002ade2a30a\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.723702 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-fernet-keys\") pod \"6492170d-c425-4bc1-8f26-b002ade2a30a\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.723752 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-internal-tls-certs\") pod \"6492170d-c425-4bc1-8f26-b002ade2a30a\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.723791 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-config-data\") pod \"6492170d-c425-4bc1-8f26-b002ade2a30a\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.723835 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-scripts\") pod \"6492170d-c425-4bc1-8f26-b002ade2a30a\" (UID: 
\"6492170d-c425-4bc1-8f26-b002ade2a30a\") " Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.723865 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-public-tls-certs\") pod \"6492170d-c425-4bc1-8f26-b002ade2a30a\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.723921 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-credential-keys\") pod \"6492170d-c425-4bc1-8f26-b002ade2a30a\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.724069 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt2qd\" (UniqueName: \"kubernetes.io/projected/6492170d-c425-4bc1-8f26-b002ade2a30a-kube-api-access-jt2qd\") pod \"6492170d-c425-4bc1-8f26-b002ade2a30a\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.724432 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.724457 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.727803 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6492170d-c425-4bc1-8f26-b002ade2a30a-kube-api-access-jt2qd" (OuterVolumeSpecName: "kube-api-access-jt2qd") pod 
"6492170d-c425-4bc1-8f26-b002ade2a30a" (UID: "6492170d-c425-4bc1-8f26-b002ade2a30a"). InnerVolumeSpecName "kube-api-access-jt2qd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.728396 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6492170d-c425-4bc1-8f26-b002ade2a30a" (UID: "6492170d-c425-4bc1-8f26-b002ade2a30a"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.730166 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-scripts" (OuterVolumeSpecName: "scripts") pod "6492170d-c425-4bc1-8f26-b002ade2a30a" (UID: "6492170d-c425-4bc1-8f26-b002ade2a30a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.730292 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6492170d-c425-4bc1-8f26-b002ade2a30a" (UID: "6492170d-c425-4bc1-8f26-b002ade2a30a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.749226 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-config-data" (OuterVolumeSpecName: "config-data") pod "6492170d-c425-4bc1-8f26-b002ade2a30a" (UID: "6492170d-c425-4bc1-8f26-b002ade2a30a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.772981 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6492170d-c425-4bc1-8f26-b002ade2a30a" (UID: "6492170d-c425-4bc1-8f26-b002ade2a30a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.789917 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6492170d-c425-4bc1-8f26-b002ade2a30a" (UID: "6492170d-c425-4bc1-8f26-b002ade2a30a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.808123 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6492170d-c425-4bc1-8f26-b002ade2a30a" (UID: "6492170d-c425-4bc1-8f26-b002ade2a30a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.825616 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt2qd\" (UniqueName: \"kubernetes.io/projected/6492170d-c425-4bc1-8f26-b002ade2a30a-kube-api-access-jt2qd\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.825652 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.825661 5136 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.825672 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.825682 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.825689 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.825698 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.825705 5136 
reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.182162 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7","Type":"ContainerDied","Data":"2628ea0efb2aa853724bc88efed7cc193022127cf5eeb6dafde84a174a83f933"} Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.182216 5136 scope.go:117] "RemoveContainer" containerID="55989c472a0077640a315a8d0db45eb57abfdba8bbd4415c0bb59c9d232cd911" Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.182227 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.186123 5136 generic.go:334] "Generic (PLEG): container finished" podID="6492170d-c425-4bc1-8f26-b002ade2a30a" containerID="0f88ab7221b1cadfaad64dc8257c4ff6e40bf8acf20bf20f68729db269cd8c5b" exitCode=0 Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.186156 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-69dd969bf5-bw8cr" event={"ID":"6492170d-c425-4bc1-8f26-b002ade2a30a","Type":"ContainerDied","Data":"0f88ab7221b1cadfaad64dc8257c4ff6e40bf8acf20bf20f68729db269cd8c5b"} Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.186188 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-69dd969bf5-bw8cr" event={"ID":"6492170d-c425-4bc1-8f26-b002ade2a30a","Type":"ContainerDied","Data":"8f7c1605d17f9d449c5a1d9a15decff286543015c063129a09c0f97fced38720"} Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.186202 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.192917 5136 generic.go:334] "Generic (PLEG): container finished" podID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerID="7e50b645c14d1546ec6ea5f4cc398a09d244fd42ce33f42ece0b4ffe6904f5e2" exitCode=0 Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.192957 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"040731fb-85ee-40ac-9ea2-3627a5f48766","Type":"ContainerDied","Data":"7e50b645c14d1546ec6ea5f4cc398a09d244fd42ce33f42ece0b4ffe6904f5e2"} Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.213621 5136 scope.go:117] "RemoveContainer" containerID="9dce885b2b155ce206b241a4559ff57088997d22aa0ed36e735fad7b8993132d" Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.232964 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-69dd969bf5-bw8cr"] Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.245961 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-69dd969bf5-bw8cr"] Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.254037 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.259737 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.272602 5136 scope.go:117] "RemoveContainer" containerID="0f88ab7221b1cadfaad64dc8257c4ff6e40bf8acf20bf20f68729db269cd8c5b" Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.291923 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.307375 5136 scope.go:117] "RemoveContainer" containerID="0f88ab7221b1cadfaad64dc8257c4ff6e40bf8acf20bf20f68729db269cd8c5b" Mar 20 09:03:01 crc kubenswrapper[5136]: E0320 09:03:01.308116 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f88ab7221b1cadfaad64dc8257c4ff6e40bf8acf20bf20f68729db269cd8c5b\": container with ID starting with 0f88ab7221b1cadfaad64dc8257c4ff6e40bf8acf20bf20f68729db269cd8c5b not found: ID does not exist" containerID="0f88ab7221b1cadfaad64dc8257c4ff6e40bf8acf20bf20f68729db269cd8c5b" Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.308164 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f88ab7221b1cadfaad64dc8257c4ff6e40bf8acf20bf20f68729db269cd8c5b"} err="failed to get container status \"0f88ab7221b1cadfaad64dc8257c4ff6e40bf8acf20bf20f68729db269cd8c5b\": rpc error: code = NotFound desc = could not find container \"0f88ab7221b1cadfaad64dc8257c4ff6e40bf8acf20bf20f68729db269cd8c5b\": container with ID starting with 0f88ab7221b1cadfaad64dc8257c4ff6e40bf8acf20bf20f68729db269cd8c5b not found: ID does not exist" Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.334437 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-internal-tls-certs\") pod \"040731fb-85ee-40ac-9ea2-3627a5f48766\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.334604 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-config-data\") pod \"040731fb-85ee-40ac-9ea2-3627a5f48766\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " Mar 20 
09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.334693 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-combined-ca-bundle\") pod \"040731fb-85ee-40ac-9ea2-3627a5f48766\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.334743 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-scripts\") pod \"040731fb-85ee-40ac-9ea2-3627a5f48766\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.334910 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf792\" (UniqueName: \"kubernetes.io/projected/040731fb-85ee-40ac-9ea2-3627a5f48766-kube-api-access-hf792\") pod \"040731fb-85ee-40ac-9ea2-3627a5f48766\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.334941 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-public-tls-certs\") pod \"040731fb-85ee-40ac-9ea2-3627a5f48766\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.342792 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/040731fb-85ee-40ac-9ea2-3627a5f48766-kube-api-access-hf792" (OuterVolumeSpecName: "kube-api-access-hf792") pod "040731fb-85ee-40ac-9ea2-3627a5f48766" (UID: "040731fb-85ee-40ac-9ea2-3627a5f48766"). InnerVolumeSpecName "kube-api-access-hf792". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.343322 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-scripts" (OuterVolumeSpecName: "scripts") pod "040731fb-85ee-40ac-9ea2-3627a5f48766" (UID: "040731fb-85ee-40ac-9ea2-3627a5f48766"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.373346 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "040731fb-85ee-40ac-9ea2-3627a5f48766" (UID: "040731fb-85ee-40ac-9ea2-3627a5f48766"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.377385 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "040731fb-85ee-40ac-9ea2-3627a5f48766" (UID: "040731fb-85ee-40ac-9ea2-3627a5f48766"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.405447 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "040731fb-85ee-40ac-9ea2-3627a5f48766" (UID: "040731fb-85ee-40ac-9ea2-3627a5f48766"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.411306 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-config-data" (OuterVolumeSpecName: "config-data") pod "040731fb-85ee-40ac-9ea2-3627a5f48766" (UID: "040731fb-85ee-40ac-9ea2-3627a5f48766"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.436677 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.436858 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.436951 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf792\" (UniqueName: \"kubernetes.io/projected/040731fb-85ee-40ac-9ea2-3627a5f48766-kube-api-access-hf792\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.437011 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.437074 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.437134 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:02 crc kubenswrapper[5136]: I0320 09:03:02.207145 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"040731fb-85ee-40ac-9ea2-3627a5f48766","Type":"ContainerDied","Data":"c5f3a5b62a724af9b3292dfbea60cc84cb5ca65e111a8f8018f79664063a08d4"} Mar 20 09:03:02 crc kubenswrapper[5136]: I0320 09:03:02.207199 5136 scope.go:117] "RemoveContainer" containerID="7e50b645c14d1546ec6ea5f4cc398a09d244fd42ce33f42ece0b4ffe6904f5e2" Mar 20 09:03:02 crc kubenswrapper[5136]: I0320 09:03:02.207239 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 20 09:03:02 crc kubenswrapper[5136]: I0320 09:03:02.263302 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 20 09:03:02 crc kubenswrapper[5136]: I0320 09:03:02.266381 5136 scope.go:117] "RemoveContainer" containerID="dcd83e13ab91d1b3d212755c407d51e55f888a96ac55ba1a109bfc09166fa35d" Mar 20 09:03:02 crc kubenswrapper[5136]: I0320 09:03:02.274173 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Mar 20 09:03:02 crc kubenswrapper[5136]: I0320 09:03:02.290390 5136 scope.go:117] "RemoveContainer" containerID="2ead91f10403f4d804be86964d57e08ade2602b5155572d57113b707313fe0a4" Mar 20 09:03:02 crc kubenswrapper[5136]: I0320 09:03:02.313345 5136 scope.go:117] "RemoveContainer" containerID="7cef857842ff9d2b9ec6fba6fced2a4a47da1ca6826d17831d4379411662258d" Mar 20 09:03:02 crc kubenswrapper[5136]: I0320 09:03:02.406179 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" path="/var/lib/kubelet/pods/040731fb-85ee-40ac-9ea2-3627a5f48766/volumes" Mar 20 09:03:02 crc kubenswrapper[5136]: I0320 09:03:02.407144 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6492170d-c425-4bc1-8f26-b002ade2a30a" 
path="/var/lib/kubelet/pods/6492170d-c425-4bc1-8f26-b002ade2a30a/volumes" Mar 20 09:03:02 crc kubenswrapper[5136]: I0320 09:03:02.407770 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" path="/var/lib/kubelet/pods/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7/volumes" Mar 20 09:03:03 crc kubenswrapper[5136]: I0320 09:03:03.375788 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Mar 20 09:03:03 crc kubenswrapper[5136]: I0320 09:03:03.376319 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-copy-data" podUID="30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226" containerName="adoption" containerID="cri-o://a92d1246191abb0ecac747ee2b70c76f1c8719ff40ea6ea858ab04c5eaa88bb1" gracePeriod=30 Mar 20 09:03:03 crc kubenswrapper[5136]: I0320 09:03:03.660190 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Mar 20 09:03:03 crc kubenswrapper[5136]: I0320 09:03:03.660422 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-copy-data" podUID="c068d291-989b-4247-8cee-0596033c8ce5" containerName="adoption" containerID="cri-o://b41467e56da65e5424050a8c3c0bd7b4c18471ad0ef23aa5de432ccb8a7ee386" gracePeriod=30 Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.239525 5136 generic.go:334] "Generic (PLEG): container finished" podID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerID="d9e0d70b1ab5d2268043ec21cc179228d03e79f0c594fbabbe78f8b02d15cad9" exitCode=0 Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.239581 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dbff142-083b-40b7-a0d7-3f17fa9810e3","Type":"ContainerDied","Data":"d9e0d70b1ab5d2268043ec21cc179228d03e79f0c594fbabbe78f8b02d15cad9"} Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.315068 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.387177 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-scripts\") pod \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.387242 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dbff142-083b-40b7-a0d7-3f17fa9810e3-log-httpd\") pod \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.387272 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-sg-core-conf-yaml\") pod \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.387320 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-config-data\") pod \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.387414 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj2tp\" (UniqueName: \"kubernetes.io/projected/7dbff142-083b-40b7-a0d7-3f17fa9810e3-kube-api-access-lj2tp\") pod \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.388065 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7dbff142-083b-40b7-a0d7-3f17fa9810e3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7dbff142-083b-40b7-a0d7-3f17fa9810e3" (UID: "7dbff142-083b-40b7-a0d7-3f17fa9810e3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.388447 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-ceilometer-tls-certs\") pod \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.388513 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-combined-ca-bundle\") pod \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.388559 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dbff142-083b-40b7-a0d7-3f17fa9810e3-run-httpd\") pod \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.388921 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dbff142-083b-40b7-a0d7-3f17fa9810e3-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.389149 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dbff142-083b-40b7-a0d7-3f17fa9810e3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7dbff142-083b-40b7-a0d7-3f17fa9810e3" (UID: "7dbff142-083b-40b7-a0d7-3f17fa9810e3"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.405242 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-scripts" (OuterVolumeSpecName: "scripts") pod "7dbff142-083b-40b7-a0d7-3f17fa9810e3" (UID: "7dbff142-083b-40b7-a0d7-3f17fa9810e3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.437168 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dbff142-083b-40b7-a0d7-3f17fa9810e3-kube-api-access-lj2tp" (OuterVolumeSpecName: "kube-api-access-lj2tp") pod "7dbff142-083b-40b7-a0d7-3f17fa9810e3" (UID: "7dbff142-083b-40b7-a0d7-3f17fa9810e3"). InnerVolumeSpecName "kube-api-access-lj2tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.486965 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7dbff142-083b-40b7-a0d7-3f17fa9810e3" (UID: "7dbff142-083b-40b7-a0d7-3f17fa9810e3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.489781 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.489806 5136 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.489831 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj2tp\" (UniqueName: \"kubernetes.io/projected/7dbff142-083b-40b7-a0d7-3f17fa9810e3-kube-api-access-lj2tp\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.489842 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dbff142-083b-40b7-a0d7-3f17fa9810e3-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.534982 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7dbff142-083b-40b7-a0d7-3f17fa9810e3" (UID: "7dbff142-083b-40b7-a0d7-3f17fa9810e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.542017 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "7dbff142-083b-40b7-a0d7-3f17fa9810e3" (UID: "7dbff142-083b-40b7-a0d7-3f17fa9810e3"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.575372 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-config-data" (OuterVolumeSpecName: "config-data") pod "7dbff142-083b-40b7-a0d7-3f17fa9810e3" (UID: "7dbff142-083b-40b7-a0d7-3f17fa9810e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.591574 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.591607 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.591616 5136 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:05 crc kubenswrapper[5136]: I0320 09:03:05.251287 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dbff142-083b-40b7-a0d7-3f17fa9810e3","Type":"ContainerDied","Data":"5dafeee7b076cbe9bc559d7bf9a4dfd78115dffe5bc845019a4034a5c5f12b09"} Mar 20 09:03:05 crc kubenswrapper[5136]: I0320 09:03:05.251571 5136 scope.go:117] "RemoveContainer" containerID="65f0fcd421f6ec548cf9de08170a35a1209f40b76c2fe57dae5b8d4eb78f76fb" Mar 20 09:03:05 crc kubenswrapper[5136]: I0320 09:03:05.251366 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 09:03:05 crc kubenswrapper[5136]: I0320 09:03:05.282766 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 09:03:05 crc kubenswrapper[5136]: I0320 09:03:05.294602 5136 scope.go:117] "RemoveContainer" containerID="22214e3addd0a5c3b338ef171790692102833676b69426afd997304cb1243d2d" Mar 20 09:03:05 crc kubenswrapper[5136]: I0320 09:03:05.296782 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 20 09:03:05 crc kubenswrapper[5136]: I0320 09:03:05.313867 5136 scope.go:117] "RemoveContainer" containerID="d9e0d70b1ab5d2268043ec21cc179228d03e79f0c594fbabbe78f8b02d15cad9" Mar 20 09:03:05 crc kubenswrapper[5136]: I0320 09:03:05.331058 5136 scope.go:117] "RemoveContainer" containerID="bdb39aa61401fd83441079957dca9d830d4f38dae3c0e327bcfc878794649036" Mar 20 09:03:05 crc kubenswrapper[5136]: E0320 09:03:05.590120 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:03:05 crc kubenswrapper[5136]: E0320 09:03:05.592099 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:03:05 crc kubenswrapper[5136]: E0320 09:03:05.593226 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:03:05 crc kubenswrapper[5136]: E0320 09:03:05.593293 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7659754fcd-klwkv" podUID="53ac16e5-846e-40c1-a361-0815d231345a" containerName="heat-engine" Mar 20 09:03:06 crc kubenswrapper[5136]: I0320 09:03:06.414386 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" path="/var/lib/kubelet/pods/7dbff142-083b-40b7-a0d7-3f17fa9810e3/volumes" Mar 20 09:03:06 crc kubenswrapper[5136]: I0320 09:03:06.778941 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-55ffc4694-d4d2v" podUID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.156:8443: connect: connection refused" Mar 20 09:03:08 crc kubenswrapper[5136]: E0320 09:03:08.016483 5136 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod305f3f22_2f38_44c5_8e63_1f028edce331.slice/crio-conmon-293a5f06d0837fa0b5aa6b166b5c1bc91790dda04631163e44e07b267901142a.scope\": RecentStats: unable to find data in memory cache]" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.278076 5136 generic.go:334] "Generic (PLEG): container finished" podID="305f3f22-2f38-44c5-8e63-1f028edce331" containerID="293a5f06d0837fa0b5aa6b166b5c1bc91790dda04631163e44e07b267901142a" exitCode=0 Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.278141 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b494fbb57-cd7nw" 
event={"ID":"305f3f22-2f38-44c5-8e63-1f028edce331","Type":"ContainerDied","Data":"293a5f06d0837fa0b5aa6b166b5c1bc91790dda04631163e44e07b267901142a"} Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.339145 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b494fbb57-cd7nw" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.444285 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-ovndb-tls-certs\") pod \"305f3f22-2f38-44c5-8e63-1f028edce331\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.444458 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-config\") pod \"305f3f22-2f38-44c5-8e63-1f028edce331\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.444534 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-httpd-config\") pod \"305f3f22-2f38-44c5-8e63-1f028edce331\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.444561 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-internal-tls-certs\") pod \"305f3f22-2f38-44c5-8e63-1f028edce331\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.444598 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-public-tls-certs\") pod 
\"305f3f22-2f38-44c5-8e63-1f028edce331\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.444623 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-combined-ca-bundle\") pod \"305f3f22-2f38-44c5-8e63-1f028edce331\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.444689 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn4qs\" (UniqueName: \"kubernetes.io/projected/305f3f22-2f38-44c5-8e63-1f028edce331-kube-api-access-rn4qs\") pod \"305f3f22-2f38-44c5-8e63-1f028edce331\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.457015 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "305f3f22-2f38-44c5-8e63-1f028edce331" (UID: "305f3f22-2f38-44c5-8e63-1f028edce331"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.457040 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/305f3f22-2f38-44c5-8e63-1f028edce331-kube-api-access-rn4qs" (OuterVolumeSpecName: "kube-api-access-rn4qs") pod "305f3f22-2f38-44c5-8e63-1f028edce331" (UID: "305f3f22-2f38-44c5-8e63-1f028edce331"). InnerVolumeSpecName "kube-api-access-rn4qs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.481974 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "305f3f22-2f38-44c5-8e63-1f028edce331" (UID: "305f3f22-2f38-44c5-8e63-1f028edce331"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.483740 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "305f3f22-2f38-44c5-8e63-1f028edce331" (UID: "305f3f22-2f38-44c5-8e63-1f028edce331"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.486426 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-config" (OuterVolumeSpecName: "config") pod "305f3f22-2f38-44c5-8e63-1f028edce331" (UID: "305f3f22-2f38-44c5-8e63-1f028edce331"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.488126 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "305f3f22-2f38-44c5-8e63-1f028edce331" (UID: "305f3f22-2f38-44c5-8e63-1f028edce331"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.503115 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "305f3f22-2f38-44c5-8e63-1f028edce331" (UID: "305f3f22-2f38-44c5-8e63-1f028edce331"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.553765 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.553811 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.553849 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.553858 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.553868 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn4qs\" (UniqueName: \"kubernetes.io/projected/305f3f22-2f38-44c5-8e63-1f028edce331-kube-api-access-rn4qs\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.553877 5136 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.553885 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:09 crc kubenswrapper[5136]: I0320 09:03:09.288864 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b494fbb57-cd7nw" event={"ID":"305f3f22-2f38-44c5-8e63-1f028edce331","Type":"ContainerDied","Data":"98850495a9f03337c375a82dd9c14c9acf7e4cb2584c596cc8791bfade8a3bf0"} Mar 20 09:03:09 crc kubenswrapper[5136]: I0320 09:03:09.288907 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b494fbb57-cd7nw" Mar 20 09:03:09 crc kubenswrapper[5136]: I0320 09:03:09.288934 5136 scope.go:117] "RemoveContainer" containerID="b54ae1c896c24440630a7756d526255f3def96dbed5cb096fc4d77997e706367" Mar 20 09:03:09 crc kubenswrapper[5136]: I0320 09:03:09.324811 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7b494fbb57-cd7nw"] Mar 20 09:03:09 crc kubenswrapper[5136]: I0320 09:03:09.326594 5136 scope.go:117] "RemoveContainer" containerID="293a5f06d0837fa0b5aa6b166b5c1bc91790dda04631163e44e07b267901142a" Mar 20 09:03:09 crc kubenswrapper[5136]: I0320 09:03:09.329973 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7b494fbb57-cd7nw"] Mar 20 09:03:10 crc kubenswrapper[5136]: I0320 09:03:10.429260 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="305f3f22-2f38-44c5-8e63-1f028edce331" path="/var/lib/kubelet/pods/305f3f22-2f38-44c5-8e63-1f028edce331/volumes" Mar 20 09:03:15 crc kubenswrapper[5136]: E0320 09:03:15.590646 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:03:15 crc kubenswrapper[5136]: E0320 09:03:15.593806 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:03:15 crc kubenswrapper[5136]: E0320 09:03:15.595541 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:03:15 crc kubenswrapper[5136]: E0320 09:03:15.595667 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7659754fcd-klwkv" podUID="53ac16e5-846e-40c1-a361-0815d231345a" containerName="heat-engine" Mar 20 09:03:15 crc kubenswrapper[5136]: I0320 09:03:15.822003 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 09:03:15 crc kubenswrapper[5136]: I0320 09:03:15.822062 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 09:03:16 crc kubenswrapper[5136]: I0320 09:03:16.779966 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-55ffc4694-d4d2v" podUID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.156:8443: connect: connection refused" Mar 20 09:03:16 crc kubenswrapper[5136]: I0320 09:03:16.780133 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 09:03:21 crc kubenswrapper[5136]: I0320 09:03:21.898520 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.059320 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slc4h\" (UniqueName: \"kubernetes.io/projected/1da401a4-384d-4911-bf25-0aa4c544fd0d-kube-api-access-slc4h\") pod \"1da401a4-384d-4911-bf25-0aa4c544fd0d\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.059415 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-horizon-secret-key\") pod \"1da401a4-384d-4911-bf25-0aa4c544fd0d\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.059493 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-combined-ca-bundle\") pod \"1da401a4-384d-4911-bf25-0aa4c544fd0d\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.059533 
5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da401a4-384d-4911-bf25-0aa4c544fd0d-logs\") pod \"1da401a4-384d-4911-bf25-0aa4c544fd0d\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.059573 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1da401a4-384d-4911-bf25-0aa4c544fd0d-config-data\") pod \"1da401a4-384d-4911-bf25-0aa4c544fd0d\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.059615 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1da401a4-384d-4911-bf25-0aa4c544fd0d-scripts\") pod \"1da401a4-384d-4911-bf25-0aa4c544fd0d\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.059684 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-horizon-tls-certs\") pod \"1da401a4-384d-4911-bf25-0aa4c544fd0d\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.060642 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1da401a4-384d-4911-bf25-0aa4c544fd0d-logs" (OuterVolumeSpecName: "logs") pod "1da401a4-384d-4911-bf25-0aa4c544fd0d" (UID: "1da401a4-384d-4911-bf25-0aa4c544fd0d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.066619 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1da401a4-384d-4911-bf25-0aa4c544fd0d-kube-api-access-slc4h" (OuterVolumeSpecName: "kube-api-access-slc4h") pod "1da401a4-384d-4911-bf25-0aa4c544fd0d" (UID: "1da401a4-384d-4911-bf25-0aa4c544fd0d"). InnerVolumeSpecName "kube-api-access-slc4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.067490 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1da401a4-384d-4911-bf25-0aa4c544fd0d" (UID: "1da401a4-384d-4911-bf25-0aa4c544fd0d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.091132 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1da401a4-384d-4911-bf25-0aa4c544fd0d" (UID: "1da401a4-384d-4911-bf25-0aa4c544fd0d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.091249 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1da401a4-384d-4911-bf25-0aa4c544fd0d-config-data" (OuterVolumeSpecName: "config-data") pod "1da401a4-384d-4911-bf25-0aa4c544fd0d" (UID: "1da401a4-384d-4911-bf25-0aa4c544fd0d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.101092 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1da401a4-384d-4911-bf25-0aa4c544fd0d-scripts" (OuterVolumeSpecName: "scripts") pod "1da401a4-384d-4911-bf25-0aa4c544fd0d" (UID: "1da401a4-384d-4911-bf25-0aa4c544fd0d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.107495 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "1da401a4-384d-4911-bf25-0aa4c544fd0d" (UID: "1da401a4-384d-4911-bf25-0aa4c544fd0d"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.161626 5136 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.161656 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.161666 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da401a4-384d-4911-bf25-0aa4c544fd0d-logs\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.161675 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1da401a4-384d-4911-bf25-0aa4c544fd0d-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:22 crc 
kubenswrapper[5136]: I0320 09:03:22.161685 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1da401a4-384d-4911-bf25-0aa4c544fd0d-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.161693 5136 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.161702 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slc4h\" (UniqueName: \"kubernetes.io/projected/1da401a4-384d-4911-bf25-0aa4c544fd0d-kube-api-access-slc4h\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.365497 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.409254 5136 generic.go:334] "Generic (PLEG): container finished" podID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerID="635dca94b151651e4c36b823760043aacfc192a7da55106aa1e62d819abeafaa" exitCode=137 Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.409352 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.411831 5136 generic.go:334] "Generic (PLEG): container finished" podID="7be786a7-1dee-4cfb-bada-4883a9326c71" containerID="49d2fddd75a8e57b3d4192156f68050a117173a4bf5a979458cf0d3988bdad77" exitCode=137 Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.411910 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.412548 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55ffc4694-d4d2v" event={"ID":"1da401a4-384d-4911-bf25-0aa4c544fd0d","Type":"ContainerDied","Data":"635dca94b151651e4c36b823760043aacfc192a7da55106aa1e62d819abeafaa"} Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.412703 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55ffc4694-d4d2v" event={"ID":"1da401a4-384d-4911-bf25-0aa4c544fd0d","Type":"ContainerDied","Data":"a94c046fe10c34e65503024040ef0cdbc5574ea11bd41c94fb47af937848986b"} Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.412800 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7be786a7-1dee-4cfb-bada-4883a9326c71","Type":"ContainerDied","Data":"49d2fddd75a8e57b3d4192156f68050a117173a4bf5a979458cf0d3988bdad77"} Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.412922 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7be786a7-1dee-4cfb-bada-4883a9326c71","Type":"ContainerDied","Data":"902c62cf1140494c6f31a74b7c931afaffaa08d6e8a6315048461c8df99fb197"} Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.413020 5136 scope.go:117] "RemoveContainer" containerID="5d097a32fad24f683773582349a5bd70dc46ca5a6f4845399b1376ac61baa60d" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.446655 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-55ffc4694-d4d2v"] Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.452641 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-55ffc4694-d4d2v"] Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.465405 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-scripts\") pod \"7be786a7-1dee-4cfb-bada-4883a9326c71\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.465458 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-combined-ca-bundle\") pod \"7be786a7-1dee-4cfb-bada-4883a9326c71\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.465503 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-config-data-custom\") pod \"7be786a7-1dee-4cfb-bada-4883a9326c71\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.465533 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7be786a7-1dee-4cfb-bada-4883a9326c71-etc-machine-id\") pod \"7be786a7-1dee-4cfb-bada-4883a9326c71\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.465636 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-config-data\") pod \"7be786a7-1dee-4cfb-bada-4883a9326c71\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.465707 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mw7n\" (UniqueName: \"kubernetes.io/projected/7be786a7-1dee-4cfb-bada-4883a9326c71-kube-api-access-5mw7n\") pod \"7be786a7-1dee-4cfb-bada-4883a9326c71\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 
09:03:22.465765 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7be786a7-1dee-4cfb-bada-4883a9326c71-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7be786a7-1dee-4cfb-bada-4883a9326c71" (UID: "7be786a7-1dee-4cfb-bada-4883a9326c71"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.466084 5136 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7be786a7-1dee-4cfb-bada-4883a9326c71-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.470030 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7be786a7-1dee-4cfb-bada-4883a9326c71" (UID: "7be786a7-1dee-4cfb-bada-4883a9326c71"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.470064 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7be786a7-1dee-4cfb-bada-4883a9326c71-kube-api-access-5mw7n" (OuterVolumeSpecName: "kube-api-access-5mw7n") pod "7be786a7-1dee-4cfb-bada-4883a9326c71" (UID: "7be786a7-1dee-4cfb-bada-4883a9326c71"). InnerVolumeSpecName "kube-api-access-5mw7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.470086 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-scripts" (OuterVolumeSpecName: "scripts") pod "7be786a7-1dee-4cfb-bada-4883a9326c71" (UID: "7be786a7-1dee-4cfb-bada-4883a9326c71"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.499612 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7be786a7-1dee-4cfb-bada-4883a9326c71" (UID: "7be786a7-1dee-4cfb-bada-4883a9326c71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.545797 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-config-data" (OuterVolumeSpecName: "config-data") pod "7be786a7-1dee-4cfb-bada-4883a9326c71" (UID: "7be786a7-1dee-4cfb-bada-4883a9326c71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.568053 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.568089 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.568104 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.568115 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:22 
crc kubenswrapper[5136]: I0320 09:03:22.568128 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mw7n\" (UniqueName: \"kubernetes.io/projected/7be786a7-1dee-4cfb-bada-4883a9326c71-kube-api-access-5mw7n\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.576529 5136 scope.go:117] "RemoveContainer" containerID="635dca94b151651e4c36b823760043aacfc192a7da55106aa1e62d819abeafaa" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.632210 5136 scope.go:117] "RemoveContainer" containerID="5d097a32fad24f683773582349a5bd70dc46ca5a6f4845399b1376ac61baa60d" Mar 20 09:03:22 crc kubenswrapper[5136]: E0320 09:03:22.632922 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d097a32fad24f683773582349a5bd70dc46ca5a6f4845399b1376ac61baa60d\": container with ID starting with 5d097a32fad24f683773582349a5bd70dc46ca5a6f4845399b1376ac61baa60d not found: ID does not exist" containerID="5d097a32fad24f683773582349a5bd70dc46ca5a6f4845399b1376ac61baa60d" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.633009 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d097a32fad24f683773582349a5bd70dc46ca5a6f4845399b1376ac61baa60d"} err="failed to get container status \"5d097a32fad24f683773582349a5bd70dc46ca5a6f4845399b1376ac61baa60d\": rpc error: code = NotFound desc = could not find container \"5d097a32fad24f683773582349a5bd70dc46ca5a6f4845399b1376ac61baa60d\": container with ID starting with 5d097a32fad24f683773582349a5bd70dc46ca5a6f4845399b1376ac61baa60d not found: ID does not exist" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.633061 5136 scope.go:117] "RemoveContainer" containerID="635dca94b151651e4c36b823760043aacfc192a7da55106aa1e62d819abeafaa" Mar 20 09:03:22 crc kubenswrapper[5136]: E0320 09:03:22.633510 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"635dca94b151651e4c36b823760043aacfc192a7da55106aa1e62d819abeafaa\": container with ID starting with 635dca94b151651e4c36b823760043aacfc192a7da55106aa1e62d819abeafaa not found: ID does not exist" containerID="635dca94b151651e4c36b823760043aacfc192a7da55106aa1e62d819abeafaa"
Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.633613 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"635dca94b151651e4c36b823760043aacfc192a7da55106aa1e62d819abeafaa"} err="failed to get container status \"635dca94b151651e4c36b823760043aacfc192a7da55106aa1e62d819abeafaa\": rpc error: code = NotFound desc = could not find container \"635dca94b151651e4c36b823760043aacfc192a7da55106aa1e62d819abeafaa\": container with ID starting with 635dca94b151651e4c36b823760043aacfc192a7da55106aa1e62d819abeafaa not found: ID does not exist"
Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.633659 5136 scope.go:117] "RemoveContainer" containerID="b04544c3c8633753614a0867721f5e1c8cbd9623a90831589d5ac39548c77e5a"
Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.656589 5136 scope.go:117] "RemoveContainer" containerID="49d2fddd75a8e57b3d4192156f68050a117173a4bf5a979458cf0d3988bdad77"
Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.672275 5136 scope.go:117] "RemoveContainer" containerID="b04544c3c8633753614a0867721f5e1c8cbd9623a90831589d5ac39548c77e5a"
Mar 20 09:03:22 crc kubenswrapper[5136]: E0320 09:03:22.672563 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b04544c3c8633753614a0867721f5e1c8cbd9623a90831589d5ac39548c77e5a\": container with ID starting with b04544c3c8633753614a0867721f5e1c8cbd9623a90831589d5ac39548c77e5a not found: ID does not exist" containerID="b04544c3c8633753614a0867721f5e1c8cbd9623a90831589d5ac39548c77e5a"
Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.672591 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b04544c3c8633753614a0867721f5e1c8cbd9623a90831589d5ac39548c77e5a"} err="failed to get container status \"b04544c3c8633753614a0867721f5e1c8cbd9623a90831589d5ac39548c77e5a\": rpc error: code = NotFound desc = could not find container \"b04544c3c8633753614a0867721f5e1c8cbd9623a90831589d5ac39548c77e5a\": container with ID starting with b04544c3c8633753614a0867721f5e1c8cbd9623a90831589d5ac39548c77e5a not found: ID does not exist"
Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.672609 5136 scope.go:117] "RemoveContainer" containerID="49d2fddd75a8e57b3d4192156f68050a117173a4bf5a979458cf0d3988bdad77"
Mar 20 09:03:22 crc kubenswrapper[5136]: E0320 09:03:22.672913 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49d2fddd75a8e57b3d4192156f68050a117173a4bf5a979458cf0d3988bdad77\": container with ID starting with 49d2fddd75a8e57b3d4192156f68050a117173a4bf5a979458cf0d3988bdad77 not found: ID does not exist" containerID="49d2fddd75a8e57b3d4192156f68050a117173a4bf5a979458cf0d3988bdad77"
Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.672938 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49d2fddd75a8e57b3d4192156f68050a117173a4bf5a979458cf0d3988bdad77"} err="failed to get container status \"49d2fddd75a8e57b3d4192156f68050a117173a4bf5a979458cf0d3988bdad77\": rpc error: code = NotFound desc = could not find container \"49d2fddd75a8e57b3d4192156f68050a117173a4bf5a979458cf0d3988bdad77\": container with ID starting with 49d2fddd75a8e57b3d4192156f68050a117173a4bf5a979458cf0d3988bdad77 not found: ID does not exist"
Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.749093 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.757676 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Mar 20 09:03:24 crc kubenswrapper[5136]: I0320 09:03:24.417731 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1da401a4-384d-4911-bf25-0aa4c544fd0d" path="/var/lib/kubelet/pods/1da401a4-384d-4911-bf25-0aa4c544fd0d/volumes"
Mar 20 09:03:24 crc kubenswrapper[5136]: I0320 09:03:24.419726 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7be786a7-1dee-4cfb-bada-4883a9326c71" path="/var/lib/kubelet/pods/7be786a7-1dee-4cfb-bada-4883a9326c71/volumes"
Mar 20 09:03:25 crc kubenswrapper[5136]: E0320 09:03:25.590605 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Mar 20 09:03:25 crc kubenswrapper[5136]: E0320 09:03:25.591785 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Mar 20 09:03:25 crc kubenswrapper[5136]: E0320 09:03:25.593463 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Mar 20 09:03:25 crc kubenswrapper[5136]: E0320 09:03:25.593508 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7659754fcd-klwkv" podUID="53ac16e5-846e-40c1-a361-0815d231345a" containerName="heat-engine"
Mar 20 09:03:33 crc kubenswrapper[5136]: I0320 09:03:33.548587 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226","Type":"ContainerDied","Data":"a92d1246191abb0ecac747ee2b70c76f1c8719ff40ea6ea858ab04c5eaa88bb1"}
Mar 20 09:03:33 crc kubenswrapper[5136]: I0320 09:03:33.548739 5136 generic.go:334] "Generic (PLEG): container finished" podID="30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226" containerID="a92d1246191abb0ecac747ee2b70c76f1c8719ff40ea6ea858ab04c5eaa88bb1" exitCode=137
Mar 20 09:03:33 crc kubenswrapper[5136]: I0320 09:03:33.848868 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Mar 20 09:03:33 crc kubenswrapper[5136]: I0320 09:03:33.978960 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mariadb-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf\") pod \"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226\" (UID: \"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226\") "
Mar 20 09:03:33 crc kubenswrapper[5136]: I0320 09:03:33.979059 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc9kw\" (UniqueName: \"kubernetes.io/projected/30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226-kube-api-access-cc9kw\") pod \"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226\" (UID: \"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226\") "
Mar 20 09:03:33 crc kubenswrapper[5136]: I0320 09:03:33.990139 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226-kube-api-access-cc9kw" (OuterVolumeSpecName: "kube-api-access-cc9kw") pod "30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226" (UID: "30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226"). InnerVolumeSpecName "kube-api-access-cc9kw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:03:33 crc kubenswrapper[5136]: I0320 09:03:33.992922 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf" (OuterVolumeSpecName: "mariadb-data") pod "30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226" (UID: "30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226"). InnerVolumeSpecName "pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.015214 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data"
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.080373 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc9kw\" (UniqueName: \"kubernetes.io/projected/30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226-kube-api-access-cc9kw\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.080433 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf\") on node \"crc\" "
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.095578 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.095794 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf") on node "crc"
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.181716 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkx6j\" (UniqueName: \"kubernetes.io/projected/c068d291-989b-4247-8cee-0596033c8ce5-kube-api-access-mkx6j\") pod \"c068d291-989b-4247-8cee-0596033c8ce5\" (UID: \"c068d291-989b-4247-8cee-0596033c8ce5\") "
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.181849 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/c068d291-989b-4247-8cee-0596033c8ce5-ovn-data-cert\") pod \"c068d291-989b-4247-8cee-0596033c8ce5\" (UID: \"c068d291-989b-4247-8cee-0596033c8ce5\") "
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.182852 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f678964a-3590-4064-b82f-274887925e72\") pod \"c068d291-989b-4247-8cee-0596033c8ce5\" (UID: \"c068d291-989b-4247-8cee-0596033c8ce5\") "
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.183328 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.185025 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c068d291-989b-4247-8cee-0596033c8ce5-kube-api-access-mkx6j" (OuterVolumeSpecName: "kube-api-access-mkx6j") pod "c068d291-989b-4247-8cee-0596033c8ce5" (UID: "c068d291-989b-4247-8cee-0596033c8ce5"). InnerVolumeSpecName "kube-api-access-mkx6j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.189071 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c068d291-989b-4247-8cee-0596033c8ce5-ovn-data-cert" (OuterVolumeSpecName: "ovn-data-cert") pod "c068d291-989b-4247-8cee-0596033c8ce5" (UID: "c068d291-989b-4247-8cee-0596033c8ce5"). InnerVolumeSpecName "ovn-data-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.202486 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f678964a-3590-4064-b82f-274887925e72" (OuterVolumeSpecName: "ovn-data") pod "c068d291-989b-4247-8cee-0596033c8ce5" (UID: "c068d291-989b-4247-8cee-0596033c8ce5"). InnerVolumeSpecName "pvc-f678964a-3590-4064-b82f-274887925e72". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.284789 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkx6j\" (UniqueName: \"kubernetes.io/projected/c068d291-989b-4247-8cee-0596033c8ce5-kube-api-access-mkx6j\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.285102 5136 reconciler_common.go:293] "Volume detached for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/c068d291-989b-4247-8cee-0596033c8ce5-ovn-data-cert\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.285304 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f678964a-3590-4064-b82f-274887925e72\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f678964a-3590-4064-b82f-274887925e72\") on node \"crc\" "
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.303231 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.303505 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f678964a-3590-4064-b82f-274887925e72" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f678964a-3590-4064-b82f-274887925e72") on node "crc"
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.388360 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-f678964a-3590-4064-b82f-274887925e72\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f678964a-3590-4064-b82f-274887925e72\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.572540 5136 generic.go:334] "Generic (PLEG): container finished" podID="c068d291-989b-4247-8cee-0596033c8ce5" containerID="b41467e56da65e5424050a8c3c0bd7b4c18471ad0ef23aa5de432ccb8a7ee386" exitCode=137
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.572578 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data"
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.572643 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"c068d291-989b-4247-8cee-0596033c8ce5","Type":"ContainerDied","Data":"b41467e56da65e5424050a8c3c0bd7b4c18471ad0ef23aa5de432ccb8a7ee386"}
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.572674 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"c068d291-989b-4247-8cee-0596033c8ce5","Type":"ContainerDied","Data":"e581eb3896caa8dce4da5d70ae2539c97df467c09420153c45b9ba77109b2e63"}
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.572713 5136 scope.go:117] "RemoveContainer" containerID="b41467e56da65e5424050a8c3c0bd7b4c18471ad0ef23aa5de432ccb8a7ee386"
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.576533 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226","Type":"ContainerDied","Data":"2c2faf3df1acecb9c43fb4e3dfa1b1bce7305d443462043dbca7203ee15e6fb8"}
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.576605 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.601128 5136 scope.go:117] "RemoveContainer" containerID="b41467e56da65e5424050a8c3c0bd7b4c18471ad0ef23aa5de432ccb8a7ee386"
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.603019 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"]
Mar 20 09:03:34 crc kubenswrapper[5136]: E0320 09:03:34.604567 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b41467e56da65e5424050a8c3c0bd7b4c18471ad0ef23aa5de432ccb8a7ee386\": container with ID starting with b41467e56da65e5424050a8c3c0bd7b4c18471ad0ef23aa5de432ccb8a7ee386 not found: ID does not exist" containerID="b41467e56da65e5424050a8c3c0bd7b4c18471ad0ef23aa5de432ccb8a7ee386"
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.604632 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b41467e56da65e5424050a8c3c0bd7b4c18471ad0ef23aa5de432ccb8a7ee386"} err="failed to get container status \"b41467e56da65e5424050a8c3c0bd7b4c18471ad0ef23aa5de432ccb8a7ee386\": rpc error: code = NotFound desc = could not find container \"b41467e56da65e5424050a8c3c0bd7b4c18471ad0ef23aa5de432ccb8a7ee386\": container with ID starting with b41467e56da65e5424050a8c3c0bd7b4c18471ad0ef23aa5de432ccb8a7ee386 not found: ID does not exist"
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.604655 5136 scope.go:117] "RemoveContainer" containerID="a92d1246191abb0ecac747ee2b70c76f1c8719ff40ea6ea858ab04c5eaa88bb1"
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.613980 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-copy-data"]
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.620585 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"]
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.626414 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-copy-data"]
Mar 20 09:03:35 crc kubenswrapper[5136]: E0320 09:03:35.589624 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Mar 20 09:03:35 crc kubenswrapper[5136]: E0320 09:03:35.591509 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Mar 20 09:03:35 crc kubenswrapper[5136]: E0320 09:03:35.592848 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Mar 20 09:03:35 crc kubenswrapper[5136]: E0320 09:03:35.592877 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7659754fcd-klwkv" podUID="53ac16e5-846e-40c1-a361-0815d231345a" containerName="heat-engine"
Mar 20 09:03:36 crc kubenswrapper[5136]: I0320 09:03:36.406842 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226" path="/var/lib/kubelet/pods/30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226/volumes"
Mar 20 09:03:36 crc kubenswrapper[5136]: I0320 09:03:36.407396 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c068d291-989b-4247-8cee-0596033c8ce5" path="/var/lib/kubelet/pods/c068d291-989b-4247-8cee-0596033c8ce5/volumes"
Mar 20 09:03:42 crc kubenswrapper[5136]: I0320 09:03:42.516266 5136 scope.go:117] "RemoveContainer" containerID="be01a18339108a324f38d8991f30133c5afcdbbf8536a6fc62d20def93a4fe70"
Mar 20 09:03:42 crc kubenswrapper[5136]: I0320 09:03:42.588919 5136 scope.go:117] "RemoveContainer" containerID="036a56dc8be2b1448ecd4eaee7ae6cfc9fce54b35893d14784a5e2a194d245a2"
Mar 20 09:03:42 crc kubenswrapper[5136]: I0320 09:03:42.619712 5136 scope.go:117] "RemoveContainer" containerID="f6175692bafced85aff7b1e4e0d62331fe62665eef24726a05ac9691debbacde"
Mar 20 09:03:42 crc kubenswrapper[5136]: I0320 09:03:42.655970 5136 scope.go:117] "RemoveContainer" containerID="bd3d02ee4935523ab4eb4492588717b04d2271f1f22be17fbab8ebb01a7e4c49"
Mar 20 09:03:42 crc kubenswrapper[5136]: I0320 09:03:42.677735 5136 scope.go:117] "RemoveContainer" containerID="a2cd799ad38f20f3a20df188a90ca9d10f639dafb3f002a582a1fe8b8331c153"
Mar 20 09:03:42 crc kubenswrapper[5136]: I0320 09:03:42.709902 5136 scope.go:117] "RemoveContainer" containerID="249161d201c86c824c827618ff63c208e5d4f7836f10cac1b975be51340fe2bc"
Mar 20 09:03:42 crc kubenswrapper[5136]: I0320 09:03:42.750215 5136 scope.go:117] "RemoveContainer" containerID="5df9d903ae57ec8baad2fe6c51be0e13f0c8a558bfc5471ea6ef07feb8e164f7"
Mar 20 09:03:42 crc kubenswrapper[5136]: I0320 09:03:42.778253 5136 scope.go:117] "RemoveContainer" containerID="ed9ea6d3f8369f00e748cdbd6f737c4f4b838eb8db7325e29aaf558dc66f2d6f"
Mar 20 09:03:42 crc kubenswrapper[5136]: I0320 09:03:42.832690 5136 scope.go:117] "RemoveContainer" containerID="bbdf8dabf0d2951b09ae3f63cdd6eda3f6af581fbac68d093607e16820b73e60"
Mar 20 09:03:42 crc kubenswrapper[5136]: I0320 09:03:42.871388 5136 scope.go:117] "RemoveContainer" containerID="0596189127fdfe0bb4f8c43c9a281f3d0d01a460eb398984e9cddcf692a4beaa"
Mar 20 09:03:45 crc kubenswrapper[5136]: E0320 09:03:45.590269 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Mar 20 09:03:45 crc kubenswrapper[5136]: E0320 09:03:45.591948 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Mar 20 09:03:45 crc kubenswrapper[5136]: E0320 09:03:45.593280 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Mar 20 09:03:45 crc kubenswrapper[5136]: E0320 09:03:45.593310 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7659754fcd-klwkv" podUID="53ac16e5-846e-40c1-a361-0815d231345a" containerName="heat-engine"
Mar 20 09:03:45 crc kubenswrapper[5136]: I0320 09:03:45.822430 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 09:03:45 crc kubenswrapper[5136]: I0320 09:03:45.822746 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 09:03:45 crc kubenswrapper[5136]: I0320 09:03:45.822948 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28"
Mar 20 09:03:45 crc kubenswrapper[5136]: I0320 09:03:45.823781 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"052911170bf346d7ceda8571bf74edeeb05f27214bc5f82c24d971afe343a42b"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 20 09:03:45 crc kubenswrapper[5136]: I0320 09:03:45.823980 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://052911170bf346d7ceda8571bf74edeeb05f27214bc5f82c24d971afe343a42b" gracePeriod=600
Mar 20 09:03:46 crc kubenswrapper[5136]: I0320 09:03:46.695567 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="052911170bf346d7ceda8571bf74edeeb05f27214bc5f82c24d971afe343a42b" exitCode=0
Mar 20 09:03:46 crc kubenswrapper[5136]: I0320 09:03:46.695608 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"052911170bf346d7ceda8571bf74edeeb05f27214bc5f82c24d971afe343a42b"}
Mar 20 09:03:46 crc kubenswrapper[5136]: I0320 09:03:46.695867 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51"}
Mar 20 09:03:46 crc kubenswrapper[5136]: I0320 09:03:46.695887 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2"
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.146441 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7659754fcd-klwkv"
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.331350 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-combined-ca-bundle\") pod \"53ac16e5-846e-40c1-a361-0815d231345a\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") "
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.331403 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-config-data-custom\") pod \"53ac16e5-846e-40c1-a361-0815d231345a\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") "
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.331453 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhkvs\" (UniqueName: \"kubernetes.io/projected/53ac16e5-846e-40c1-a361-0815d231345a-kube-api-access-vhkvs\") pod \"53ac16e5-846e-40c1-a361-0815d231345a\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") "
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.331511 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-config-data\") pod \"53ac16e5-846e-40c1-a361-0815d231345a\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") "
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.341072 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "53ac16e5-846e-40c1-a361-0815d231345a" (UID: "53ac16e5-846e-40c1-a361-0815d231345a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.341489 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53ac16e5-846e-40c1-a361-0815d231345a-kube-api-access-vhkvs" (OuterVolumeSpecName: "kube-api-access-vhkvs") pod "53ac16e5-846e-40c1-a361-0815d231345a" (UID: "53ac16e5-846e-40c1-a361-0815d231345a"). InnerVolumeSpecName "kube-api-access-vhkvs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.355340 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53ac16e5-846e-40c1-a361-0815d231345a" (UID: "53ac16e5-846e-40c1-a361-0815d231345a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.382220 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-config-data" (OuterVolumeSpecName: "config-data") pod "53ac16e5-846e-40c1-a361-0815d231345a" (UID: "53ac16e5-846e-40c1-a361-0815d231345a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.435008 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.435066 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.435089 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.435110 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhkvs\" (UniqueName: \"kubernetes.io/projected/53ac16e5-846e-40c1-a361-0815d231345a-kube-api-access-vhkvs\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.778517 5136 generic.go:334] "Generic (PLEG): container finished" podID="53ac16e5-846e-40c1-a361-0815d231345a" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" exitCode=137
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.778582 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7659754fcd-klwkv"
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.778609 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7659754fcd-klwkv" event={"ID":"53ac16e5-846e-40c1-a361-0815d231345a","Type":"ContainerDied","Data":"72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6"}
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.778656 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7659754fcd-klwkv" event={"ID":"53ac16e5-846e-40c1-a361-0815d231345a","Type":"ContainerDied","Data":"d50b7d1e3cf945251d9601fa11e23c7c876806ca081f20f39fb0f6c33187004b"}
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.778681 5136 scope.go:117] "RemoveContainer" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6"
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.809315 5136 scope.go:117] "RemoveContainer" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6"
Mar 20 09:03:55 crc kubenswrapper[5136]: E0320 09:03:55.810065 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6\": container with ID starting with 72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6 not found: ID does not exist" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6"
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.810124 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6"} err="failed to get container status \"72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6\": rpc error: code = NotFound desc = could not find container \"72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6\": container with ID starting with 72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6 not found: ID does not exist"
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.839732 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7659754fcd-klwkv"]
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.879558 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-7659754fcd-klwkv"]
Mar 20 09:03:56 crc kubenswrapper[5136]: I0320 09:03:56.407896 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53ac16e5-846e-40c1-a361-0815d231345a" path="/var/lib/kubelet/pods/53ac16e5-846e-40c1-a361-0815d231345a/volumes"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.160436 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566624-n9gpj"]
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.160788 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c068d291-989b-4247-8cee-0596033c8ce5" containerName="adoption"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.160803 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c068d291-989b-4247-8cee-0596033c8ce5" containerName="adoption"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.160831 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305f3f22-2f38-44c5-8e63-1f028edce331" containerName="neutron-httpd"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.160840 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="305f3f22-2f38-44c5-8e63-1f028edce331" containerName="neutron-httpd"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.160853 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b" containerName="kube-state-metrics"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.160862 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b" containerName="kube-state-metrics"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.160880 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2c9ab46-3143-4472-a606-cd75def78f41" containerName="setup-container"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.160887 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2c9ab46-3143-4472-a606-cd75def78f41" containerName="setup-container"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.160901 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305f3f22-2f38-44c5-8e63-1f028edce331" containerName="neutron-api"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.160908 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="305f3f22-2f38-44c5-8e63-1f028edce331" containerName="neutron-api"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.160918 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" containerName="setup-container"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.160926 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" containerName="setup-container"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.160936 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf0c76a-c284-44b5-9aee-293de926cb90" containerName="galera"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.160944 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf0c76a-c284-44b5-9aee-293de926cb90" containerName="galera"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.160957 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22659681-bc2b-4056-81d6-96b046e45712" containerName="ovn-northd"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.160963 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="22659681-bc2b-4056-81d6-96b046e45712" containerName="ovn-northd"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.160972 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerName="horizon"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.160978 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerName="horizon"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.160990 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-api"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.160997 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-api"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161008 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6492170d-c425-4bc1-8f26-b002ade2a30a" containerName="keystone-api"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161014 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6492170d-c425-4bc1-8f26-b002ade2a30a" containerName="keystone-api"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161024 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="ceilometer-central-agent"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161032 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="ceilometer-central-agent"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161042 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11508a60-8214-4811-898f-9542eee208d5" containerName="nova-cell0-conductor-conductor"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161049 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="11508a60-8214-4811-898f-9542eee208d5" containerName="nova-cell0-conductor-conductor"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161058 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be786a7-1dee-4cfb-bada-4883a9326c71" containerName="probe"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161066 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be786a7-1dee-4cfb-bada-4883a9326c71" containerName="probe"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161077 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" containerName="rabbitmq"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161083 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" containerName="rabbitmq"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161095 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="sg-core"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161102 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="sg-core"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161111 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-notifier"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161117 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-notifier"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161129 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fcd7752-be4a-45af-b12d-f4ee6275b3b3" containerName="memcached"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161136 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fcd7752-be4a-45af-b12d-f4ee6275b3b3"
containerName="memcached" Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161147 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22659681-bc2b-4056-81d6-96b046e45712" containerName="openstack-network-exporter" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161154 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="22659681-bc2b-4056-81d6-96b046e45712" containerName="openstack-network-exporter" Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161162 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be786a7-1dee-4cfb-bada-4883a9326c71" containerName="cinder-scheduler" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161169 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be786a7-1dee-4cfb-bada-4883a9326c71" containerName="cinder-scheduler" Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161182 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ac16e5-846e-40c1-a361-0815d231345a" containerName="heat-engine" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161189 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ac16e5-846e-40c1-a361-0815d231345a" containerName="heat-engine" Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161203 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d397a968-433e-4de9-8ed7-d0247aa5e775" containerName="heat-api" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161210 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d397a968-433e-4de9-8ed7-d0247aa5e775" containerName="heat-api" Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161221 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="ceilometer-notification-agent" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161228 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" 
containerName="ceilometer-notification-agent" Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161237 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba581c3-e77a-4db7-ac50-bdb17291b2c7" containerName="nova-metadata-log" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161244 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba581c3-e77a-4db7-ac50-bdb17291b2c7" containerName="nova-metadata-log" Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161254 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25dc915a-6dbf-4622-bd14-1b372cfe9acc" containerName="heat-cfnapi" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161261 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="25dc915a-6dbf-4622-bd14-1b372cfe9acc" containerName="heat-cfnapi" Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161272 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="proxy-httpd" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161279 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="proxy-httpd" Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161289 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-evaluator" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161296 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-evaluator" Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161305 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226" containerName="adoption" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161312 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226" 
containerName="adoption" Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161325 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerName="horizon-log" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161332 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerName="horizon-log" Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161344 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba581c3-e77a-4db7-ac50-bdb17291b2c7" containerName="nova-metadata-metadata" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161350 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba581c3-e77a-4db7-ac50-bdb17291b2c7" containerName="nova-metadata-metadata" Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161362 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf0c76a-c284-44b5-9aee-293de926cb90" containerName="mysql-bootstrap" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161369 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf0c76a-c284-44b5-9aee-293de926cb90" containerName="mysql-bootstrap" Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161385 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-listener" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161392 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-listener" Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161404 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2c9ab46-3143-4472-a606-cd75def78f41" containerName="rabbitmq" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161412 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2c9ab46-3143-4472-a606-cd75def78f41" containerName="rabbitmq" 
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161565 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6492170d-c425-4bc1-8f26-b002ade2a30a" containerName="keystone-api" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161583 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="22659681-bc2b-4056-81d6-96b046e45712" containerName="openstack-network-exporter" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161597 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" containerName="rabbitmq" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161611 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-api" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161625 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba581c3-e77a-4db7-ac50-bdb17291b2c7" containerName="nova-metadata-log" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161639 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b" containerName="kube-state-metrics" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161651 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2c9ab46-3143-4472-a606-cd75def78f41" containerName="rabbitmq" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161661 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="305f3f22-2f38-44c5-8e63-1f028edce331" containerName="neutron-httpd" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161672 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="ceilometer-notification-agent" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161685 5136 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fba581c3-e77a-4db7-ac50-bdb17291b2c7" containerName="nova-metadata-metadata" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161697 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="11508a60-8214-4811-898f-9542eee208d5" containerName="nova-cell0-conductor-conductor" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161712 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c068d291-989b-4247-8cee-0596033c8ce5" containerName="adoption" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161721 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-listener" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161732 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d397a968-433e-4de9-8ed7-d0247aa5e775" containerName="heat-api" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161744 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="ceilometer-central-agent" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161757 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-notifier" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161772 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cf0c76a-c284-44b5-9aee-293de926cb90" containerName="galera" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161784 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="305f3f22-2f38-44c5-8e63-1f028edce331" containerName="neutron-api" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161795 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="25dc915a-6dbf-4622-bd14-1b372cfe9acc" containerName="heat-cfnapi" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161808 5136 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerName="horizon-log" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161837 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="proxy-httpd" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161847 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fcd7752-be4a-45af-b12d-f4ee6275b3b3" containerName="memcached" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161854 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-evaluator" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161865 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ac16e5-846e-40c1-a361-0815d231345a" containerName="heat-engine" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161875 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="22659681-bc2b-4056-81d6-96b046e45712" containerName="ovn-northd" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161883 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="sg-core" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161891 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226" containerName="adoption" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161902 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerName="horizon" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161913 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be786a7-1dee-4cfb-bada-4883a9326c71" containerName="cinder-scheduler" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161923 5136 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="7be786a7-1dee-4cfb-bada-4883a9326c71" containerName="probe" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.162526 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566624-n9gpj" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.169756 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.170037 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.170653 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.188594 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566624-n9gpj"] Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.212156 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6mlc\" (UniqueName: \"kubernetes.io/projected/2c69bbe6-8752-4d39-b2e4-2eab9134dbda-kube-api-access-l6mlc\") pod \"auto-csr-approver-29566624-n9gpj\" (UID: \"2c69bbe6-8752-4d39-b2e4-2eab9134dbda\") " pod="openshift-infra/auto-csr-approver-29566624-n9gpj" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.313741 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6mlc\" (UniqueName: \"kubernetes.io/projected/2c69bbe6-8752-4d39-b2e4-2eab9134dbda-kube-api-access-l6mlc\") pod \"auto-csr-approver-29566624-n9gpj\" (UID: \"2c69bbe6-8752-4d39-b2e4-2eab9134dbda\") " pod="openshift-infra/auto-csr-approver-29566624-n9gpj" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.338283 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-l6mlc\" (UniqueName: \"kubernetes.io/projected/2c69bbe6-8752-4d39-b2e4-2eab9134dbda-kube-api-access-l6mlc\") pod \"auto-csr-approver-29566624-n9gpj\" (UID: \"2c69bbe6-8752-4d39-b2e4-2eab9134dbda\") " pod="openshift-infra/auto-csr-approver-29566624-n9gpj" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.489267 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566624-n9gpj" Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.916798 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566624-n9gpj"] Mar 20 09:04:01 crc kubenswrapper[5136]: I0320 09:04:01.834419 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566624-n9gpj" event={"ID":"2c69bbe6-8752-4d39-b2e4-2eab9134dbda","Type":"ContainerStarted","Data":"8abd98fdbcc90c77025204d2045d2c3addc2296a4b751024a75339d7473623f5"} Mar 20 09:04:02 crc kubenswrapper[5136]: I0320 09:04:02.846865 5136 generic.go:334] "Generic (PLEG): container finished" podID="2c69bbe6-8752-4d39-b2e4-2eab9134dbda" containerID="35e33276bd939043cf0f403b9a2e455c0ebe9937a874e7a190c199a2c2c31266" exitCode=0 Mar 20 09:04:02 crc kubenswrapper[5136]: I0320 09:04:02.846956 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566624-n9gpj" event={"ID":"2c69bbe6-8752-4d39-b2e4-2eab9134dbda","Type":"ContainerDied","Data":"35e33276bd939043cf0f403b9a2e455c0ebe9937a874e7a190c199a2c2c31266"} Mar 20 09:04:04 crc kubenswrapper[5136]: I0320 09:04:04.246053 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566624-n9gpj" Mar 20 09:04:04 crc kubenswrapper[5136]: I0320 09:04:04.263977 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6mlc\" (UniqueName: \"kubernetes.io/projected/2c69bbe6-8752-4d39-b2e4-2eab9134dbda-kube-api-access-l6mlc\") pod \"2c69bbe6-8752-4d39-b2e4-2eab9134dbda\" (UID: \"2c69bbe6-8752-4d39-b2e4-2eab9134dbda\") " Mar 20 09:04:04 crc kubenswrapper[5136]: I0320 09:04:04.270128 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c69bbe6-8752-4d39-b2e4-2eab9134dbda-kube-api-access-l6mlc" (OuterVolumeSpecName: "kube-api-access-l6mlc") pod "2c69bbe6-8752-4d39-b2e4-2eab9134dbda" (UID: "2c69bbe6-8752-4d39-b2e4-2eab9134dbda"). InnerVolumeSpecName "kube-api-access-l6mlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:04:04 crc kubenswrapper[5136]: I0320 09:04:04.365575 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6mlc\" (UniqueName: \"kubernetes.io/projected/2c69bbe6-8752-4d39-b2e4-2eab9134dbda-kube-api-access-l6mlc\") on node \"crc\" DevicePath \"\"" Mar 20 09:04:04 crc kubenswrapper[5136]: I0320 09:04:04.861898 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566624-n9gpj" event={"ID":"2c69bbe6-8752-4d39-b2e4-2eab9134dbda","Type":"ContainerDied","Data":"8abd98fdbcc90c77025204d2045d2c3addc2296a4b751024a75339d7473623f5"} Mar 20 09:04:04 crc kubenswrapper[5136]: I0320 09:04:04.861978 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8abd98fdbcc90c77025204d2045d2c3addc2296a4b751024a75339d7473623f5" Mar 20 09:04:04 crc kubenswrapper[5136]: I0320 09:04:04.861920 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566624-n9gpj" Mar 20 09:04:05 crc kubenswrapper[5136]: I0320 09:04:05.307974 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566618-mxdtq"] Mar 20 09:04:05 crc kubenswrapper[5136]: I0320 09:04:05.315378 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566618-mxdtq"] Mar 20 09:04:06 crc kubenswrapper[5136]: I0320 09:04:06.407063 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9b36af1-10a6-412b-a488-892560533fbc" path="/var/lib/kubelet/pods/c9b36af1-10a6-412b-a488-892560533fbc/volumes" Mar 20 09:04:43 crc kubenswrapper[5136]: I0320 09:04:43.571485 5136 scope.go:117] "RemoveContainer" containerID="95c081e05a9b5bcdeb5bad36239bef981a1b982b216877bab99876ec036176fc" Mar 20 09:04:43 crc kubenswrapper[5136]: I0320 09:04:43.641726 5136 scope.go:117] "RemoveContainer" containerID="a3cfce9ddd036c7cf63f0412122dd66290e1e5696f18bc8cd0c8f3ee086aeaaa" Mar 20 09:04:43 crc kubenswrapper[5136]: I0320 09:04:43.659100 5136 scope.go:117] "RemoveContainer" containerID="c48b0b35287dd607ba880a1efbf0d79170312ce43d17777e16e04be5b17bbe8a" Mar 20 09:04:43 crc kubenswrapper[5136]: I0320 09:04:43.681512 5136 scope.go:117] "RemoveContainer" containerID="20f8a9a945087915c09ba9f6c5bb3fad1e06a23db6077bf675c7ee359a2b9ea4" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.396636 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ppsnr/must-gather-8lzbv"] Mar 20 09:04:57 crc kubenswrapper[5136]: E0320 09:04:57.397715 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c69bbe6-8752-4d39-b2e4-2eab9134dbda" containerName="oc" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.397733 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c69bbe6-8752-4d39-b2e4-2eab9134dbda" containerName="oc" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.397943 
5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c69bbe6-8752-4d39-b2e4-2eab9134dbda" containerName="oc" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.398942 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ppsnr/must-gather-8lzbv" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.400545 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ppsnr"/"default-dockercfg-w9w8v" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.401035 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ppsnr"/"openshift-service-ca.crt" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.402052 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ppsnr"/"kube-root-ca.crt" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.406210 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ppsnr/must-gather-8lzbv"] Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.474656 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qnw8\" (UniqueName: \"kubernetes.io/projected/ca8086a5-288f-4e6b-80ae-07842239f3a9-kube-api-access-7qnw8\") pod \"must-gather-8lzbv\" (UID: \"ca8086a5-288f-4e6b-80ae-07842239f3a9\") " pod="openshift-must-gather-ppsnr/must-gather-8lzbv" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.475068 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ca8086a5-288f-4e6b-80ae-07842239f3a9-must-gather-output\") pod \"must-gather-8lzbv\" (UID: \"ca8086a5-288f-4e6b-80ae-07842239f3a9\") " pod="openshift-must-gather-ppsnr/must-gather-8lzbv" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.576728 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7qnw8\" (UniqueName: \"kubernetes.io/projected/ca8086a5-288f-4e6b-80ae-07842239f3a9-kube-api-access-7qnw8\") pod \"must-gather-8lzbv\" (UID: \"ca8086a5-288f-4e6b-80ae-07842239f3a9\") " pod="openshift-must-gather-ppsnr/must-gather-8lzbv" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.577113 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ca8086a5-288f-4e6b-80ae-07842239f3a9-must-gather-output\") pod \"must-gather-8lzbv\" (UID: \"ca8086a5-288f-4e6b-80ae-07842239f3a9\") " pod="openshift-must-gather-ppsnr/must-gather-8lzbv" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.577684 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ca8086a5-288f-4e6b-80ae-07842239f3a9-must-gather-output\") pod \"must-gather-8lzbv\" (UID: \"ca8086a5-288f-4e6b-80ae-07842239f3a9\") " pod="openshift-must-gather-ppsnr/must-gather-8lzbv" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.599611 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qnw8\" (UniqueName: \"kubernetes.io/projected/ca8086a5-288f-4e6b-80ae-07842239f3a9-kube-api-access-7qnw8\") pod \"must-gather-8lzbv\" (UID: \"ca8086a5-288f-4e6b-80ae-07842239f3a9\") " pod="openshift-must-gather-ppsnr/must-gather-8lzbv" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.715950 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ppsnr/must-gather-8lzbv" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.990773 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ppsnr/must-gather-8lzbv"] Mar 20 09:04:58 crc kubenswrapper[5136]: I0320 09:04:58.336584 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ppsnr/must-gather-8lzbv" event={"ID":"ca8086a5-288f-4e6b-80ae-07842239f3a9","Type":"ContainerStarted","Data":"cf163eeeb83d64d0e119dfe030e24a3a07d52a3c1734c5a472356d78c486daa0"} Mar 20 09:05:04 crc kubenswrapper[5136]: I0320 09:05:04.183052 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ppsnr/crc-debug-n92lq"] Mar 20 09:05:04 crc kubenswrapper[5136]: I0320 09:05:04.184377 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ppsnr/crc-debug-n92lq" Mar 20 09:05:04 crc kubenswrapper[5136]: I0320 09:05:04.271529 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szvfx\" (UniqueName: \"kubernetes.io/projected/ca05797c-feba-4bf1-969a-cf8268c5416e-kube-api-access-szvfx\") pod \"crc-debug-n92lq\" (UID: \"ca05797c-feba-4bf1-969a-cf8268c5416e\") " pod="openshift-must-gather-ppsnr/crc-debug-n92lq" Mar 20 09:05:04 crc kubenswrapper[5136]: I0320 09:05:04.271592 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca05797c-feba-4bf1-969a-cf8268c5416e-host\") pod \"crc-debug-n92lq\" (UID: \"ca05797c-feba-4bf1-969a-cf8268c5416e\") " pod="openshift-must-gather-ppsnr/crc-debug-n92lq" Mar 20 09:05:04 crc kubenswrapper[5136]: I0320 09:05:04.373472 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szvfx\" (UniqueName: \"kubernetes.io/projected/ca05797c-feba-4bf1-969a-cf8268c5416e-kube-api-access-szvfx\") pod 
\"crc-debug-n92lq\" (UID: \"ca05797c-feba-4bf1-969a-cf8268c5416e\") " pod="openshift-must-gather-ppsnr/crc-debug-n92lq" Mar 20 09:05:04 crc kubenswrapper[5136]: I0320 09:05:04.373525 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca05797c-feba-4bf1-969a-cf8268c5416e-host\") pod \"crc-debug-n92lq\" (UID: \"ca05797c-feba-4bf1-969a-cf8268c5416e\") " pod="openshift-must-gather-ppsnr/crc-debug-n92lq" Mar 20 09:05:04 crc kubenswrapper[5136]: I0320 09:05:04.373672 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca05797c-feba-4bf1-969a-cf8268c5416e-host\") pod \"crc-debug-n92lq\" (UID: \"ca05797c-feba-4bf1-969a-cf8268c5416e\") " pod="openshift-must-gather-ppsnr/crc-debug-n92lq" Mar 20 09:05:04 crc kubenswrapper[5136]: I0320 09:05:04.386397 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ppsnr/must-gather-8lzbv" event={"ID":"ca8086a5-288f-4e6b-80ae-07842239f3a9","Type":"ContainerStarted","Data":"908ad886f5a62adfaa68fa4a7daa63fc90813986271a90584d8bbe3e41a99307"} Mar 20 09:05:04 crc kubenswrapper[5136]: I0320 09:05:04.386450 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ppsnr/must-gather-8lzbv" event={"ID":"ca8086a5-288f-4e6b-80ae-07842239f3a9","Type":"ContainerStarted","Data":"f062bb8f36e86ce4d24e41f10a6b67a6be5f0f5fca3b905ceabe12f658332a32"} Mar 20 09:05:04 crc kubenswrapper[5136]: I0320 09:05:04.393096 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szvfx\" (UniqueName: \"kubernetes.io/projected/ca05797c-feba-4bf1-969a-cf8268c5416e-kube-api-access-szvfx\") pod \"crc-debug-n92lq\" (UID: \"ca05797c-feba-4bf1-969a-cf8268c5416e\") " pod="openshift-must-gather-ppsnr/crc-debug-n92lq" Mar 20 09:05:04 crc kubenswrapper[5136]: I0320 09:05:04.413552 5136 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-must-gather-ppsnr/must-gather-8lzbv" podStartSLOduration=1.949763029 podStartE2EDuration="7.413527618s" podCreationTimestamp="2026-03-20 09:04:57 +0000 UTC" firstStartedPulling="2026-03-20 09:04:57.989969044 +0000 UTC m=+8130.249280195" lastFinishedPulling="2026-03-20 09:05:03.453733633 +0000 UTC m=+8135.713044784" observedRunningTime="2026-03-20 09:05:04.400173255 +0000 UTC m=+8136.659484426" watchObservedRunningTime="2026-03-20 09:05:04.413527618 +0000 UTC m=+8136.672838769" Mar 20 09:05:04 crc kubenswrapper[5136]: I0320 09:05:04.499493 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ppsnr/crc-debug-n92lq" Mar 20 09:05:04 crc kubenswrapper[5136]: W0320 09:05:04.519955 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca05797c_feba_4bf1_969a_cf8268c5416e.slice/crio-552a1a2261ff463a0a4ef09c426c0b600d1a196e6a631809f2c20dc0c7273d62 WatchSource:0}: Error finding container 552a1a2261ff463a0a4ef09c426c0b600d1a196e6a631809f2c20dc0c7273d62: Status 404 returned error can't find the container with id 552a1a2261ff463a0a4ef09c426c0b600d1a196e6a631809f2c20dc0c7273d62 Mar 20 09:05:04 crc kubenswrapper[5136]: I0320 09:05:04.522474 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 09:05:05 crc kubenswrapper[5136]: I0320 09:05:05.394085 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ppsnr/crc-debug-n92lq" event={"ID":"ca05797c-feba-4bf1-969a-cf8268c5416e","Type":"ContainerStarted","Data":"552a1a2261ff463a0a4ef09c426c0b600d1a196e6a631809f2c20dc0c7273d62"} Mar 20 09:05:16 crc kubenswrapper[5136]: I0320 09:05:16.492959 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ppsnr/crc-debug-n92lq" 
event={"ID":"ca05797c-feba-4bf1-969a-cf8268c5416e","Type":"ContainerStarted","Data":"cc5de32d3b2f3f78d2de7c2c04c373c815b2dc521f0ac4c71c7880cc929ccffe"} Mar 20 09:05:16 crc kubenswrapper[5136]: I0320 09:05:16.507398 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ppsnr/crc-debug-n92lq" podStartSLOduration=1.324022177 podStartE2EDuration="12.507381048s" podCreationTimestamp="2026-03-20 09:05:04 +0000 UTC" firstStartedPulling="2026-03-20 09:05:04.522170832 +0000 UTC m=+8136.781481983" lastFinishedPulling="2026-03-20 09:05:15.705529703 +0000 UTC m=+8147.964840854" observedRunningTime="2026-03-20 09:05:16.505772449 +0000 UTC m=+8148.765083600" watchObservedRunningTime="2026-03-20 09:05:16.507381048 +0000 UTC m=+8148.766692199" Mar 20 09:05:31 crc kubenswrapper[5136]: I0320 09:05:31.595092 5136 generic.go:334] "Generic (PLEG): container finished" podID="ca05797c-feba-4bf1-969a-cf8268c5416e" containerID="cc5de32d3b2f3f78d2de7c2c04c373c815b2dc521f0ac4c71c7880cc929ccffe" exitCode=0 Mar 20 09:05:31 crc kubenswrapper[5136]: I0320 09:05:31.595173 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ppsnr/crc-debug-n92lq" event={"ID":"ca05797c-feba-4bf1-969a-cf8268c5416e","Type":"ContainerDied","Data":"cc5de32d3b2f3f78d2de7c2c04c373c815b2dc521f0ac4c71c7880cc929ccffe"} Mar 20 09:05:32 crc kubenswrapper[5136]: I0320 09:05:32.684585 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ppsnr/crc-debug-n92lq" Mar 20 09:05:32 crc kubenswrapper[5136]: I0320 09:05:32.706873 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ppsnr/crc-debug-n92lq"] Mar 20 09:05:32 crc kubenswrapper[5136]: I0320 09:05:32.711974 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ppsnr/crc-debug-n92lq"] Mar 20 09:05:32 crc kubenswrapper[5136]: I0320 09:05:32.787788 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca05797c-feba-4bf1-969a-cf8268c5416e-host\") pod \"ca05797c-feba-4bf1-969a-cf8268c5416e\" (UID: \"ca05797c-feba-4bf1-969a-cf8268c5416e\") " Mar 20 09:05:32 crc kubenswrapper[5136]: I0320 09:05:32.787906 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szvfx\" (UniqueName: \"kubernetes.io/projected/ca05797c-feba-4bf1-969a-cf8268c5416e-kube-api-access-szvfx\") pod \"ca05797c-feba-4bf1-969a-cf8268c5416e\" (UID: \"ca05797c-feba-4bf1-969a-cf8268c5416e\") " Mar 20 09:05:32 crc kubenswrapper[5136]: I0320 09:05:32.787953 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca05797c-feba-4bf1-969a-cf8268c5416e-host" (OuterVolumeSpecName: "host") pod "ca05797c-feba-4bf1-969a-cf8268c5416e" (UID: "ca05797c-feba-4bf1-969a-cf8268c5416e"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 09:05:32 crc kubenswrapper[5136]: I0320 09:05:32.788185 5136 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca05797c-feba-4bf1-969a-cf8268c5416e-host\") on node \"crc\" DevicePath \"\"" Mar 20 09:05:32 crc kubenswrapper[5136]: I0320 09:05:32.793423 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca05797c-feba-4bf1-969a-cf8268c5416e-kube-api-access-szvfx" (OuterVolumeSpecName: "kube-api-access-szvfx") pod "ca05797c-feba-4bf1-969a-cf8268c5416e" (UID: "ca05797c-feba-4bf1-969a-cf8268c5416e"). InnerVolumeSpecName "kube-api-access-szvfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:05:32 crc kubenswrapper[5136]: I0320 09:05:32.889428 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szvfx\" (UniqueName: \"kubernetes.io/projected/ca05797c-feba-4bf1-969a-cf8268c5416e-kube-api-access-szvfx\") on node \"crc\" DevicePath \"\"" Mar 20 09:05:33 crc kubenswrapper[5136]: I0320 09:05:33.610010 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="552a1a2261ff463a0a4ef09c426c0b600d1a196e6a631809f2c20dc0c7273d62" Mar 20 09:05:33 crc kubenswrapper[5136]: I0320 09:05:33.610064 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ppsnr/crc-debug-n92lq" Mar 20 09:05:33 crc kubenswrapper[5136]: I0320 09:05:33.891316 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ppsnr/crc-debug-z5cnm"] Mar 20 09:05:33 crc kubenswrapper[5136]: E0320 09:05:33.892036 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca05797c-feba-4bf1-969a-cf8268c5416e" containerName="container-00" Mar 20 09:05:33 crc kubenswrapper[5136]: I0320 09:05:33.892057 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca05797c-feba-4bf1-969a-cf8268c5416e" containerName="container-00" Mar 20 09:05:33 crc kubenswrapper[5136]: I0320 09:05:33.892264 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca05797c-feba-4bf1-969a-cf8268c5416e" containerName="container-00" Mar 20 09:05:33 crc kubenswrapper[5136]: I0320 09:05:33.892844 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ppsnr/crc-debug-z5cnm" Mar 20 09:05:34 crc kubenswrapper[5136]: I0320 09:05:34.004561 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhjh4\" (UniqueName: \"kubernetes.io/projected/7d1cda92-e6b4-4955-9a82-884d297123e2-kube-api-access-lhjh4\") pod \"crc-debug-z5cnm\" (UID: \"7d1cda92-e6b4-4955-9a82-884d297123e2\") " pod="openshift-must-gather-ppsnr/crc-debug-z5cnm" Mar 20 09:05:34 crc kubenswrapper[5136]: I0320 09:05:34.004629 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7d1cda92-e6b4-4955-9a82-884d297123e2-host\") pod \"crc-debug-z5cnm\" (UID: \"7d1cda92-e6b4-4955-9a82-884d297123e2\") " pod="openshift-must-gather-ppsnr/crc-debug-z5cnm" Mar 20 09:05:34 crc kubenswrapper[5136]: I0320 09:05:34.106017 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhjh4\" (UniqueName: 
\"kubernetes.io/projected/7d1cda92-e6b4-4955-9a82-884d297123e2-kube-api-access-lhjh4\") pod \"crc-debug-z5cnm\" (UID: \"7d1cda92-e6b4-4955-9a82-884d297123e2\") " pod="openshift-must-gather-ppsnr/crc-debug-z5cnm" Mar 20 09:05:34 crc kubenswrapper[5136]: I0320 09:05:34.106062 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7d1cda92-e6b4-4955-9a82-884d297123e2-host\") pod \"crc-debug-z5cnm\" (UID: \"7d1cda92-e6b4-4955-9a82-884d297123e2\") " pod="openshift-must-gather-ppsnr/crc-debug-z5cnm" Mar 20 09:05:34 crc kubenswrapper[5136]: I0320 09:05:34.106140 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7d1cda92-e6b4-4955-9a82-884d297123e2-host\") pod \"crc-debug-z5cnm\" (UID: \"7d1cda92-e6b4-4955-9a82-884d297123e2\") " pod="openshift-must-gather-ppsnr/crc-debug-z5cnm" Mar 20 09:05:34 crc kubenswrapper[5136]: I0320 09:05:34.124475 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhjh4\" (UniqueName: \"kubernetes.io/projected/7d1cda92-e6b4-4955-9a82-884d297123e2-kube-api-access-lhjh4\") pod \"crc-debug-z5cnm\" (UID: \"7d1cda92-e6b4-4955-9a82-884d297123e2\") " pod="openshift-must-gather-ppsnr/crc-debug-z5cnm" Mar 20 09:05:34 crc kubenswrapper[5136]: I0320 09:05:34.206667 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ppsnr/crc-debug-z5cnm" Mar 20 09:05:34 crc kubenswrapper[5136]: I0320 09:05:34.408418 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca05797c-feba-4bf1-969a-cf8268c5416e" path="/var/lib/kubelet/pods/ca05797c-feba-4bf1-969a-cf8268c5416e/volumes" Mar 20 09:05:34 crc kubenswrapper[5136]: I0320 09:05:34.619002 5136 generic.go:334] "Generic (PLEG): container finished" podID="7d1cda92-e6b4-4955-9a82-884d297123e2" containerID="17b0f5de8a279485b6aaca68e2060dc79cee0d217678711cb4707b777c79489a" exitCode=1 Mar 20 09:05:34 crc kubenswrapper[5136]: I0320 09:05:34.619061 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ppsnr/crc-debug-z5cnm" event={"ID":"7d1cda92-e6b4-4955-9a82-884d297123e2","Type":"ContainerDied","Data":"17b0f5de8a279485b6aaca68e2060dc79cee0d217678711cb4707b777c79489a"} Mar 20 09:05:34 crc kubenswrapper[5136]: I0320 09:05:34.619104 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ppsnr/crc-debug-z5cnm" event={"ID":"7d1cda92-e6b4-4955-9a82-884d297123e2","Type":"ContainerStarted","Data":"ca22aabf1901c5227b218433ec0a9c1ba4336b9293d038311e7defcd4b3d56b4"} Mar 20 09:05:34 crc kubenswrapper[5136]: I0320 09:05:34.669175 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ppsnr/crc-debug-z5cnm"] Mar 20 09:05:34 crc kubenswrapper[5136]: I0320 09:05:34.677330 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ppsnr/crc-debug-z5cnm"] Mar 20 09:05:35 crc kubenswrapper[5136]: I0320 09:05:35.696548 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ppsnr/crc-debug-z5cnm" Mar 20 09:05:35 crc kubenswrapper[5136]: I0320 09:05:35.829485 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7d1cda92-e6b4-4955-9a82-884d297123e2-host\") pod \"7d1cda92-e6b4-4955-9a82-884d297123e2\" (UID: \"7d1cda92-e6b4-4955-9a82-884d297123e2\") " Mar 20 09:05:35 crc kubenswrapper[5136]: I0320 09:05:35.829580 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhjh4\" (UniqueName: \"kubernetes.io/projected/7d1cda92-e6b4-4955-9a82-884d297123e2-kube-api-access-lhjh4\") pod \"7d1cda92-e6b4-4955-9a82-884d297123e2\" (UID: \"7d1cda92-e6b4-4955-9a82-884d297123e2\") " Mar 20 09:05:35 crc kubenswrapper[5136]: I0320 09:05:35.829579 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d1cda92-e6b4-4955-9a82-884d297123e2-host" (OuterVolumeSpecName: "host") pod "7d1cda92-e6b4-4955-9a82-884d297123e2" (UID: "7d1cda92-e6b4-4955-9a82-884d297123e2"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 09:05:35 crc kubenswrapper[5136]: I0320 09:05:35.829899 5136 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7d1cda92-e6b4-4955-9a82-884d297123e2-host\") on node \"crc\" DevicePath \"\"" Mar 20 09:05:35 crc kubenswrapper[5136]: I0320 09:05:35.839040 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d1cda92-e6b4-4955-9a82-884d297123e2-kube-api-access-lhjh4" (OuterVolumeSpecName: "kube-api-access-lhjh4") pod "7d1cda92-e6b4-4955-9a82-884d297123e2" (UID: "7d1cda92-e6b4-4955-9a82-884d297123e2"). InnerVolumeSpecName "kube-api-access-lhjh4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:05:35 crc kubenswrapper[5136]: I0320 09:05:35.931400 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhjh4\" (UniqueName: \"kubernetes.io/projected/7d1cda92-e6b4-4955-9a82-884d297123e2-kube-api-access-lhjh4\") on node \"crc\" DevicePath \"\"" Mar 20 09:05:36 crc kubenswrapper[5136]: I0320 09:05:36.405780 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d1cda92-e6b4-4955-9a82-884d297123e2" path="/var/lib/kubelet/pods/7d1cda92-e6b4-4955-9a82-884d297123e2/volumes" Mar 20 09:05:36 crc kubenswrapper[5136]: I0320 09:05:36.633501 5136 scope.go:117] "RemoveContainer" containerID="17b0f5de8a279485b6aaca68e2060dc79cee0d217678711cb4707b777c79489a" Mar 20 09:05:36 crc kubenswrapper[5136]: I0320 09:05:36.633552 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ppsnr/crc-debug-z5cnm" Mar 20 09:05:43 crc kubenswrapper[5136]: I0320 09:05:43.756736 5136 scope.go:117] "RemoveContainer" containerID="bbe438fbefca46d6264b55b57938c854859588f624d630a720b3f84f596f758f" Mar 20 09:05:43 crc kubenswrapper[5136]: I0320 09:05:43.777019 5136 scope.go:117] "RemoveContainer" containerID="456904b2c9b2b9b900c280b5851fc80a26454d886df3722f5e23e7c54d551d62" Mar 20 09:05:43 crc kubenswrapper[5136]: I0320 09:05:43.818455 5136 scope.go:117] "RemoveContainer" containerID="3a4f7e90e6acf592c84321f18597eeea1a3c43546eaea8fecf69996c5f79ba99" Mar 20 09:05:58 crc kubenswrapper[5136]: I0320 09:05:58.489325 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4/openstack-network-exporter/0.log" Mar 20 09:05:58 crc kubenswrapper[5136]: I0320 09:05:58.673431 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4/ovsdbserver-nb/0.log" Mar 20 09:06:00 crc kubenswrapper[5136]: I0320 09:06:00.136333 
5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566626-v5t2c"] Mar 20 09:06:00 crc kubenswrapper[5136]: E0320 09:06:00.137097 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d1cda92-e6b4-4955-9a82-884d297123e2" containerName="container-00" Mar 20 09:06:00 crc kubenswrapper[5136]: I0320 09:06:00.137116 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d1cda92-e6b4-4955-9a82-884d297123e2" containerName="container-00" Mar 20 09:06:00 crc kubenswrapper[5136]: I0320 09:06:00.137277 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d1cda92-e6b4-4955-9a82-884d297123e2" containerName="container-00" Mar 20 09:06:00 crc kubenswrapper[5136]: I0320 09:06:00.137786 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566626-v5t2c" Mar 20 09:06:00 crc kubenswrapper[5136]: I0320 09:06:00.140614 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 09:06:00 crc kubenswrapper[5136]: I0320 09:06:00.141027 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 09:06:00 crc kubenswrapper[5136]: I0320 09:06:00.141431 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 09:06:00 crc kubenswrapper[5136]: I0320 09:06:00.150292 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566626-v5t2c"] Mar 20 09:06:00 crc kubenswrapper[5136]: I0320 09:06:00.159642 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7ng6\" (UniqueName: \"kubernetes.io/projected/c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0-kube-api-access-h7ng6\") pod \"auto-csr-approver-29566626-v5t2c\" (UID: \"c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0\") " 
pod="openshift-infra/auto-csr-approver-29566626-v5t2c" Mar 20 09:06:00 crc kubenswrapper[5136]: I0320 09:06:00.261200 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7ng6\" (UniqueName: \"kubernetes.io/projected/c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0-kube-api-access-h7ng6\") pod \"auto-csr-approver-29566626-v5t2c\" (UID: \"c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0\") " pod="openshift-infra/auto-csr-approver-29566626-v5t2c" Mar 20 09:06:00 crc kubenswrapper[5136]: I0320 09:06:00.279357 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7ng6\" (UniqueName: \"kubernetes.io/projected/c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0-kube-api-access-h7ng6\") pod \"auto-csr-approver-29566626-v5t2c\" (UID: \"c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0\") " pod="openshift-infra/auto-csr-approver-29566626-v5t2c" Mar 20 09:06:00 crc kubenswrapper[5136]: I0320 09:06:00.459595 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566626-v5t2c" Mar 20 09:06:00 crc kubenswrapper[5136]: I0320 09:06:00.896007 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566626-v5t2c"] Mar 20 09:06:01 crc kubenswrapper[5136]: I0320 09:06:01.831441 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566626-v5t2c" event={"ID":"c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0","Type":"ContainerStarted","Data":"66cb93f635706732da486bcfe852832e612a1ff6f0de83d8d85a923f2934f09e"} Mar 20 09:06:02 crc kubenswrapper[5136]: I0320 09:06:02.840390 5136 generic.go:334] "Generic (PLEG): container finished" podID="c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0" containerID="61bbb832cea68f4a2623689f58cca96024659f79b778725dbd4701abef2ee9eb" exitCode=0 Mar 20 09:06:02 crc kubenswrapper[5136]: I0320 09:06:02.840445 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29566626-v5t2c" event={"ID":"c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0","Type":"ContainerDied","Data":"61bbb832cea68f4a2623689f58cca96024659f79b778725dbd4701abef2ee9eb"} Mar 20 09:06:04 crc kubenswrapper[5136]: I0320 09:06:04.109353 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566626-v5t2c" Mar 20 09:06:04 crc kubenswrapper[5136]: I0320 09:06:04.218693 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7ng6\" (UniqueName: \"kubernetes.io/projected/c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0-kube-api-access-h7ng6\") pod \"c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0\" (UID: \"c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0\") " Mar 20 09:06:04 crc kubenswrapper[5136]: I0320 09:06:04.223132 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0-kube-api-access-h7ng6" (OuterVolumeSpecName: "kube-api-access-h7ng6") pod "c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0" (UID: "c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0"). InnerVolumeSpecName "kube-api-access-h7ng6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:06:04 crc kubenswrapper[5136]: I0320 09:06:04.320268 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7ng6\" (UniqueName: \"kubernetes.io/projected/c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0-kube-api-access-h7ng6\") on node \"crc\" DevicePath \"\"" Mar 20 09:06:04 crc kubenswrapper[5136]: I0320 09:06:04.856703 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566626-v5t2c" event={"ID":"c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0","Type":"ContainerDied","Data":"66cb93f635706732da486bcfe852832e612a1ff6f0de83d8d85a923f2934f09e"} Mar 20 09:06:04 crc kubenswrapper[5136]: I0320 09:06:04.857287 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66cb93f635706732da486bcfe852832e612a1ff6f0de83d8d85a923f2934f09e" Mar 20 09:06:04 crc kubenswrapper[5136]: I0320 09:06:04.856764 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566626-v5t2c" Mar 20 09:06:05 crc kubenswrapper[5136]: I0320 09:06:05.179401 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566620-sh7c8"] Mar 20 09:06:05 crc kubenswrapper[5136]: I0320 09:06:05.186346 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566620-sh7c8"] Mar 20 09:06:06 crc kubenswrapper[5136]: I0320 09:06:06.407729 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91" path="/var/lib/kubelet/pods/fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91/volumes" Mar 20 09:06:13 crc kubenswrapper[5136]: I0320 09:06:13.109008 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz_6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7/util/0.log" Mar 20 09:06:13 crc kubenswrapper[5136]: I0320 09:06:13.338337 
5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz_6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7/util/0.log" Mar 20 09:06:13 crc kubenswrapper[5136]: I0320 09:06:13.348594 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz_6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7/pull/0.log" Mar 20 09:06:13 crc kubenswrapper[5136]: I0320 09:06:13.348661 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz_6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7/pull/0.log" Mar 20 09:06:13 crc kubenswrapper[5136]: I0320 09:06:13.529261 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz_6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7/util/0.log" Mar 20 09:06:13 crc kubenswrapper[5136]: I0320 09:06:13.577271 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz_6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7/pull/0.log" Mar 20 09:06:13 crc kubenswrapper[5136]: I0320 09:06:13.614255 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz_6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7/extract/0.log" Mar 20 09:06:13 crc kubenswrapper[5136]: I0320 09:06:13.760751 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59bc569d95-5lz5s_86f2c200-3fc8-4ff8-abbd-4e9196951c84/manager/0.log" Mar 20 09:06:13 crc kubenswrapper[5136]: I0320 09:06:13.949626 5136 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-588d4d986b-nzs5m_0454e048-0e5f-454d-a341-627512f745b9/manager/0.log" Mar 20 09:06:14 crc kubenswrapper[5136]: I0320 09:06:14.208804 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-79df6bcc97-4zc57_d9bea0a5-4e0c-4eec-8c57-465238459ec5/manager/0.log" Mar 20 09:06:14 crc kubenswrapper[5136]: I0320 09:06:14.467908 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-67dd5f86f5-j7rd5_98ee6d09-7d19-49ff-af63-3f24c4bbf6de/manager/0.log" Mar 20 09:06:14 crc kubenswrapper[5136]: I0320 09:06:14.670309 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-8464cc45fb-jqkmw_ce8f650c-1729-4d5d-ae70-6cefed6ebe33/manager/0.log" Mar 20 09:06:14 crc kubenswrapper[5136]: I0320 09:06:14.932574 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6f787dddc9-cvwqk_8035ac49-bf5e-4c7a-801a-2e0a9acdbec8/manager/0.log" Mar 20 09:06:15 crc kubenswrapper[5136]: I0320 09:06:15.351935 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-768b96df4c-9vwxq_86ae10c6-6dff-4cac-a399-e03bd4de7134/manager/0.log" Mar 20 09:06:15 crc kubenswrapper[5136]: I0320 09:06:15.522321 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7b9c774f96-rpqlj_fad403b0-ff16-4bfe-a0e3-8f0da431260b/manager/0.log" Mar 20 09:06:15 crc kubenswrapper[5136]: I0320 09:06:15.577580 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-55f864c847-wz6kw_0688d3df-a125-4d57-9699-a87d92b140fa/manager/0.log" Mar 20 09:06:15 crc kubenswrapper[5136]: I0320 09:06:15.821474 5136 patch_prober.go:28] 
interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 09:06:15 crc kubenswrapper[5136]: I0320 09:06:15.821526 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 09:06:15 crc kubenswrapper[5136]: I0320 09:06:15.855893 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-767865f676-8g592_2f2fc86c-b42c-4fd9-94e6-817ed073035d/manager/0.log" Mar 20 09:06:15 crc kubenswrapper[5136]: I0320 09:06:15.900063 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67ccfc9778-w497x_84fa1009-af1d-42a7-ae4d-fe4fd7cbb6e6/manager/0.log" Mar 20 09:06:16 crc kubenswrapper[5136]: I0320 09:06:16.297169 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5b9f45d989-sshvb_e85f51ac-f1e1-4299-91a6-9b27dcc50967/manager/0.log" Mar 20 09:06:16 crc kubenswrapper[5136]: I0320 09:06:16.362619 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5d488d59fb-rdkrz_9b7da04b-f73c-4838-978d-34e4665f3963/manager/0.log" Mar 20 09:06:16 crc kubenswrapper[5136]: I0320 09:06:16.663186 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d58dc466-g62fh_95dfc6ea-897c-4133-ab1e-cefc81ab0623/manager/0.log" Mar 20 09:06:16 crc kubenswrapper[5136]: I0320 09:06:16.676001 5136 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-74c4796899556xf_10cd2a26-beca-4a3b-a791-83cc8cc451ab/manager/0.log" Mar 20 09:06:17 crc kubenswrapper[5136]: I0320 09:06:17.063627 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-b85c4d696-xv6qc_eb51f1ec-5289-4291-8334-0149c355adac/operator/0.log" Mar 20 09:06:17 crc kubenswrapper[5136]: I0320 09:06:17.172987 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-w8k22_4c933e5d-73ac-4820-a31c-e1d5cc5bcae0/registry-server/0.log" Mar 20 09:06:17 crc kubenswrapper[5136]: I0320 09:06:17.445421 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-884679f54-pdmtp_67cd41a3-e91f-4d51-b79a-61d697bbf646/manager/0.log" Mar 20 09:06:17 crc kubenswrapper[5136]: I0320 09:06:17.519380 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5784578c99-58pk7_527edb93-1d3a-45f7-a7c9-f9e28fb6f713/manager/0.log" Mar 20 09:06:17 crc kubenswrapper[5136]: I0320 09:06:17.729886 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vlngd_3dcb58f9-ad42-41ad-af27-2ca462257e77/operator/0.log" Mar 20 09:06:17 crc kubenswrapper[5136]: I0320 09:06:17.805845 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-c674c5965-jmsnc_8129ebe9-8537-403e-9c32-835f54b5d878/manager/0.log" Mar 20 09:06:18 crc kubenswrapper[5136]: I0320 09:06:18.299745 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5c5cb9c4d7-v4npm_547cee69-3d64-49aa-8e95-c19be2bb3089/manager/0.log" Mar 20 09:06:18 crc kubenswrapper[5136]: I0320 09:06:18.487057 5136 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-d6b694c5-qwtfr_489b4c0d-9288-4e00-84ac-23fb05767840/manager/0.log" Mar 20 09:06:18 crc kubenswrapper[5136]: I0320 09:06:18.644356 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6c4d75f7f9-xp6jw_f50bceb5-4fe7-4eba-a9a2-e40f6c89583a/manager/0.log" Mar 20 09:06:18 crc kubenswrapper[5136]: I0320 09:06:18.703075 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-86bd8996f6-5rlp5_9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8/manager/0.log" Mar 20 09:06:38 crc kubenswrapper[5136]: I0320 09:06:38.312704 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-j6ffq_dd410106-c7b7-4706-9b99-38e3597ee713/control-plane-machine-set-operator/0.log" Mar 20 09:06:38 crc kubenswrapper[5136]: I0320 09:06:38.492613 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vbjpm_a3ca072d-707e-4c94-9b3a-81eabc72f840/kube-rbac-proxy/0.log" Mar 20 09:06:38 crc kubenswrapper[5136]: I0320 09:06:38.541489 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vbjpm_a3ca072d-707e-4c94-9b3a-81eabc72f840/machine-api-operator/0.log" Mar 20 09:06:43 crc kubenswrapper[5136]: I0320 09:06:43.918087 5136 scope.go:117] "RemoveContainer" containerID="d46abf622d038618ca2e56c8ba50c8df50e7f199364c722c3c72d53324ea811a" Mar 20 09:06:45 crc kubenswrapper[5136]: I0320 09:06:45.821845 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Mar 20 09:06:45 crc kubenswrapper[5136]: I0320 09:06:45.822193 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 09:06:50 crc kubenswrapper[5136]: I0320 09:06:50.648411 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-d4w65_b06e6b2d-fcba-4ba1-9ba1-82585032b382/cert-manager-controller/0.log" Mar 20 09:06:50 crc kubenswrapper[5136]: I0320 09:06:50.843421 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-4757p_f1c160ca-0866-46ab-859c-8557dc65e962/cert-manager-cainjector/0.log" Mar 20 09:06:50 crc kubenswrapper[5136]: I0320 09:06:50.939126 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-4l568_6168deec-ad68-4f6d-9736-422a6c7ade08/cert-manager-webhook/0.log" Mar 20 09:07:02 crc kubenswrapper[5136]: I0320 09:07:02.786110 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-rsxkf_3bdd0e88-cfa4-410a-b619-7918a813120d/nmstate-console-plugin/0.log" Mar 20 09:07:02 crc kubenswrapper[5136]: I0320 09:07:02.941028 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-7bqsc_43a9811e-7a36-4f11-9f02-ac3e4c00c42d/nmstate-handler/0.log" Mar 20 09:07:02 crc kubenswrapper[5136]: I0320 09:07:02.996960 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-dxl94_21fd222d-3101-4c49-bbca-611916a57ae8/nmstate-metrics/0.log" Mar 20 09:07:03 crc kubenswrapper[5136]: I0320 09:07:03.000204 5136 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-dxl94_21fd222d-3101-4c49-bbca-611916a57ae8/kube-rbac-proxy/0.log" Mar 20 09:07:03 crc kubenswrapper[5136]: I0320 09:07:03.133537 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-mzffz_94018849-bf2a-47b4-be05-5e9ff0e0dfbd/nmstate-operator/0.log" Mar 20 09:07:03 crc kubenswrapper[5136]: I0320 09:07:03.213253 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-k7799_a6f3f958-ebef-4d11-be1e-1cd2d431006c/nmstate-webhook/0.log" Mar 20 09:07:15 crc kubenswrapper[5136]: I0320 09:07:15.821960 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 09:07:15 crc kubenswrapper[5136]: I0320 09:07:15.822479 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 09:07:15 crc kubenswrapper[5136]: I0320 09:07:15.822524 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 09:07:15 crc kubenswrapper[5136]: I0320 09:07:15.823173 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 
20 09:07:15 crc kubenswrapper[5136]: I0320 09:07:15.823228 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" gracePeriod=600 Mar 20 09:07:15 crc kubenswrapper[5136]: E0320 09:07:15.944308 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:07:16 crc kubenswrapper[5136]: I0320 09:07:16.234690 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-8ff7d675-pw7kx_b1998fd9-5100-4819-83d9-61c453df2121/prometheus-operator/0.log" Mar 20 09:07:16 crc kubenswrapper[5136]: I0320 09:07:16.362760 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" exitCode=0 Mar 20 09:07:16 crc kubenswrapper[5136]: I0320 09:07:16.362801 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51"} Mar 20 09:07:16 crc kubenswrapper[5136]: I0320 09:07:16.362847 5136 scope.go:117] "RemoveContainer" containerID="052911170bf346d7ceda8571bf74edeeb05f27214bc5f82c24d971afe343a42b" Mar 20 09:07:16 crc kubenswrapper[5136]: I0320 09:07:16.363362 5136 scope.go:117] 
"RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:07:16 crc kubenswrapper[5136]: E0320 09:07:16.363649 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:07:16 crc kubenswrapper[5136]: I0320 09:07:16.441200 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k_6e3b7b66-720f-451e-b76c-d14672876450/prometheus-operator-admission-webhook/0.log" Mar 20 09:07:16 crc kubenswrapper[5136]: I0320 09:07:16.441830 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg_e648d436-8985-4d18-83b2-8401e5e3b301/prometheus-operator-admission-webhook/0.log" Mar 20 09:07:16 crc kubenswrapper[5136]: I0320 09:07:16.612954 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-6dd7dd855f-7pqgh_cbf95789-daee-44bb-9d6a-a5b503c0b1e1/operator/0.log" Mar 20 09:07:16 crc kubenswrapper[5136]: I0320 09:07:16.638499 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-7979496b84-bg2n6_0e3c2d08-6905-419d-a0d6-f4935119b632/perses-operator/0.log" Mar 20 09:07:28 crc kubenswrapper[5136]: I0320 09:07:28.420579 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:07:28 crc kubenswrapper[5136]: E0320 09:07:28.421292 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:07:29 crc kubenswrapper[5136]: I0320 09:07:29.188748 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-dzzhq_4c981a48-1ae6-4c06-90ed-4333de6a14d2/kube-rbac-proxy/0.log" Mar 20 09:07:29 crc kubenswrapper[5136]: I0320 09:07:29.407125 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/cp-frr-files/0.log" Mar 20 09:07:29 crc kubenswrapper[5136]: I0320 09:07:29.606685 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/cp-reloader/0.log" Mar 20 09:07:29 crc kubenswrapper[5136]: I0320 09:07:29.635655 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/cp-metrics/0.log" Mar 20 09:07:29 crc kubenswrapper[5136]: I0320 09:07:29.666180 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/cp-frr-files/0.log" Mar 20 09:07:29 crc kubenswrapper[5136]: I0320 09:07:29.788471 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-dzzhq_4c981a48-1ae6-4c06-90ed-4333de6a14d2/controller/0.log" Mar 20 09:07:29 crc kubenswrapper[5136]: I0320 09:07:29.812203 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/cp-reloader/0.log" Mar 20 09:07:29 crc kubenswrapper[5136]: I0320 09:07:29.995929 5136 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/cp-frr-files/0.log" Mar 20 09:07:29 crc kubenswrapper[5136]: I0320 09:07:29.998719 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/cp-metrics/0.log" Mar 20 09:07:30 crc kubenswrapper[5136]: I0320 09:07:30.009263 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/cp-reloader/0.log" Mar 20 09:07:30 crc kubenswrapper[5136]: I0320 09:07:30.019995 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/cp-metrics/0.log" Mar 20 09:07:30 crc kubenswrapper[5136]: I0320 09:07:30.171244 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/cp-frr-files/0.log" Mar 20 09:07:30 crc kubenswrapper[5136]: I0320 09:07:30.200915 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/controller/0.log" Mar 20 09:07:30 crc kubenswrapper[5136]: I0320 09:07:30.209297 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/cp-metrics/0.log" Mar 20 09:07:30 crc kubenswrapper[5136]: I0320 09:07:30.210128 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/cp-reloader/0.log" Mar 20 09:07:30 crc kubenswrapper[5136]: I0320 09:07:30.372137 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/kube-rbac-proxy-frr/0.log" Mar 20 09:07:30 crc kubenswrapper[5136]: I0320 09:07:30.415321 5136 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/kube-rbac-proxy/0.log" Mar 20 09:07:30 crc kubenswrapper[5136]: I0320 09:07:30.449535 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/frr-metrics/0.log" Mar 20 09:07:30 crc kubenswrapper[5136]: I0320 09:07:30.592547 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/reloader/0.log" Mar 20 09:07:30 crc kubenswrapper[5136]: I0320 09:07:30.711708 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-b8fzm_037785f1-4827-4473-8997-20cdc8fec776/frr-k8s-webhook-server/0.log" Mar 20 09:07:30 crc kubenswrapper[5136]: I0320 09:07:30.926201 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-76dc698dd8-wkrqn_8738cb21-39f9-4eeb-90fc-f512d95642f3/manager/0.log" Mar 20 09:07:31 crc kubenswrapper[5136]: I0320 09:07:31.057760 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-787f65f959-lkczj_f9ad7722-3864-444d-92a1-235de7707fe4/webhook-server/0.log" Mar 20 09:07:31 crc kubenswrapper[5136]: I0320 09:07:31.124102 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nrftr_d54436ca-ad6f-41c2-ae88-703f150229fc/kube-rbac-proxy/0.log" Mar 20 09:07:32 crc kubenswrapper[5136]: I0320 09:07:32.036917 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nrftr_d54436ca-ad6f-41c2-ae88-703f150229fc/speaker/0.log" Mar 20 09:07:33 crc kubenswrapper[5136]: I0320 09:07:33.713492 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/frr/0.log" Mar 20 09:07:43 crc kubenswrapper[5136]: I0320 09:07:43.397512 5136 scope.go:117] 
"RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:07:43 crc kubenswrapper[5136]: E0320 09:07:43.398417 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:07:43 crc kubenswrapper[5136]: I0320 09:07:43.983743 5136 scope.go:117] "RemoveContainer" containerID="e9acc33cb6ef33f971afc8e98aee5abda02ae0be42cbb3e0b4beb36ffafb1e4d" Mar 20 09:07:44 crc kubenswrapper[5136]: I0320 09:07:44.039445 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj_3cef4dfa-acd1-43f2-adaa-3af5f28046f9/util/0.log" Mar 20 09:07:44 crc kubenswrapper[5136]: I0320 09:07:44.227484 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj_3cef4dfa-acd1-43f2-adaa-3af5f28046f9/pull/0.log" Mar 20 09:07:44 crc kubenswrapper[5136]: I0320 09:07:44.255464 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj_3cef4dfa-acd1-43f2-adaa-3af5f28046f9/util/0.log" Mar 20 09:07:44 crc kubenswrapper[5136]: I0320 09:07:44.301892 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj_3cef4dfa-acd1-43f2-adaa-3af5f28046f9/pull/0.log" Mar 20 09:07:44 crc kubenswrapper[5136]: I0320 09:07:44.463494 5136 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj_3cef4dfa-acd1-43f2-adaa-3af5f28046f9/util/0.log" Mar 20 09:07:44 crc kubenswrapper[5136]: I0320 09:07:44.493113 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj_3cef4dfa-acd1-43f2-adaa-3af5f28046f9/pull/0.log" Mar 20 09:07:44 crc kubenswrapper[5136]: I0320 09:07:44.505477 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj_3cef4dfa-acd1-43f2-adaa-3af5f28046f9/extract/0.log" Mar 20 09:07:44 crc kubenswrapper[5136]: I0320 09:07:44.629589 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn_900e35e2-638e-47f2-8943-1642ed3ccc59/util/0.log" Mar 20 09:07:44 crc kubenswrapper[5136]: I0320 09:07:44.798958 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn_900e35e2-638e-47f2-8943-1642ed3ccc59/pull/0.log" Mar 20 09:07:44 crc kubenswrapper[5136]: I0320 09:07:44.826485 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn_900e35e2-638e-47f2-8943-1642ed3ccc59/pull/0.log" Mar 20 09:07:44 crc kubenswrapper[5136]: I0320 09:07:44.841412 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn_900e35e2-638e-47f2-8943-1642ed3ccc59/util/0.log" Mar 20 09:07:44 crc kubenswrapper[5136]: I0320 09:07:44.990491 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn_900e35e2-638e-47f2-8943-1642ed3ccc59/util/0.log" Mar 20 
09:07:44 crc kubenswrapper[5136]: I0320 09:07:44.995167 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn_900e35e2-638e-47f2-8943-1642ed3ccc59/extract/0.log" Mar 20 09:07:45 crc kubenswrapper[5136]: I0320 09:07:45.008440 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn_900e35e2-638e-47f2-8943-1642ed3ccc59/pull/0.log" Mar 20 09:07:45 crc kubenswrapper[5136]: I0320 09:07:45.154753 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl_bd4f9716-cbae-44b8-ba7a-44aaa92dae66/util/0.log" Mar 20 09:07:45 crc kubenswrapper[5136]: I0320 09:07:45.318939 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl_bd4f9716-cbae-44b8-ba7a-44aaa92dae66/util/0.log" Mar 20 09:07:45 crc kubenswrapper[5136]: I0320 09:07:45.323906 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl_bd4f9716-cbae-44b8-ba7a-44aaa92dae66/pull/0.log" Mar 20 09:07:45 crc kubenswrapper[5136]: I0320 09:07:45.365563 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl_bd4f9716-cbae-44b8-ba7a-44aaa92dae66/pull/0.log" Mar 20 09:07:45 crc kubenswrapper[5136]: I0320 09:07:45.517185 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl_bd4f9716-cbae-44b8-ba7a-44aaa92dae66/extract/0.log" Mar 20 09:07:45 crc kubenswrapper[5136]: I0320 09:07:45.530333 5136 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl_bd4f9716-cbae-44b8-ba7a-44aaa92dae66/pull/0.log" Mar 20 09:07:45 crc kubenswrapper[5136]: I0320 09:07:45.559881 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl_bd4f9716-cbae-44b8-ba7a-44aaa92dae66/util/0.log" Mar 20 09:07:45 crc kubenswrapper[5136]: I0320 09:07:45.678573 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn_380bd027-6e4d-49b8-af6b-db5cd8b06635/util/0.log" Mar 20 09:07:45 crc kubenswrapper[5136]: I0320 09:07:45.902642 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn_380bd027-6e4d-49b8-af6b-db5cd8b06635/util/0.log" Mar 20 09:07:45 crc kubenswrapper[5136]: I0320 09:07:45.904241 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn_380bd027-6e4d-49b8-af6b-db5cd8b06635/pull/0.log" Mar 20 09:07:45 crc kubenswrapper[5136]: I0320 09:07:45.920474 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn_380bd027-6e4d-49b8-af6b-db5cd8b06635/pull/0.log" Mar 20 09:07:46 crc kubenswrapper[5136]: I0320 09:07:46.060010 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn_380bd027-6e4d-49b8-af6b-db5cd8b06635/util/0.log" Mar 20 09:07:46 crc kubenswrapper[5136]: I0320 09:07:46.084385 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn_380bd027-6e4d-49b8-af6b-db5cd8b06635/pull/0.log" Mar 20 
09:07:46 crc kubenswrapper[5136]: I0320 09:07:46.104901 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn_380bd027-6e4d-49b8-af6b-db5cd8b06635/extract/0.log" Mar 20 09:07:46 crc kubenswrapper[5136]: I0320 09:07:46.231106 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-598hk_6b1bb4bc-89fb-4965-892b-8db898976bc0/extract-utilities/0.log" Mar 20 09:07:46 crc kubenswrapper[5136]: I0320 09:07:46.406669 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-598hk_6b1bb4bc-89fb-4965-892b-8db898976bc0/extract-content/0.log" Mar 20 09:07:46 crc kubenswrapper[5136]: I0320 09:07:46.448297 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-598hk_6b1bb4bc-89fb-4965-892b-8db898976bc0/extract-content/0.log" Mar 20 09:07:46 crc kubenswrapper[5136]: I0320 09:07:46.469740 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-598hk_6b1bb4bc-89fb-4965-892b-8db898976bc0/extract-utilities/0.log" Mar 20 09:07:46 crc kubenswrapper[5136]: I0320 09:07:46.785467 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-598hk_6b1bb4bc-89fb-4965-892b-8db898976bc0/extract-utilities/0.log" Mar 20 09:07:46 crc kubenswrapper[5136]: I0320 09:07:46.812099 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-598hk_6b1bb4bc-89fb-4965-892b-8db898976bc0/extract-content/0.log" Mar 20 09:07:47 crc kubenswrapper[5136]: I0320 09:07:47.057858 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lbsbr_221d005e-2b68-4835-9bcc-69b3d391e37f/extract-utilities/0.log" Mar 20 09:07:47 crc kubenswrapper[5136]: I0320 09:07:47.174131 5136 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lbsbr_221d005e-2b68-4835-9bcc-69b3d391e37f/extract-content/0.log" Mar 20 09:07:47 crc kubenswrapper[5136]: I0320 09:07:47.233488 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lbsbr_221d005e-2b68-4835-9bcc-69b3d391e37f/extract-utilities/0.log" Mar 20 09:07:47 crc kubenswrapper[5136]: I0320 09:07:47.313793 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lbsbr_221d005e-2b68-4835-9bcc-69b3d391e37f/extract-content/0.log" Mar 20 09:07:47 crc kubenswrapper[5136]: I0320 09:07:47.449596 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lbsbr_221d005e-2b68-4835-9bcc-69b3d391e37f/extract-utilities/0.log" Mar 20 09:07:47 crc kubenswrapper[5136]: I0320 09:07:47.482670 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lbsbr_221d005e-2b68-4835-9bcc-69b3d391e37f/extract-content/0.log" Mar 20 09:07:47 crc kubenswrapper[5136]: I0320 09:07:47.740774 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-sl2lb_37de93ad-331e-41ee-8f74-523100e01b09/marketplace-operator/0.log" Mar 20 09:07:47 crc kubenswrapper[5136]: I0320 09:07:47.752437 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lbsbr_221d005e-2b68-4835-9bcc-69b3d391e37f/registry-server/0.log" Mar 20 09:07:47 crc kubenswrapper[5136]: I0320 09:07:47.881266 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-598hk_6b1bb4bc-89fb-4965-892b-8db898976bc0/registry-server/0.log" Mar 20 09:07:47 crc kubenswrapper[5136]: I0320 09:07:47.899955 5136 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-2mt29_75cf71d1-5e27-4089-bf58-1f389690d498/extract-utilities/0.log" Mar 20 09:07:48 crc kubenswrapper[5136]: I0320 09:07:48.048352 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2mt29_75cf71d1-5e27-4089-bf58-1f389690d498/extract-utilities/0.log" Mar 20 09:07:48 crc kubenswrapper[5136]: I0320 09:07:48.060635 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2mt29_75cf71d1-5e27-4089-bf58-1f389690d498/extract-content/0.log" Mar 20 09:07:48 crc kubenswrapper[5136]: I0320 09:07:48.074948 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2mt29_75cf71d1-5e27-4089-bf58-1f389690d498/extract-content/0.log" Mar 20 09:07:48 crc kubenswrapper[5136]: I0320 09:07:48.265283 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2mt29_75cf71d1-5e27-4089-bf58-1f389690d498/extract-content/0.log" Mar 20 09:07:48 crc kubenswrapper[5136]: I0320 09:07:48.274856 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2mt29_75cf71d1-5e27-4089-bf58-1f389690d498/extract-utilities/0.log" Mar 20 09:07:48 crc kubenswrapper[5136]: I0320 09:07:48.340605 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zsmxp_2d0ba076-45a3-4e99-80de-774db592dfc5/extract-utilities/0.log" Mar 20 09:07:48 crc kubenswrapper[5136]: I0320 09:07:48.523163 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zsmxp_2d0ba076-45a3-4e99-80de-774db592dfc5/extract-content/0.log" Mar 20 09:07:48 crc kubenswrapper[5136]: I0320 09:07:48.587869 5136 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-zsmxp_2d0ba076-45a3-4e99-80de-774db592dfc5/extract-utilities/0.log" Mar 20 09:07:48 crc kubenswrapper[5136]: I0320 09:07:48.588028 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2mt29_75cf71d1-5e27-4089-bf58-1f389690d498/registry-server/0.log" Mar 20 09:07:48 crc kubenswrapper[5136]: I0320 09:07:48.596019 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zsmxp_2d0ba076-45a3-4e99-80de-774db592dfc5/extract-content/0.log" Mar 20 09:07:48 crc kubenswrapper[5136]: I0320 09:07:48.716356 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zsmxp_2d0ba076-45a3-4e99-80de-774db592dfc5/extract-utilities/0.log" Mar 20 09:07:48 crc kubenswrapper[5136]: I0320 09:07:48.720577 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zsmxp_2d0ba076-45a3-4e99-80de-774db592dfc5/extract-content/0.log" Mar 20 09:07:49 crc kubenswrapper[5136]: I0320 09:07:49.667453 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zsmxp_2d0ba076-45a3-4e99-80de-774db592dfc5/registry-server/0.log" Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.294490 5136 kuberuntime_container.go:700] "PreStop hook not completed in grace period" pod="openstack/ovsdbserver-nb-2" podUID="c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" containerName="ovsdbserver-nb" containerID="cri-o://f34562d040e4070527490f6678fe0df2ac9672c9008bef502d57527a880a76d5" gracePeriod=300 Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.294886 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-2" podUID="c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" containerName="ovsdbserver-nb" containerID="cri-o://f34562d040e4070527490f6678fe0df2ac9672c9008bef502d57527a880a76d5" gracePeriod=2 Mar 
20 09:07:51 crc kubenswrapper[5136]: E0320 09:07:51.306557 5136 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Mar 20 09:07:51 crc kubenswrapper[5136]: command '/usr/local/bin/container-scripts/cleanup.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/cleanup.sh Mar 20 09:07:51 crc kubenswrapper[5136]: + source /usr/local/bin/container-scripts/functions Mar 20 09:07:51 crc kubenswrapper[5136]: ++ DB_TYPE=nb Mar 20 09:07:51 crc kubenswrapper[5136]: ++ DB_FILE=/etc/ovn/ovnnb_db.db Mar 20 09:07:51 crc kubenswrapper[5136]: + DB_NAME=OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: + [[ nb == \s\b ]] Mar 20 09:07:51 crc kubenswrapper[5136]: ++ hostname Mar 20 09:07:51 crc kubenswrapper[5136]: + [[ ovsdbserver-nb-2 != \o\v\s\d\b\s\e\r\v\e\r\-\n\b\-\0 ]] Mar 20 09:07:51 crc kubenswrapper[5136]: + ovs-appctl -t /tmp/ovnnb_db.ctl cluster/leave OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status 
OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status:
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}'
Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving
Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']'
Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1
Mar 20 09:07:51 crc kubenswrapper[5136]: + true
Mar 20 09:07:51
crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 
20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: 
++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 
09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 
09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 
20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status 
OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc 
kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + 
STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft 
cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc 
kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t 
/tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ 
grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 
crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z 
leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 
crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z 
leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 
Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ 
ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc 
kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep 
Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc 
kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc 
kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 
crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 
20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl 
cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 
09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 
09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 
20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status 
OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc 
Mar 20 09:07:51 crc kubenswrapper[5136]: > execCommand=["/usr/local/bin/container-scripts/cleanup.sh"] containerName="ovsdbserver-nb" pod="openstack/ovsdbserver-nb-2" message=<
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ dirname /usr/local/bin/container-scripts/cleanup.sh
Mar 20 09:07:51 crc kubenswrapper[5136]: + source /usr/local/bin/container-scripts/functions
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ DB_TYPE=nb
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ DB_FILE=/etc/ovn/ovnnb_db.db
Mar 20 09:07:51 crc kubenswrapper[5136]: + DB_NAME=OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: + [[ nb == \s\b ]]
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ hostname
Mar 20 09:07:51 crc kubenswrapper[5136]: + [[ ovsdbserver-nb-2 != \o\v\s\d\b\s\e\r\v\e\r\-\n\b\-\0 ]]
Mar 20 09:07:51 crc kubenswrapper[5136]: + ovs-appctl -t /tmp/ovnnb_db.ctl cluster/leave OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: + true
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status:
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}'
Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving
Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']'
Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1
Mar 20 09:07:51
crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t 
/tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ 
grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 
crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z 
leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 
Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ 
ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc 
kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print 
$2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc 
kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc 
kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 
crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 
20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: 
++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 
09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 
Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1
Mar 20 09:07:51 crc kubenswrapper[5136]: + true
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}'
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status:
Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving
Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']'
Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1
Mar 20 09:07:51 crc kubenswrapper[5136]: + true
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status:
Mar 20 09:07:51 crc kubenswrapper[5136]: ++
awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 
crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc 
kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 
crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 
20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: 
++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 
09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 
09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 
20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status 
OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc 
kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + 
STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft 
cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc 
kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t 
/tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ 
grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc
kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t 
/tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: > Mar 20 09:07:51 crc kubenswrapper[5136]: E0320 09:07:51.306958 5136 kuberuntime_container.go:691] "PreStop hook failed" err=< Mar 20 09:07:51 crc kubenswrapper[5136]: command '/usr/local/bin/container-scripts/cleanup.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/cleanup.sh Mar 20 09:07:51 crc kubenswrapper[5136]: + source /usr/local/bin/container-scripts/functions Mar 20 09:07:51 crc kubenswrapper[5136]: ++ DB_TYPE=nb Mar 20 09:07:51 crc kubenswrapper[5136]: ++ DB_FILE=/etc/ovn/ovnnb_db.db Mar 20 09:07:51 crc kubenswrapper[5136]: + DB_NAME=OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: + [[ nb == 
\s\b ]] Mar 20 09:07:51 crc kubenswrapper[5136]: ++ hostname Mar 20 09:07:51 crc kubenswrapper[5136]: + [[ ovsdbserver-nb-2 != \o\v\s\d\b\s\e\r\v\e\r\-\n\b\-\0 ]] Mar 20 09:07:51 crc kubenswrapper[5136]: + ovs-appctl -t /tmp/ovnnb_db.ctl cluster/leave OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc 
kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving 
-o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 
09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl 
-t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ 
grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 
crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z 
leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 
Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ 
ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc 
kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print 
$2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc 
kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc 
kubenswrapper[5136]: + sleep 1
Mar 20 09:07:51 crc kubenswrapper[5136]: + true
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status:
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}'
Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving
Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']'
[... the same seven trace lines repeat verbatim, one poll per second, while STATUS remains "leaving" ...]
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status:
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print
$2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc 
kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc 
kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 
crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 
20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: 
++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 
09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 
09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 
20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status 
OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc 
kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + 
STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft 
cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc 
kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t 
/tmp/ovnnb_db.ctl cluster/status OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status:
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}'
Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving
Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']'
Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1
Mar 20 09:07:51 crc kubenswrapper[5136]: + true
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 
09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep 
Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc 
kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving 
-o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: > pod="openstack/ovsdbserver-nb-2" podUID="c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" containerName="ovsdbserver-nb" containerID="cri-o://f34562d040e4070527490f6678fe0df2ac9672c9008bef502d57527a880a76d5" Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.611851 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4/ovsdbserver-nb/0.log" Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.611907 5136 generic.go:334] "Generic (PLEG): container finished" podID="c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" containerID="f34562d040e4070527490f6678fe0df2ac9672c9008bef502d57527a880a76d5" exitCode=143 Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.611941 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" 
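The `+`/`++` lines above are a shell `set -x` trace of the ovsdbserver-nb preStop hook: a loop that polls the OVN_Northbound Raft status once per second until the member reports it has left the cluster or the server stops answering. A minimal runnable sketch of that loop, reconstructed only from the trace (the real script is not shown in the log, and `ovs_cluster_status` here is a stub standing in for `ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound`), might look like:

```shell
#!/bin/sh
# Stub for: ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound
# (an assumption for this sketch). It reports "leaving" twice, then
# produces no output, as if the ovsdb server had exited.
ovs_cluster_status() {
  [ "$1" -le 2 ] && printf 'Status: leaving\n'
}

i=0
while true; do
  i=$((i + 1))
  # Same pipeline as the trace: keep the "Status:" line, take field 2.
  STATUS=$(ovs_cluster_status "$i" | grep 'Status:' | awk '{print $2}')
  # Exit when there is no status (server gone) or it reads "left cluster".
  if [ -z "$STATUS" -o "x$STATUS" = "xleft cluster" ]; then
    break
  fi
  sleep 1
done
```

Note that `awk '{print $2}'` keeps only one word, so a `Status: left cluster` line would yield `left`, which never matches the `'xleft cluster'` comparison; judging by the trace, the loop in practice appears to terminate via the `-z` branch once the server stops responding, as happens when the container receives SIGTERM (the `exitCode=143` logged just after).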
event={"ID":"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4","Type":"ContainerDied","Data":"f34562d040e4070527490f6678fe0df2ac9672c9008bef502d57527a880a76d5"} Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.766575 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4/ovsdbserver-nb/0.log" Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.766932 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.936848 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-ovsdb-rundir\") pod \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.936931 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-combined-ca-bundle\") pod \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.936974 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-scripts\") pod \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.937587 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" (UID: "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.937713 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7\") pod \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.937767 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-config\") pod \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.937809 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-ovsdbserver-nb-tls-certs\") pod \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.937864 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-scripts" (OuterVolumeSpecName: "scripts") pod "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" (UID: "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.937897 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-metrics-certs-tls-certs\") pod \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.937933 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hznvw\" (UniqueName: \"kubernetes.io/projected/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-kube-api-access-hznvw\") pod \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.938106 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-config" (OuterVolumeSpecName: "config") pod "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" (UID: "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.938317 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.938362 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.938376 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.942712 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-kube-api-access-hznvw" (OuterVolumeSpecName: "kube-api-access-hznvw") pod "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" (UID: "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4"). InnerVolumeSpecName "kube-api-access-hznvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.948124 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" (UID: "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4"). InnerVolumeSpecName "pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.956664 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" (UID: "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.985180 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" (UID: "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.994781 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" (UID: "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.039738 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.039856 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7\") on node \"crc\" " Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.039876 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.039891 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.039902 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hznvw\" (UniqueName: \"kubernetes.io/projected/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-kube-api-access-hznvw\") on node \"crc\" DevicePath \"\"" Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.072651 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.072847 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7") on node "crc" Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.141125 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7\") on node \"crc\" DevicePath \"\"" Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.622183 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4/ovsdbserver-nb/0.log" Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.622245 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4","Type":"ContainerDied","Data":"3ed0d6ce099ff1f367dca760d285f74c76c709803b8a996aaf1abb5243af3234"} Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.622290 5136 scope.go:117] "RemoveContainer" containerID="aa7d63dd9f5d69196ca03f337b7c8a99ee8f9a0db5fd272afae4861760ebba16" Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.622370 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.650533 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-2"] Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.656642 5136 scope.go:117] "RemoveContainer" containerID="f34562d040e4070527490f6678fe0df2ac9672c9008bef502d57527a880a76d5" Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.658667 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-2"] Mar 20 09:07:54 crc kubenswrapper[5136]: I0320 09:07:54.396594 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:07:54 crc kubenswrapper[5136]: E0320 09:07:54.397175 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:07:54 crc kubenswrapper[5136]: I0320 09:07:54.408567 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" path="/var/lib/kubelet/pods/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4/volumes" Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.136439 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566628-l9k2s"] Mar 20 09:08:00 crc kubenswrapper[5136]: E0320 09:08:00.137579 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" containerName="ovsdbserver-nb" Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.137595 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" 
containerName="ovsdbserver-nb" Mar 20 09:08:00 crc kubenswrapper[5136]: E0320 09:08:00.137621 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" containerName="openstack-network-exporter" Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.137631 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" containerName="openstack-network-exporter" Mar 20 09:08:00 crc kubenswrapper[5136]: E0320 09:08:00.137646 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0" containerName="oc" Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.137654 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0" containerName="oc" Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.137829 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" containerName="ovsdbserver-nb" Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.137857 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0" containerName="oc" Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.137879 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" containerName="openstack-network-exporter" Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.138434 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566628-l9k2s" Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.141064 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.141114 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.141303 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.148390 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566628-l9k2s"] Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.250019 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrk2s\" (UniqueName: \"kubernetes.io/projected/795d143d-8524-469f-bf25-830fe5e73bce-kube-api-access-zrk2s\") pod \"auto-csr-approver-29566628-l9k2s\" (UID: \"795d143d-8524-469f-bf25-830fe5e73bce\") " pod="openshift-infra/auto-csr-approver-29566628-l9k2s" Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.351293 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrk2s\" (UniqueName: \"kubernetes.io/projected/795d143d-8524-469f-bf25-830fe5e73bce-kube-api-access-zrk2s\") pod \"auto-csr-approver-29566628-l9k2s\" (UID: \"795d143d-8524-469f-bf25-830fe5e73bce\") " pod="openshift-infra/auto-csr-approver-29566628-l9k2s" Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.375117 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrk2s\" (UniqueName: \"kubernetes.io/projected/795d143d-8524-469f-bf25-830fe5e73bce-kube-api-access-zrk2s\") pod \"auto-csr-approver-29566628-l9k2s\" (UID: \"795d143d-8524-469f-bf25-830fe5e73bce\") " 
pod="openshift-infra/auto-csr-approver-29566628-l9k2s"
Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.455913 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566628-l9k2s"
Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.774397 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k_6e3b7b66-720f-451e-b76c-d14672876450/prometheus-operator-admission-webhook/0.log"
Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.804153 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-8ff7d675-pw7kx_b1998fd9-5100-4819-83d9-61c453df2121/prometheus-operator/0.log"
Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.807002 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg_e648d436-8985-4d18-83b2-8401e5e3b301/prometheus-operator-admission-webhook/0.log"
Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.894047 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566628-l9k2s"]
Mar 20 09:08:01 crc kubenswrapper[5136]: I0320 09:08:01.008513 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-6dd7dd855f-7pqgh_cbf95789-daee-44bb-9d6a-a5b503c0b1e1/operator/0.log"
Mar 20 09:08:01 crc kubenswrapper[5136]: I0320 09:08:01.058473 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-7979496b84-bg2n6_0e3c2d08-6905-419d-a0d6-f4935119b632/perses-operator/0.log"
Mar 20 09:08:01 crc kubenswrapper[5136]: I0320 09:08:01.681422 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566628-l9k2s" event={"ID":"795d143d-8524-469f-bf25-830fe5e73bce","Type":"ContainerStarted","Data":"e818a2d0d5694696160c1421e3fb3394393ab01522e8e65fc73d2d3a566735fb"}
Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.151608 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j2szc"]
Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.153803 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2szc"
Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.176565 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2szc"]
Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.297967 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbfd87b8-0fff-41eb-a772-a9481ded678f-utilities\") pod \"redhat-marketplace-j2szc\" (UID: \"fbfd87b8-0fff-41eb-a772-a9481ded678f\") " pod="openshift-marketplace/redhat-marketplace-j2szc"
Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.298082 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbfd87b8-0fff-41eb-a772-a9481ded678f-catalog-content\") pod \"redhat-marketplace-j2szc\" (UID: \"fbfd87b8-0fff-41eb-a772-a9481ded678f\") " pod="openshift-marketplace/redhat-marketplace-j2szc"
Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.298115 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn2sw\" (UniqueName: \"kubernetes.io/projected/fbfd87b8-0fff-41eb-a772-a9481ded678f-kube-api-access-xn2sw\") pod \"redhat-marketplace-j2szc\" (UID: \"fbfd87b8-0fff-41eb-a772-a9481ded678f\") " pod="openshift-marketplace/redhat-marketplace-j2szc"
Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.399785 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbfd87b8-0fff-41eb-a772-a9481ded678f-catalog-content\") pod \"redhat-marketplace-j2szc\" (UID: \"fbfd87b8-0fff-41eb-a772-a9481ded678f\") " pod="openshift-marketplace/redhat-marketplace-j2szc"
Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.399887 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn2sw\" (UniqueName: \"kubernetes.io/projected/fbfd87b8-0fff-41eb-a772-a9481ded678f-kube-api-access-xn2sw\") pod \"redhat-marketplace-j2szc\" (UID: \"fbfd87b8-0fff-41eb-a772-a9481ded678f\") " pod="openshift-marketplace/redhat-marketplace-j2szc"
Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.399979 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbfd87b8-0fff-41eb-a772-a9481ded678f-utilities\") pod \"redhat-marketplace-j2szc\" (UID: \"fbfd87b8-0fff-41eb-a772-a9481ded678f\") " pod="openshift-marketplace/redhat-marketplace-j2szc"
Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.400527 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbfd87b8-0fff-41eb-a772-a9481ded678f-catalog-content\") pod \"redhat-marketplace-j2szc\" (UID: \"fbfd87b8-0fff-41eb-a772-a9481ded678f\") " pod="openshift-marketplace/redhat-marketplace-j2szc"
Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.400587 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbfd87b8-0fff-41eb-a772-a9481ded678f-utilities\") pod \"redhat-marketplace-j2szc\" (UID: \"fbfd87b8-0fff-41eb-a772-a9481ded678f\") " pod="openshift-marketplace/redhat-marketplace-j2szc"
Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.425510 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn2sw\" (UniqueName: \"kubernetes.io/projected/fbfd87b8-0fff-41eb-a772-a9481ded678f-kube-api-access-xn2sw\") pod \"redhat-marketplace-j2szc\" (UID: \"fbfd87b8-0fff-41eb-a772-a9481ded678f\") " pod="openshift-marketplace/redhat-marketplace-j2szc"
Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.479334 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2szc"
Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.702669 5136 generic.go:334] "Generic (PLEG): container finished" podID="795d143d-8524-469f-bf25-830fe5e73bce" containerID="ee5576ee1e2eb8b6c2ee48065627daac8c0e821b5702496a43cfbcccf410f4a1" exitCode=0
Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.702759 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566628-l9k2s" event={"ID":"795d143d-8524-469f-bf25-830fe5e73bce","Type":"ContainerDied","Data":"ee5576ee1e2eb8b6c2ee48065627daac8c0e821b5702496a43cfbcccf410f4a1"}
Mar 20 09:08:04 crc kubenswrapper[5136]: I0320 09:08:04.014800 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2szc"]
Mar 20 09:08:04 crc kubenswrapper[5136]: W0320 09:08:04.019275 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbfd87b8_0fff_41eb_a772_a9481ded678f.slice/crio-0dc5ae11dd0443b183d8eb55a27c7c31a97362c70a8bed1b33bd896a54bff997 WatchSource:0}: Error finding container 0dc5ae11dd0443b183d8eb55a27c7c31a97362c70a8bed1b33bd896a54bff997: Status 404 returned error can't find the container with id 0dc5ae11dd0443b183d8eb55a27c7c31a97362c70a8bed1b33bd896a54bff997
Mar 20 09:08:04 crc kubenswrapper[5136]: I0320 09:08:04.715024 5136 generic.go:334] "Generic (PLEG): container finished" podID="fbfd87b8-0fff-41eb-a772-a9481ded678f" containerID="a7680028346410bcefda3f9ea7325d064152605e874ea5979e8c3bb57ca8eec8" exitCode=0
Mar 20 09:08:04 crc kubenswrapper[5136]: I0320 09:08:04.715279 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2szc" event={"ID":"fbfd87b8-0fff-41eb-a772-a9481ded678f","Type":"ContainerDied","Data":"a7680028346410bcefda3f9ea7325d064152605e874ea5979e8c3bb57ca8eec8"}
Mar 20 09:08:04 crc kubenswrapper[5136]: I0320 09:08:04.715698 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2szc" event={"ID":"fbfd87b8-0fff-41eb-a772-a9481ded678f","Type":"ContainerStarted","Data":"0dc5ae11dd0443b183d8eb55a27c7c31a97362c70a8bed1b33bd896a54bff997"}
Mar 20 09:08:05 crc kubenswrapper[5136]: I0320 09:08:05.033638 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566628-l9k2s"
Mar 20 09:08:05 crc kubenswrapper[5136]: I0320 09:08:05.224229 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrk2s\" (UniqueName: \"kubernetes.io/projected/795d143d-8524-469f-bf25-830fe5e73bce-kube-api-access-zrk2s\") pod \"795d143d-8524-469f-bf25-830fe5e73bce\" (UID: \"795d143d-8524-469f-bf25-830fe5e73bce\") "
Mar 20 09:08:05 crc kubenswrapper[5136]: I0320 09:08:05.230887 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/795d143d-8524-469f-bf25-830fe5e73bce-kube-api-access-zrk2s" (OuterVolumeSpecName: "kube-api-access-zrk2s") pod "795d143d-8524-469f-bf25-830fe5e73bce" (UID: "795d143d-8524-469f-bf25-830fe5e73bce"). InnerVolumeSpecName "kube-api-access-zrk2s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:08:05 crc kubenswrapper[5136]: I0320 09:08:05.326920 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrk2s\" (UniqueName: \"kubernetes.io/projected/795d143d-8524-469f-bf25-830fe5e73bce-kube-api-access-zrk2s\") on node \"crc\" DevicePath \"\""
Mar 20 09:08:05 crc kubenswrapper[5136]: I0320 09:08:05.397758 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51"
Mar 20 09:08:05 crc kubenswrapper[5136]: E0320 09:08:05.398288 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 09:08:05 crc kubenswrapper[5136]: I0320 09:08:05.733830 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2szc" event={"ID":"fbfd87b8-0fff-41eb-a772-a9481ded678f","Type":"ContainerStarted","Data":"08dff1d4628188ef66aa86f3deca9118eb1c8e3543076d13d91119a527a9b70d"}
Mar 20 09:08:05 crc kubenswrapper[5136]: I0320 09:08:05.735421 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566628-l9k2s" event={"ID":"795d143d-8524-469f-bf25-830fe5e73bce","Type":"ContainerDied","Data":"e818a2d0d5694696160c1421e3fb3394393ab01522e8e65fc73d2d3a566735fb"}
Mar 20 09:08:05 crc kubenswrapper[5136]: I0320 09:08:05.735471 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e818a2d0d5694696160c1421e3fb3394393ab01522e8e65fc73d2d3a566735fb"
Mar 20 09:08:05 crc kubenswrapper[5136]: I0320 09:08:05.735500 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566628-l9k2s"
Mar 20 09:08:06 crc kubenswrapper[5136]: I0320 09:08:06.128659 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566622-tbs2b"]
Mar 20 09:08:06 crc kubenswrapper[5136]: I0320 09:08:06.137522 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566622-tbs2b"]
Mar 20 09:08:06 crc kubenswrapper[5136]: I0320 09:08:06.406373 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeb9dd63-3112-441b-961e-b61a752527d8" path="/var/lib/kubelet/pods/eeb9dd63-3112-441b-961e-b61a752527d8/volumes"
Mar 20 09:08:06 crc kubenswrapper[5136]: I0320 09:08:06.746612 5136 generic.go:334] "Generic (PLEG): container finished" podID="fbfd87b8-0fff-41eb-a772-a9481ded678f" containerID="08dff1d4628188ef66aa86f3deca9118eb1c8e3543076d13d91119a527a9b70d" exitCode=0
Mar 20 09:08:06 crc kubenswrapper[5136]: I0320 09:08:06.746691 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2szc" event={"ID":"fbfd87b8-0fff-41eb-a772-a9481ded678f","Type":"ContainerDied","Data":"08dff1d4628188ef66aa86f3deca9118eb1c8e3543076d13d91119a527a9b70d"}
Mar 20 09:08:07 crc kubenswrapper[5136]: I0320 09:08:07.757209 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2szc" event={"ID":"fbfd87b8-0fff-41eb-a772-a9481ded678f","Type":"ContainerStarted","Data":"8dad4e723d9c86256ffbc98c7c59661c6f6ec163a86052c6de6e24cf75b6df46"}
Mar 20 09:08:07 crc kubenswrapper[5136]: I0320 09:08:07.786933 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j2szc" podStartSLOduration=2.231745033 podStartE2EDuration="4.786916864s" podCreationTimestamp="2026-03-20 09:08:03 +0000 UTC" firstStartedPulling="2026-03-20 09:08:04.718309177 +0000 UTC m=+8316.977620328" lastFinishedPulling="2026-03-20 09:08:07.273480998 +0000 UTC m=+8319.532792159" observedRunningTime="2026-03-20 09:08:07.782533688 +0000 UTC m=+8320.041844869" watchObservedRunningTime="2026-03-20 09:08:07.786916864 +0000 UTC m=+8320.046228005"
Mar 20 09:08:13 crc kubenswrapper[5136]: I0320 09:08:13.479698 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j2szc"
Mar 20 09:08:13 crc kubenswrapper[5136]: I0320 09:08:13.482305 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j2szc"
Mar 20 09:08:13 crc kubenswrapper[5136]: I0320 09:08:13.525390 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j2szc"
Mar 20 09:08:13 crc kubenswrapper[5136]: I0320 09:08:13.842806 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j2szc"
Mar 20 09:08:13 crc kubenswrapper[5136]: I0320 09:08:13.884491 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2szc"]
Mar 20 09:08:15 crc kubenswrapper[5136]: I0320 09:08:15.817993 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j2szc" podUID="fbfd87b8-0fff-41eb-a772-a9481ded678f" containerName="registry-server" containerID="cri-o://8dad4e723d9c86256ffbc98c7c59661c6f6ec163a86052c6de6e24cf75b6df46" gracePeriod=2
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.186481 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dxsd5"]
Mar 20 09:08:16 crc kubenswrapper[5136]: E0320 09:08:16.187174 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="795d143d-8524-469f-bf25-830fe5e73bce" containerName="oc"
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.187202 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="795d143d-8524-469f-bf25-830fe5e73bce" containerName="oc"
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.187472 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="795d143d-8524-469f-bf25-830fe5e73bce" containerName="oc"
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.189101 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dxsd5"
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.203062 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dxsd5"]
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.234374 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2szc"
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.358879 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbfd87b8-0fff-41eb-a772-a9481ded678f-catalog-content\") pod \"fbfd87b8-0fff-41eb-a772-a9481ded678f\" (UID: \"fbfd87b8-0fff-41eb-a772-a9481ded678f\") "
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.359006 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn2sw\" (UniqueName: \"kubernetes.io/projected/fbfd87b8-0fff-41eb-a772-a9481ded678f-kube-api-access-xn2sw\") pod \"fbfd87b8-0fff-41eb-a772-a9481ded678f\" (UID: \"fbfd87b8-0fff-41eb-a772-a9481ded678f\") "
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.359056 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbfd87b8-0fff-41eb-a772-a9481ded678f-utilities\") pod \"fbfd87b8-0fff-41eb-a772-a9481ded678f\" (UID: \"fbfd87b8-0fff-41eb-a772-a9481ded678f\") "
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.359842 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbfd87b8-0fff-41eb-a772-a9481ded678f-utilities" (OuterVolumeSpecName: "utilities") pod "fbfd87b8-0fff-41eb-a772-a9481ded678f" (UID: "fbfd87b8-0fff-41eb-a772-a9481ded678f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.360073 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21198dc8-ca82-4022-a042-15a080d02f43-catalog-content\") pod \"redhat-operators-dxsd5\" (UID: \"21198dc8-ca82-4022-a042-15a080d02f43\") " pod="openshift-marketplace/redhat-operators-dxsd5"
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.360161 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21198dc8-ca82-4022-a042-15a080d02f43-utilities\") pod \"redhat-operators-dxsd5\" (UID: \"21198dc8-ca82-4022-a042-15a080d02f43\") " pod="openshift-marketplace/redhat-operators-dxsd5"
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.360187 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t29w4\" (UniqueName: \"kubernetes.io/projected/21198dc8-ca82-4022-a042-15a080d02f43-kube-api-access-t29w4\") pod \"redhat-operators-dxsd5\" (UID: \"21198dc8-ca82-4022-a042-15a080d02f43\") " pod="openshift-marketplace/redhat-operators-dxsd5"
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.360355 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbfd87b8-0fff-41eb-a772-a9481ded678f-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.365318 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbfd87b8-0fff-41eb-a772-a9481ded678f-kube-api-access-xn2sw" (OuterVolumeSpecName: "kube-api-access-xn2sw") pod "fbfd87b8-0fff-41eb-a772-a9481ded678f" (UID: "fbfd87b8-0fff-41eb-a772-a9481ded678f"). InnerVolumeSpecName "kube-api-access-xn2sw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.388263 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbfd87b8-0fff-41eb-a772-a9481ded678f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fbfd87b8-0fff-41eb-a772-a9481ded678f" (UID: "fbfd87b8-0fff-41eb-a772-a9481ded678f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.462372 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21198dc8-ca82-4022-a042-15a080d02f43-catalog-content\") pod \"redhat-operators-dxsd5\" (UID: \"21198dc8-ca82-4022-a042-15a080d02f43\") " pod="openshift-marketplace/redhat-operators-dxsd5"
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.462532 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21198dc8-ca82-4022-a042-15a080d02f43-utilities\") pod \"redhat-operators-dxsd5\" (UID: \"21198dc8-ca82-4022-a042-15a080d02f43\") " pod="openshift-marketplace/redhat-operators-dxsd5"
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.462581 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t29w4\" (UniqueName: \"kubernetes.io/projected/21198dc8-ca82-4022-a042-15a080d02f43-kube-api-access-t29w4\") pod \"redhat-operators-dxsd5\" (UID: \"21198dc8-ca82-4022-a042-15a080d02f43\") " pod="openshift-marketplace/redhat-operators-dxsd5"
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.462730 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbfd87b8-0fff-41eb-a772-a9481ded678f-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.462754 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn2sw\" (UniqueName: \"kubernetes.io/projected/fbfd87b8-0fff-41eb-a772-a9481ded678f-kube-api-access-xn2sw\") on node \"crc\" DevicePath \"\""
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.463527 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21198dc8-ca82-4022-a042-15a080d02f43-catalog-content\") pod \"redhat-operators-dxsd5\" (UID: \"21198dc8-ca82-4022-a042-15a080d02f43\") " pod="openshift-marketplace/redhat-operators-dxsd5"
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.463547 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21198dc8-ca82-4022-a042-15a080d02f43-utilities\") pod \"redhat-operators-dxsd5\" (UID: \"21198dc8-ca82-4022-a042-15a080d02f43\") " pod="openshift-marketplace/redhat-operators-dxsd5"
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.482053 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t29w4\" (UniqueName: \"kubernetes.io/projected/21198dc8-ca82-4022-a042-15a080d02f43-kube-api-access-t29w4\") pod \"redhat-operators-dxsd5\" (UID: \"21198dc8-ca82-4022-a042-15a080d02f43\") " pod="openshift-marketplace/redhat-operators-dxsd5"
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.562908 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dxsd5"
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.834882 5136 generic.go:334] "Generic (PLEG): container finished" podID="fbfd87b8-0fff-41eb-a772-a9481ded678f" containerID="8dad4e723d9c86256ffbc98c7c59661c6f6ec163a86052c6de6e24cf75b6df46" exitCode=0
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.834934 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2szc" event={"ID":"fbfd87b8-0fff-41eb-a772-a9481ded678f","Type":"ContainerDied","Data":"8dad4e723d9c86256ffbc98c7c59661c6f6ec163a86052c6de6e24cf75b6df46"}
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.834964 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2szc" event={"ID":"fbfd87b8-0fff-41eb-a772-a9481ded678f","Type":"ContainerDied","Data":"0dc5ae11dd0443b183d8eb55a27c7c31a97362c70a8bed1b33bd896a54bff997"}
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.834991 5136 scope.go:117] "RemoveContainer" containerID="8dad4e723d9c86256ffbc98c7c59661c6f6ec163a86052c6de6e24cf75b6df46"
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.836102 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2szc"
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.869076 5136 scope.go:117] "RemoveContainer" containerID="08dff1d4628188ef66aa86f3deca9118eb1c8e3543076d13d91119a527a9b70d"
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.870104 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2szc"]
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.878459 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2szc"]
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.885606 5136 scope.go:117] "RemoveContainer" containerID="a7680028346410bcefda3f9ea7325d064152605e874ea5979e8c3bb57ca8eec8"
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.902144 5136 scope.go:117] "RemoveContainer" containerID="8dad4e723d9c86256ffbc98c7c59661c6f6ec163a86052c6de6e24cf75b6df46"
Mar 20 09:08:16 crc kubenswrapper[5136]: E0320 09:08:16.902973 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dad4e723d9c86256ffbc98c7c59661c6f6ec163a86052c6de6e24cf75b6df46\": container with ID starting with 8dad4e723d9c86256ffbc98c7c59661c6f6ec163a86052c6de6e24cf75b6df46 not found: ID does not exist" containerID="8dad4e723d9c86256ffbc98c7c59661c6f6ec163a86052c6de6e24cf75b6df46"
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.903173 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dad4e723d9c86256ffbc98c7c59661c6f6ec163a86052c6de6e24cf75b6df46"} err="failed to get container status \"8dad4e723d9c86256ffbc98c7c59661c6f6ec163a86052c6de6e24cf75b6df46\": rpc error: code = NotFound desc = could not find container \"8dad4e723d9c86256ffbc98c7c59661c6f6ec163a86052c6de6e24cf75b6df46\": container with ID starting with 8dad4e723d9c86256ffbc98c7c59661c6f6ec163a86052c6de6e24cf75b6df46 not found: ID does not exist"
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.903284 5136 scope.go:117] "RemoveContainer" containerID="08dff1d4628188ef66aa86f3deca9118eb1c8e3543076d13d91119a527a9b70d"
Mar 20 09:08:16 crc kubenswrapper[5136]: E0320 09:08:16.903741 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08dff1d4628188ef66aa86f3deca9118eb1c8e3543076d13d91119a527a9b70d\": container with ID starting with 08dff1d4628188ef66aa86f3deca9118eb1c8e3543076d13d91119a527a9b70d not found: ID does not exist" containerID="08dff1d4628188ef66aa86f3deca9118eb1c8e3543076d13d91119a527a9b70d"
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.903789 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08dff1d4628188ef66aa86f3deca9118eb1c8e3543076d13d91119a527a9b70d"} err="failed to get container status \"08dff1d4628188ef66aa86f3deca9118eb1c8e3543076d13d91119a527a9b70d\": rpc error: code = NotFound desc = could not find container \"08dff1d4628188ef66aa86f3deca9118eb1c8e3543076d13d91119a527a9b70d\": container with ID starting with 08dff1d4628188ef66aa86f3deca9118eb1c8e3543076d13d91119a527a9b70d not found: ID does not exist"
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.903833 5136 scope.go:117] "RemoveContainer" containerID="a7680028346410bcefda3f9ea7325d064152605e874ea5979e8c3bb57ca8eec8"
Mar 20 09:08:16 crc kubenswrapper[5136]: E0320 09:08:16.904211 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7680028346410bcefda3f9ea7325d064152605e874ea5979e8c3bb57ca8eec8\": container with ID starting with a7680028346410bcefda3f9ea7325d064152605e874ea5979e8c3bb57ca8eec8 not found: ID does not exist" containerID="a7680028346410bcefda3f9ea7325d064152605e874ea5979e8c3bb57ca8eec8"
Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.904248 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7680028346410bcefda3f9ea7325d064152605e874ea5979e8c3bb57ca8eec8"} err="failed to get container status \"a7680028346410bcefda3f9ea7325d064152605e874ea5979e8c3bb57ca8eec8\": rpc error: code = NotFound desc = could not find container \"a7680028346410bcefda3f9ea7325d064152605e874ea5979e8c3bb57ca8eec8\": container with ID starting with a7680028346410bcefda3f9ea7325d064152605e874ea5979e8c3bb57ca8eec8 not found: ID does not exist"
Mar 20 09:08:17 crc kubenswrapper[5136]: I0320 09:08:17.006611 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dxsd5"]
Mar 20 09:08:17 crc kubenswrapper[5136]: I0320 09:08:17.397326 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51"
Mar 20 09:08:17 crc kubenswrapper[5136]: E0320 09:08:17.397871 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 09:08:17 crc kubenswrapper[5136]: I0320 09:08:17.841852 5136 generic.go:334] "Generic (PLEG): container finished" podID="21198dc8-ca82-4022-a042-15a080d02f43" containerID="0641853ff5b42a9782f7027c13abdd6a6c55215614ec6ef3320fc869bccf9fce" exitCode=0
Mar 20 09:08:17 crc kubenswrapper[5136]: I0320 09:08:17.841914 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxsd5" event={"ID":"21198dc8-ca82-4022-a042-15a080d02f43","Type":"ContainerDied","Data":"0641853ff5b42a9782f7027c13abdd6a6c55215614ec6ef3320fc869bccf9fce"}
Mar 20 09:08:17 crc kubenswrapper[5136]: I0320 09:08:17.841939 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxsd5" event={"ID":"21198dc8-ca82-4022-a042-15a080d02f43","Type":"ContainerStarted","Data":"404b76685edde1b05718f90ae969f60b35ee2dd05dd9ea25b9aa6b994a8e3f2c"}
Mar 20 09:08:18 crc kubenswrapper[5136]: I0320 09:08:18.408573 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbfd87b8-0fff-41eb-a772-a9481ded678f" path="/var/lib/kubelet/pods/fbfd87b8-0fff-41eb-a772-a9481ded678f/volumes"
Mar 20 09:08:18 crc kubenswrapper[5136]: I0320 09:08:18.853133 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxsd5" event={"ID":"21198dc8-ca82-4022-a042-15a080d02f43","Type":"ContainerStarted","Data":"8411e946201b01caca96a2ea3b91ba7a0c90acb1ff8bd128c91466c83bf62805"}
Mar 20 09:08:22 crc kubenswrapper[5136]: I0320 09:08:22.885250 5136 generic.go:334] "Generic (PLEG): container finished" podID="21198dc8-ca82-4022-a042-15a080d02f43" containerID="8411e946201b01caca96a2ea3b91ba7a0c90acb1ff8bd128c91466c83bf62805" exitCode=0
Mar 20 09:08:22 crc kubenswrapper[5136]: I0320 09:08:22.885578 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxsd5" event={"ID":"21198dc8-ca82-4022-a042-15a080d02f43","Type":"ContainerDied","Data":"8411e946201b01caca96a2ea3b91ba7a0c90acb1ff8bd128c91466c83bf62805"}
Mar 20 09:08:23 crc kubenswrapper[5136]: I0320 09:08:23.894542 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxsd5" event={"ID":"21198dc8-ca82-4022-a042-15a080d02f43","Type":"ContainerStarted","Data":"9a20fe5703be0703fe37136fddaf3b252c87d7855ab13dd71377a48813fb83fe"}
Mar 20 09:08:23 crc kubenswrapper[5136]: I0320 09:08:23.928988 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dxsd5" podStartSLOduration=2.416432287 podStartE2EDuration="7.928961302s" podCreationTimestamp="2026-03-20 09:08:16 +0000 UTC" firstStartedPulling="2026-03-20 09:08:17.843139482 +0000 UTC m=+8330.102450633" lastFinishedPulling="2026-03-20 09:08:23.355668497 +0000 UTC m=+8335.614979648" observedRunningTime="2026-03-20 09:08:23.911653855 +0000 UTC m=+8336.170965036" watchObservedRunningTime="2026-03-20 09:08:23.928961302 +0000 UTC m=+8336.188272483"
Mar 20 09:08:26 crc kubenswrapper[5136]: I0320 09:08:26.563309 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dxsd5"
Mar 20 09:08:26 crc kubenswrapper[5136]: I0320 09:08:26.563648 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dxsd5"
Mar 20 09:08:27 crc kubenswrapper[5136]: I0320 09:08:27.612490 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dxsd5" podUID="21198dc8-ca82-4022-a042-15a080d02f43" containerName="registry-server" probeResult="failure" output=<
Mar 20 09:08:27 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s
Mar 20 09:08:27 crc kubenswrapper[5136]: >
Mar 20 09:08:29 crc kubenswrapper[5136]: I0320 09:08:29.397443 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51"
Mar 20 09:08:29 crc kubenswrapper[5136]: E0320 09:08:29.398010 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 09:08:36 crc kubenswrapper[5136]: I0320 09:08:36.610858 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dxsd5"
Mar 20 09:08:36 crc kubenswrapper[5136]: I0320 09:08:36.662000 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dxsd5"
Mar 20 09:08:36 crc kubenswrapper[5136]: I0320 09:08:36.852485 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dxsd5"]
Mar 20 09:08:38 crc kubenswrapper[5136]: I0320 09:08:38.024718 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dxsd5" podUID="21198dc8-ca82-4022-a042-15a080d02f43" containerName="registry-server" containerID="cri-o://9a20fe5703be0703fe37136fddaf3b252c87d7855ab13dd71377a48813fb83fe" gracePeriod=2
Mar 20 09:08:38 crc kubenswrapper[5136]: I0320 09:08:38.406579 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dxsd5"
Mar 20 09:08:38 crc kubenswrapper[5136]: I0320 09:08:38.479890 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21198dc8-ca82-4022-a042-15a080d02f43-utilities\") pod \"21198dc8-ca82-4022-a042-15a080d02f43\" (UID: \"21198dc8-ca82-4022-a042-15a080d02f43\") "
Mar 20 09:08:38 crc kubenswrapper[5136]: I0320 09:08:38.479981 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21198dc8-ca82-4022-a042-15a080d02f43-catalog-content\") pod \"21198dc8-ca82-4022-a042-15a080d02f43\" (UID: \"21198dc8-ca82-4022-a042-15a080d02f43\") "
Mar 20 09:08:38 crc kubenswrapper[5136]: I0320 09:08:38.480079 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t29w4\" (UniqueName: \"kubernetes.io/projected/21198dc8-ca82-4022-a042-15a080d02f43-kube-api-access-t29w4\") pod \"21198dc8-ca82-4022-a042-15a080d02f43\" (UID: \"21198dc8-ca82-4022-a042-15a080d02f43\") "
Mar 20 09:08:38 crc kubenswrapper[5136]: I0320 09:08:38.480908 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21198dc8-ca82-4022-a042-15a080d02f43-utilities" (OuterVolumeSpecName: "utilities") pod "21198dc8-ca82-4022-a042-15a080d02f43" (UID: "21198dc8-ca82-4022-a042-15a080d02f43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 09:08:38 crc kubenswrapper[5136]: I0320 09:08:38.499824 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21198dc8-ca82-4022-a042-15a080d02f43-kube-api-access-t29w4" (OuterVolumeSpecName: "kube-api-access-t29w4") pod "21198dc8-ca82-4022-a042-15a080d02f43" (UID: "21198dc8-ca82-4022-a042-15a080d02f43"). InnerVolumeSpecName "kube-api-access-t29w4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:08:38 crc kubenswrapper[5136]: I0320 09:08:38.581278 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t29w4\" (UniqueName: \"kubernetes.io/projected/21198dc8-ca82-4022-a042-15a080d02f43-kube-api-access-t29w4\") on node \"crc\" DevicePath \"\""
Mar 20 09:08:38 crc kubenswrapper[5136]: I0320 09:08:38.581320 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21198dc8-ca82-4022-a042-15a080d02f43-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 09:08:38 crc kubenswrapper[5136]: I0320 09:08:38.610690 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21198dc8-ca82-4022-a042-15a080d02f43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21198dc8-ca82-4022-a042-15a080d02f43" (UID: "21198dc8-ca82-4022-a042-15a080d02f43"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 09:08:38 crc kubenswrapper[5136]: I0320 09:08:38.683069 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21198dc8-ca82-4022-a042-15a080d02f43-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.039662 5136 generic.go:334] "Generic (PLEG): container finished" podID="21198dc8-ca82-4022-a042-15a080d02f43" containerID="9a20fe5703be0703fe37136fddaf3b252c87d7855ab13dd71377a48813fb83fe" exitCode=0
Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.039747 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxsd5" event={"ID":"21198dc8-ca82-4022-a042-15a080d02f43","Type":"ContainerDied","Data":"9a20fe5703be0703fe37136fddaf3b252c87d7855ab13dd71377a48813fb83fe"}
Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.039793 5136 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-dxsd5" Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.039843 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxsd5" event={"ID":"21198dc8-ca82-4022-a042-15a080d02f43","Type":"ContainerDied","Data":"404b76685edde1b05718f90ae969f60b35ee2dd05dd9ea25b9aa6b994a8e3f2c"} Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.039891 5136 scope.go:117] "RemoveContainer" containerID="9a20fe5703be0703fe37136fddaf3b252c87d7855ab13dd71377a48813fb83fe" Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.064954 5136 scope.go:117] "RemoveContainer" containerID="8411e946201b01caca96a2ea3b91ba7a0c90acb1ff8bd128c91466c83bf62805" Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.094405 5136 scope.go:117] "RemoveContainer" containerID="0641853ff5b42a9782f7027c13abdd6a6c55215614ec6ef3320fc869bccf9fce" Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.102464 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dxsd5"] Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.110411 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dxsd5"] Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.138182 5136 scope.go:117] "RemoveContainer" containerID="9a20fe5703be0703fe37136fddaf3b252c87d7855ab13dd71377a48813fb83fe" Mar 20 09:08:39 crc kubenswrapper[5136]: E0320 09:08:39.139112 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a20fe5703be0703fe37136fddaf3b252c87d7855ab13dd71377a48813fb83fe\": container with ID starting with 9a20fe5703be0703fe37136fddaf3b252c87d7855ab13dd71377a48813fb83fe not found: ID does not exist" containerID="9a20fe5703be0703fe37136fddaf3b252c87d7855ab13dd71377a48813fb83fe" Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.139201 5136 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a20fe5703be0703fe37136fddaf3b252c87d7855ab13dd71377a48813fb83fe"} err="failed to get container status \"9a20fe5703be0703fe37136fddaf3b252c87d7855ab13dd71377a48813fb83fe\": rpc error: code = NotFound desc = could not find container \"9a20fe5703be0703fe37136fddaf3b252c87d7855ab13dd71377a48813fb83fe\": container with ID starting with 9a20fe5703be0703fe37136fddaf3b252c87d7855ab13dd71377a48813fb83fe not found: ID does not exist" Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.139239 5136 scope.go:117] "RemoveContainer" containerID="8411e946201b01caca96a2ea3b91ba7a0c90acb1ff8bd128c91466c83bf62805" Mar 20 09:08:39 crc kubenswrapper[5136]: E0320 09:08:39.139622 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8411e946201b01caca96a2ea3b91ba7a0c90acb1ff8bd128c91466c83bf62805\": container with ID starting with 8411e946201b01caca96a2ea3b91ba7a0c90acb1ff8bd128c91466c83bf62805 not found: ID does not exist" containerID="8411e946201b01caca96a2ea3b91ba7a0c90acb1ff8bd128c91466c83bf62805" Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.139667 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8411e946201b01caca96a2ea3b91ba7a0c90acb1ff8bd128c91466c83bf62805"} err="failed to get container status \"8411e946201b01caca96a2ea3b91ba7a0c90acb1ff8bd128c91466c83bf62805\": rpc error: code = NotFound desc = could not find container \"8411e946201b01caca96a2ea3b91ba7a0c90acb1ff8bd128c91466c83bf62805\": container with ID starting with 8411e946201b01caca96a2ea3b91ba7a0c90acb1ff8bd128c91466c83bf62805 not found: ID does not exist" Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.139695 5136 scope.go:117] "RemoveContainer" containerID="0641853ff5b42a9782f7027c13abdd6a6c55215614ec6ef3320fc869bccf9fce" Mar 20 09:08:39 crc kubenswrapper[5136]: E0320 
09:08:39.140703 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0641853ff5b42a9782f7027c13abdd6a6c55215614ec6ef3320fc869bccf9fce\": container with ID starting with 0641853ff5b42a9782f7027c13abdd6a6c55215614ec6ef3320fc869bccf9fce not found: ID does not exist" containerID="0641853ff5b42a9782f7027c13abdd6a6c55215614ec6ef3320fc869bccf9fce" Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.140745 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0641853ff5b42a9782f7027c13abdd6a6c55215614ec6ef3320fc869bccf9fce"} err="failed to get container status \"0641853ff5b42a9782f7027c13abdd6a6c55215614ec6ef3320fc869bccf9fce\": rpc error: code = NotFound desc = could not find container \"0641853ff5b42a9782f7027c13abdd6a6c55215614ec6ef3320fc869bccf9fce\": container with ID starting with 0641853ff5b42a9782f7027c13abdd6a6c55215614ec6ef3320fc869bccf9fce not found: ID does not exist" Mar 20 09:08:40 crc kubenswrapper[5136]: I0320 09:08:40.401343 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:08:40 crc kubenswrapper[5136]: E0320 09:08:40.401512 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:08:40 crc kubenswrapper[5136]: I0320 09:08:40.407799 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21198dc8-ca82-4022-a042-15a080d02f43" path="/var/lib/kubelet/pods/21198dc8-ca82-4022-a042-15a080d02f43/volumes" Mar 20 09:08:44 crc kubenswrapper[5136]: I0320 09:08:44.063971 
5136 scope.go:117] "RemoveContainer" containerID="d110e85766974db9b00f23e4ec0b43a5d95e3bc9caa9f95ded6497351baab885" Mar 20 09:08:52 crc kubenswrapper[5136]: I0320 09:08:52.402765 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:08:52 crc kubenswrapper[5136]: E0320 09:08:52.403646 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:09:06 crc kubenswrapper[5136]: I0320 09:09:06.245895 5136 generic.go:334] "Generic (PLEG): container finished" podID="ca8086a5-288f-4e6b-80ae-07842239f3a9" containerID="f062bb8f36e86ce4d24e41f10a6b67a6be5f0f5fca3b905ceabe12f658332a32" exitCode=0 Mar 20 09:09:06 crc kubenswrapper[5136]: I0320 09:09:06.245965 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ppsnr/must-gather-8lzbv" event={"ID":"ca8086a5-288f-4e6b-80ae-07842239f3a9","Type":"ContainerDied","Data":"f062bb8f36e86ce4d24e41f10a6b67a6be5f0f5fca3b905ceabe12f658332a32"} Mar 20 09:09:06 crc kubenswrapper[5136]: I0320 09:09:06.247744 5136 scope.go:117] "RemoveContainer" containerID="f062bb8f36e86ce4d24e41f10a6b67a6be5f0f5fca3b905ceabe12f658332a32" Mar 20 09:09:07 crc kubenswrapper[5136]: I0320 09:09:07.109097 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ppsnr_must-gather-8lzbv_ca8086a5-288f-4e6b-80ae-07842239f3a9/gather/0.log" Mar 20 09:09:07 crc kubenswrapper[5136]: I0320 09:09:07.397051 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:09:07 crc kubenswrapper[5136]: E0320 
09:09:07.397303 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:09:14 crc kubenswrapper[5136]: I0320 09:09:14.206230 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ppsnr/must-gather-8lzbv"] Mar 20 09:09:14 crc kubenswrapper[5136]: I0320 09:09:14.207039 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-ppsnr/must-gather-8lzbv" podUID="ca8086a5-288f-4e6b-80ae-07842239f3a9" containerName="copy" containerID="cri-o://908ad886f5a62adfaa68fa4a7daa63fc90813986271a90584d8bbe3e41a99307" gracePeriod=2 Mar 20 09:09:14 crc kubenswrapper[5136]: I0320 09:09:14.214049 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ppsnr/must-gather-8lzbv"] Mar 20 09:09:14 crc kubenswrapper[5136]: I0320 09:09:14.679682 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ppsnr_must-gather-8lzbv_ca8086a5-288f-4e6b-80ae-07842239f3a9/copy/0.log" Mar 20 09:09:14 crc kubenswrapper[5136]: I0320 09:09:14.680523 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ppsnr/must-gather-8lzbv" Mar 20 09:09:14 crc kubenswrapper[5136]: I0320 09:09:14.766016 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ca8086a5-288f-4e6b-80ae-07842239f3a9-must-gather-output\") pod \"ca8086a5-288f-4e6b-80ae-07842239f3a9\" (UID: \"ca8086a5-288f-4e6b-80ae-07842239f3a9\") " Mar 20 09:09:14 crc kubenswrapper[5136]: I0320 09:09:14.766144 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qnw8\" (UniqueName: \"kubernetes.io/projected/ca8086a5-288f-4e6b-80ae-07842239f3a9-kube-api-access-7qnw8\") pod \"ca8086a5-288f-4e6b-80ae-07842239f3a9\" (UID: \"ca8086a5-288f-4e6b-80ae-07842239f3a9\") " Mar 20 09:09:14 crc kubenswrapper[5136]: I0320 09:09:14.774002 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca8086a5-288f-4e6b-80ae-07842239f3a9-kube-api-access-7qnw8" (OuterVolumeSpecName: "kube-api-access-7qnw8") pod "ca8086a5-288f-4e6b-80ae-07842239f3a9" (UID: "ca8086a5-288f-4e6b-80ae-07842239f3a9"). InnerVolumeSpecName "kube-api-access-7qnw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:09:14 crc kubenswrapper[5136]: I0320 09:09:14.867798 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qnw8\" (UniqueName: \"kubernetes.io/projected/ca8086a5-288f-4e6b-80ae-07842239f3a9-kube-api-access-7qnw8\") on node \"crc\" DevicePath \"\"" Mar 20 09:09:14 crc kubenswrapper[5136]: I0320 09:09:14.889184 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca8086a5-288f-4e6b-80ae-07842239f3a9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ca8086a5-288f-4e6b-80ae-07842239f3a9" (UID: "ca8086a5-288f-4e6b-80ae-07842239f3a9"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:09:14 crc kubenswrapper[5136]: I0320 09:09:14.969599 5136 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ca8086a5-288f-4e6b-80ae-07842239f3a9-must-gather-output\") on node \"crc\" DevicePath \"\"" Mar 20 09:09:15 crc kubenswrapper[5136]: I0320 09:09:15.335762 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ppsnr_must-gather-8lzbv_ca8086a5-288f-4e6b-80ae-07842239f3a9/copy/0.log" Mar 20 09:09:15 crc kubenswrapper[5136]: I0320 09:09:15.337663 5136 generic.go:334] "Generic (PLEG): container finished" podID="ca8086a5-288f-4e6b-80ae-07842239f3a9" containerID="908ad886f5a62adfaa68fa4a7daa63fc90813986271a90584d8bbe3e41a99307" exitCode=143 Mar 20 09:09:15 crc kubenswrapper[5136]: I0320 09:09:15.337714 5136 scope.go:117] "RemoveContainer" containerID="908ad886f5a62adfaa68fa4a7daa63fc90813986271a90584d8bbe3e41a99307" Mar 20 09:09:15 crc kubenswrapper[5136]: I0320 09:09:15.337860 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ppsnr/must-gather-8lzbv" Mar 20 09:09:15 crc kubenswrapper[5136]: I0320 09:09:15.359758 5136 scope.go:117] "RemoveContainer" containerID="f062bb8f36e86ce4d24e41f10a6b67a6be5f0f5fca3b905ceabe12f658332a32" Mar 20 09:09:15 crc kubenswrapper[5136]: I0320 09:09:15.422431 5136 scope.go:117] "RemoveContainer" containerID="908ad886f5a62adfaa68fa4a7daa63fc90813986271a90584d8bbe3e41a99307" Mar 20 09:09:15 crc kubenswrapper[5136]: E0320 09:09:15.422981 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"908ad886f5a62adfaa68fa4a7daa63fc90813986271a90584d8bbe3e41a99307\": container with ID starting with 908ad886f5a62adfaa68fa4a7daa63fc90813986271a90584d8bbe3e41a99307 not found: ID does not exist" containerID="908ad886f5a62adfaa68fa4a7daa63fc90813986271a90584d8bbe3e41a99307" Mar 20 09:09:15 crc kubenswrapper[5136]: I0320 09:09:15.423010 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"908ad886f5a62adfaa68fa4a7daa63fc90813986271a90584d8bbe3e41a99307"} err="failed to get container status \"908ad886f5a62adfaa68fa4a7daa63fc90813986271a90584d8bbe3e41a99307\": rpc error: code = NotFound desc = could not find container \"908ad886f5a62adfaa68fa4a7daa63fc90813986271a90584d8bbe3e41a99307\": container with ID starting with 908ad886f5a62adfaa68fa4a7daa63fc90813986271a90584d8bbe3e41a99307 not found: ID does not exist" Mar 20 09:09:15 crc kubenswrapper[5136]: I0320 09:09:15.423032 5136 scope.go:117] "RemoveContainer" containerID="f062bb8f36e86ce4d24e41f10a6b67a6be5f0f5fca3b905ceabe12f658332a32" Mar 20 09:09:15 crc kubenswrapper[5136]: E0320 09:09:15.424335 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f062bb8f36e86ce4d24e41f10a6b67a6be5f0f5fca3b905ceabe12f658332a32\": container with ID starting with 
f062bb8f36e86ce4d24e41f10a6b67a6be5f0f5fca3b905ceabe12f658332a32 not found: ID does not exist" containerID="f062bb8f36e86ce4d24e41f10a6b67a6be5f0f5fca3b905ceabe12f658332a32" Mar 20 09:09:15 crc kubenswrapper[5136]: I0320 09:09:15.424409 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f062bb8f36e86ce4d24e41f10a6b67a6be5f0f5fca3b905ceabe12f658332a32"} err="failed to get container status \"f062bb8f36e86ce4d24e41f10a6b67a6be5f0f5fca3b905ceabe12f658332a32\": rpc error: code = NotFound desc = could not find container \"f062bb8f36e86ce4d24e41f10a6b67a6be5f0f5fca3b905ceabe12f658332a32\": container with ID starting with f062bb8f36e86ce4d24e41f10a6b67a6be5f0f5fca3b905ceabe12f658332a32 not found: ID does not exist" Mar 20 09:09:16 crc kubenswrapper[5136]: I0320 09:09:16.405784 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca8086a5-288f-4e6b-80ae-07842239f3a9" path="/var/lib/kubelet/pods/ca8086a5-288f-4e6b-80ae-07842239f3a9/volumes" Mar 20 09:09:18 crc kubenswrapper[5136]: I0320 09:09:18.402493 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:09:18 crc kubenswrapper[5136]: E0320 09:09:18.402931 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:09:32 crc kubenswrapper[5136]: I0320 09:09:32.397209 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:09:32 crc kubenswrapper[5136]: E0320 09:09:32.397954 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:09:43 crc kubenswrapper[5136]: I0320 09:09:43.396494 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:09:43 crc kubenswrapper[5136]: E0320 09:09:43.397216 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:09:55 crc kubenswrapper[5136]: I0320 09:09:55.396535 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:09:55 crc kubenswrapper[5136]: E0320 09:09:55.397280 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.139547 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566630-qkw29"] Mar 20 09:10:00 crc kubenswrapper[5136]: E0320 09:10:00.140249 5136 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="fbfd87b8-0fff-41eb-a772-a9481ded678f" containerName="extract-utilities" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.140265 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbfd87b8-0fff-41eb-a772-a9481ded678f" containerName="extract-utilities" Mar 20 09:10:00 crc kubenswrapper[5136]: E0320 09:10:00.140279 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbfd87b8-0fff-41eb-a772-a9481ded678f" containerName="extract-content" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.140286 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbfd87b8-0fff-41eb-a772-a9481ded678f" containerName="extract-content" Mar 20 09:10:00 crc kubenswrapper[5136]: E0320 09:10:00.140303 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca8086a5-288f-4e6b-80ae-07842239f3a9" containerName="gather" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.140312 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca8086a5-288f-4e6b-80ae-07842239f3a9" containerName="gather" Mar 20 09:10:00 crc kubenswrapper[5136]: E0320 09:10:00.140327 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21198dc8-ca82-4022-a042-15a080d02f43" containerName="extract-content" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.140366 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="21198dc8-ca82-4022-a042-15a080d02f43" containerName="extract-content" Mar 20 09:10:00 crc kubenswrapper[5136]: E0320 09:10:00.140392 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21198dc8-ca82-4022-a042-15a080d02f43" containerName="registry-server" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.140400 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="21198dc8-ca82-4022-a042-15a080d02f43" containerName="registry-server" Mar 20 09:10:00 crc kubenswrapper[5136]: E0320 09:10:00.140412 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ca8086a5-288f-4e6b-80ae-07842239f3a9" containerName="copy" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.140420 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca8086a5-288f-4e6b-80ae-07842239f3a9" containerName="copy" Mar 20 09:10:00 crc kubenswrapper[5136]: E0320 09:10:00.140434 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbfd87b8-0fff-41eb-a772-a9481ded678f" containerName="registry-server" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.140442 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbfd87b8-0fff-41eb-a772-a9481ded678f" containerName="registry-server" Mar 20 09:10:00 crc kubenswrapper[5136]: E0320 09:10:00.140452 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21198dc8-ca82-4022-a042-15a080d02f43" containerName="extract-utilities" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.140460 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="21198dc8-ca82-4022-a042-15a080d02f43" containerName="extract-utilities" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.140629 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbfd87b8-0fff-41eb-a772-a9481ded678f" containerName="registry-server" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.140642 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca8086a5-288f-4e6b-80ae-07842239f3a9" containerName="copy" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.140661 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="21198dc8-ca82-4022-a042-15a080d02f43" containerName="registry-server" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.140676 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca8086a5-288f-4e6b-80ae-07842239f3a9" containerName="gather" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.141249 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566630-qkw29" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.144109 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.146731 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566630-qkw29"] Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.148044 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.149105 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.262336 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prnjv\" (UniqueName: \"kubernetes.io/projected/e61a3995-2869-48ee-b013-6698bf7a7ec3-kube-api-access-prnjv\") pod \"auto-csr-approver-29566630-qkw29\" (UID: \"e61a3995-2869-48ee-b013-6698bf7a7ec3\") " pod="openshift-infra/auto-csr-approver-29566630-qkw29" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.364060 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prnjv\" (UniqueName: \"kubernetes.io/projected/e61a3995-2869-48ee-b013-6698bf7a7ec3-kube-api-access-prnjv\") pod \"auto-csr-approver-29566630-qkw29\" (UID: \"e61a3995-2869-48ee-b013-6698bf7a7ec3\") " pod="openshift-infra/auto-csr-approver-29566630-qkw29" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.382249 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prnjv\" (UniqueName: \"kubernetes.io/projected/e61a3995-2869-48ee-b013-6698bf7a7ec3-kube-api-access-prnjv\") pod \"auto-csr-approver-29566630-qkw29\" (UID: \"e61a3995-2869-48ee-b013-6698bf7a7ec3\") " 
pod="openshift-infra/auto-csr-approver-29566630-qkw29" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.471397 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566630-qkw29" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.904780 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566630-qkw29"] Mar 20 09:10:01 crc kubenswrapper[5136]: I0320 09:10:01.736028 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566630-qkw29" event={"ID":"e61a3995-2869-48ee-b013-6698bf7a7ec3","Type":"ContainerStarted","Data":"16ac9a129f619a136d15dfc6b650e6952c26e592dfdef07e7627b286e34e745f"} Mar 20 09:10:02 crc kubenswrapper[5136]: I0320 09:10:02.744752 5136 generic.go:334] "Generic (PLEG): container finished" podID="e61a3995-2869-48ee-b013-6698bf7a7ec3" containerID="f1027819ae6f51f7f443cda60bb19c5e9832f7b10ca947cb4577ab36caa8c50e" exitCode=0 Mar 20 09:10:02 crc kubenswrapper[5136]: I0320 09:10:02.745143 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566630-qkw29" event={"ID":"e61a3995-2869-48ee-b013-6698bf7a7ec3","Type":"ContainerDied","Data":"f1027819ae6f51f7f443cda60bb19c5e9832f7b10ca947cb4577ab36caa8c50e"} Mar 20 09:10:04 crc kubenswrapper[5136]: I0320 09:10:04.038437 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566630-qkw29" Mar 20 09:10:04 crc kubenswrapper[5136]: I0320 09:10:04.118748 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prnjv\" (UniqueName: \"kubernetes.io/projected/e61a3995-2869-48ee-b013-6698bf7a7ec3-kube-api-access-prnjv\") pod \"e61a3995-2869-48ee-b013-6698bf7a7ec3\" (UID: \"e61a3995-2869-48ee-b013-6698bf7a7ec3\") " Mar 20 09:10:04 crc kubenswrapper[5136]: I0320 09:10:04.125318 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e61a3995-2869-48ee-b013-6698bf7a7ec3-kube-api-access-prnjv" (OuterVolumeSpecName: "kube-api-access-prnjv") pod "e61a3995-2869-48ee-b013-6698bf7a7ec3" (UID: "e61a3995-2869-48ee-b013-6698bf7a7ec3"). InnerVolumeSpecName "kube-api-access-prnjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:10:04 crc kubenswrapper[5136]: I0320 09:10:04.220512 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prnjv\" (UniqueName: \"kubernetes.io/projected/e61a3995-2869-48ee-b013-6698bf7a7ec3-kube-api-access-prnjv\") on node \"crc\" DevicePath \"\"" Mar 20 09:10:04 crc kubenswrapper[5136]: I0320 09:10:04.761072 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566630-qkw29" event={"ID":"e61a3995-2869-48ee-b013-6698bf7a7ec3","Type":"ContainerDied","Data":"16ac9a129f619a136d15dfc6b650e6952c26e592dfdef07e7627b286e34e745f"} Mar 20 09:10:04 crc kubenswrapper[5136]: I0320 09:10:04.761106 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16ac9a129f619a136d15dfc6b650e6952c26e592dfdef07e7627b286e34e745f" Mar 20 09:10:04 crc kubenswrapper[5136]: I0320 09:10:04.761106 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566630-qkw29" Mar 20 09:10:05 crc kubenswrapper[5136]: I0320 09:10:05.113068 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566624-n9gpj"] Mar 20 09:10:05 crc kubenswrapper[5136]: I0320 09:10:05.118893 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566624-n9gpj"] Mar 20 09:10:06 crc kubenswrapper[5136]: I0320 09:10:06.410098 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c69bbe6-8752-4d39-b2e4-2eab9134dbda" path="/var/lib/kubelet/pods/2c69bbe6-8752-4d39-b2e4-2eab9134dbda/volumes" Mar 20 09:10:08 crc kubenswrapper[5136]: I0320 09:10:08.400907 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:10:08 crc kubenswrapper[5136]: E0320 09:10:08.401190 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:10:19 crc kubenswrapper[5136]: I0320 09:10:19.396694 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:10:19 crc kubenswrapper[5136]: E0320 09:10:19.397422 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" 
podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:10:32 crc kubenswrapper[5136]: I0320 09:10:32.397522 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:10:32 crc kubenswrapper[5136]: E0320 09:10:32.398601 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:10:43 crc kubenswrapper[5136]: I0320 09:10:43.396745 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:10:43 crc kubenswrapper[5136]: E0320 09:10:43.397547 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:10:44 crc kubenswrapper[5136]: I0320 09:10:44.227547 5136 scope.go:117] "RemoveContainer" containerID="35e33276bd939043cf0f403b9a2e455c0ebe9937a874e7a190c199a2c2c31266" Mar 20 09:10:57 crc kubenswrapper[5136]: I0320 09:10:57.397261 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:10:57 crc kubenswrapper[5136]: E0320 09:10:57.398189 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:11:09 crc kubenswrapper[5136]: I0320 09:11:09.396526 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:11:09 crc kubenswrapper[5136]: E0320 09:11:09.397072 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:11:21 crc kubenswrapper[5136]: I0320 09:11:21.396802 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:11:21 crc kubenswrapper[5136]: E0320 09:11:21.397673 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:11:32 crc kubenswrapper[5136]: I0320 09:11:32.396984 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:11:32 crc kubenswrapper[5136]: E0320 09:11:32.397989 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:11:44 crc kubenswrapper[5136]: I0320 09:11:44.286728 5136 scope.go:117] "RemoveContainer" containerID="cc5de32d3b2f3f78d2de7c2c04c373c815b2dc521f0ac4c71c7880cc929ccffe" Mar 20 09:11:44 crc kubenswrapper[5136]: I0320 09:11:44.397470 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:11:44 crc kubenswrapper[5136]: E0320 09:11:44.397707 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:11:57 crc kubenswrapper[5136]: I0320 09:11:57.397026 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:11:57 crc kubenswrapper[5136]: E0320 09:11:57.397714 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.139446 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566632-8vsxg"] Mar 20 09:12:00 crc 
kubenswrapper[5136]: E0320 09:12:00.140010 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e61a3995-2869-48ee-b013-6698bf7a7ec3" containerName="oc" Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.140025 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e61a3995-2869-48ee-b013-6698bf7a7ec3" containerName="oc" Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.140182 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e61a3995-2869-48ee-b013-6698bf7a7ec3" containerName="oc" Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.140676 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566632-8vsxg" Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.143712 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.143862 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.147415 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.159097 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566632-8vsxg"] Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.243006 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5dwq\" (UniqueName: \"kubernetes.io/projected/626f6cf8-8639-4b3a-a616-d9e67bcfed6a-kube-api-access-g5dwq\") pod \"auto-csr-approver-29566632-8vsxg\" (UID: \"626f6cf8-8639-4b3a-a616-d9e67bcfed6a\") " pod="openshift-infra/auto-csr-approver-29566632-8vsxg" Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.344426 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-g5dwq\" (UniqueName: \"kubernetes.io/projected/626f6cf8-8639-4b3a-a616-d9e67bcfed6a-kube-api-access-g5dwq\") pod \"auto-csr-approver-29566632-8vsxg\" (UID: \"626f6cf8-8639-4b3a-a616-d9e67bcfed6a\") " pod="openshift-infra/auto-csr-approver-29566632-8vsxg" Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.366619 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5dwq\" (UniqueName: \"kubernetes.io/projected/626f6cf8-8639-4b3a-a616-d9e67bcfed6a-kube-api-access-g5dwq\") pod \"auto-csr-approver-29566632-8vsxg\" (UID: \"626f6cf8-8639-4b3a-a616-d9e67bcfed6a\") " pod="openshift-infra/auto-csr-approver-29566632-8vsxg" Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.458926 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566632-8vsxg" Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.849312 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566632-8vsxg"] Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.858050 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 09:12:01 crc kubenswrapper[5136]: I0320 09:12:01.635398 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566632-8vsxg" event={"ID":"626f6cf8-8639-4b3a-a616-d9e67bcfed6a","Type":"ContainerStarted","Data":"49448043a17356eed0644eeff019252729dc712397ebbb1aa5df5ad7accb2043"} Mar 20 09:12:02 crc kubenswrapper[5136]: I0320 09:12:02.642826 5136 generic.go:334] "Generic (PLEG): container finished" podID="626f6cf8-8639-4b3a-a616-d9e67bcfed6a" containerID="04787b0e292ba80df64d898abc8997ab8fd28e7f944e47ef2ca1271229038838" exitCode=0 Mar 20 09:12:02 crc kubenswrapper[5136]: I0320 09:12:02.642868 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566632-8vsxg" 
event={"ID":"626f6cf8-8639-4b3a-a616-d9e67bcfed6a","Type":"ContainerDied","Data":"04787b0e292ba80df64d898abc8997ab8fd28e7f944e47ef2ca1271229038838"} Mar 20 09:12:03 crc kubenswrapper[5136]: I0320 09:12:03.916126 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566632-8vsxg" Mar 20 09:12:03 crc kubenswrapper[5136]: I0320 09:12:03.998229 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5dwq\" (UniqueName: \"kubernetes.io/projected/626f6cf8-8639-4b3a-a616-d9e67bcfed6a-kube-api-access-g5dwq\") pod \"626f6cf8-8639-4b3a-a616-d9e67bcfed6a\" (UID: \"626f6cf8-8639-4b3a-a616-d9e67bcfed6a\") " Mar 20 09:12:04 crc kubenswrapper[5136]: I0320 09:12:04.003898 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/626f6cf8-8639-4b3a-a616-d9e67bcfed6a-kube-api-access-g5dwq" (OuterVolumeSpecName: "kube-api-access-g5dwq") pod "626f6cf8-8639-4b3a-a616-d9e67bcfed6a" (UID: "626f6cf8-8639-4b3a-a616-d9e67bcfed6a"). InnerVolumeSpecName "kube-api-access-g5dwq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:12:04 crc kubenswrapper[5136]: I0320 09:12:04.100032 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5dwq\" (UniqueName: \"kubernetes.io/projected/626f6cf8-8639-4b3a-a616-d9e67bcfed6a-kube-api-access-g5dwq\") on node \"crc\" DevicePath \"\"" Mar 20 09:12:04 crc kubenswrapper[5136]: I0320 09:12:04.668300 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566632-8vsxg" event={"ID":"626f6cf8-8639-4b3a-a616-d9e67bcfed6a","Type":"ContainerDied","Data":"49448043a17356eed0644eeff019252729dc712397ebbb1aa5df5ad7accb2043"} Mar 20 09:12:04 crc kubenswrapper[5136]: I0320 09:12:04.668669 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49448043a17356eed0644eeff019252729dc712397ebbb1aa5df5ad7accb2043" Mar 20 09:12:04 crc kubenswrapper[5136]: I0320 09:12:04.668328 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566632-8vsxg" Mar 20 09:12:04 crc kubenswrapper[5136]: I0320 09:12:04.978539 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566626-v5t2c"] Mar 20 09:12:04 crc kubenswrapper[5136]: I0320 09:12:04.984268 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566626-v5t2c"] Mar 20 09:12:06 crc kubenswrapper[5136]: I0320 09:12:06.407326 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0" path="/var/lib/kubelet/pods/c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0/volumes" Mar 20 09:12:11 crc kubenswrapper[5136]: I0320 09:12:11.396645 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:12:11 crc kubenswrapper[5136]: E0320 09:12:11.397485 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"